TemplateSyntaxError


ejmu...@gmail.com

Jul 31, 2015, 9:29:46 AM
to PANDA Project Users
Hi all, I'm trying to manually upload a fairly large dataset (100ish MB), but PANDA won't have it. In the logs, it looks like PANDA is trying to send a disk-space alert because the server is at about 90% capacity; however, there is enough available space for this dataset. My educated guess is that PANDA is trying to warn about low disk space, but something is wrong with that alert's email template, and it's choking the system. Has anyone had a similar experience? Am I on the right track? How can I solve this?

Here's the stack trace from the log:


[ERROR] celery: Task panda.tasks.cron.run_admin_alerts[1454b8d3-6b3f-4544-9ac2-5855e7ff0697] raised exception: TemplateSyntaxError(u"Invalid block tag: 'blocktrans', expected 'elif', 'else' or 'endif'",)

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/execute/trace.py", line 181, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/opt/panda/panda/tasks/run_admin_alerts.py", line 85, in run
    email_message = get_email_body_template('disk_space_alert').render(context)
  File "/opt/panda/panda/utils/notifications.py", line 16, in get_email_body_template
    return get_template('/'.join(['notifications', prefix, 'email_body.txt']))
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 145, in get_template
    template, origin = find_template(template_name)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 134, in find_template
    source, display_name = loader(name, dirs)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 42, in __call__
    return self.load_template(template_name, template_dirs)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 48, in load_template
    template = get_template_from_string(source, origin, template_name)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 156, in get_template_from_string
    return Template(source, origin, name)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 125, in __init__
    self.nodelist = compile_string(template_string, origin)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 153, in compile_string
    return parser.parse()
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 267, in parse
    compiled_result = compile_func(self, token)
  File "/usr/local/lib/python2.7/dist-packages/django/template/defaulttags.py", line 900, in do_if
    nodelist = parser.parse(('elif', 'else', 'endif'))
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 265, in parse
    self.invalid_block_tag(token, command, parse_until)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 320, in invalid_block_tag
    (command, get_text_list(["'%s'" % p for p in parse_until])))
TemplateSyntaxError: Invalid block tag: 'blocktrans', expected 'elif', 'else' or 'endif'
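For what it's worth, in Django this exact message usually means a {% blocktrans %} tag is being parsed without the i18n tag library loaded -- that is, the template uses {% blocktrans %} inside an {% if %} block but is missing {% load i18n %} at the top. A quick check, as a sketch (the template root is a guess based on the paths in the trace):

# Look for a {% load i18n %} line in the disk-space-alert email template.
# The absolute path is an assumption; adjust to your install's template dir.
grep -n 'load i18n' /opt/panda/panda/templates/notifications/disk_space_alert/email_body.txt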



Mike Stucka

Jul 31, 2015, 10:46:45 AM
to panda-pro...@googlegroups.com
Random thought, because this killed a PANDA instance of mine -- have you checked inode capacity?

Try this from the command line:
df -i

Either way, after that command, try:
sudo apt-get autoremove
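
If it's the same thing that bit me, df -i is the tell: gigabytes still free, but IUse% pinned at 100% on the root filesystem. Purely hypothetical illustration (the device name and counts are invented):

# Hypothetical output -- device and numbers are made up.
# The giveaway is IUse% at 100% while df -h still shows free space.
$ df -i /
Filesystem     Inodes   IUsed IFree IUse% Mounted on
/dev/xvda1    1048576 1046733  1843  100% /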


Mike, wondering how legit this is, or if everything looks like a nail because I'm holding a hammer

--
26.58584, -80.16876 icbm
“A computer lets you make more mistakes faster than any invention in human history, with the possible exceptions of handguns and tequila.” -- Mitch Ratcliffe

ejmu...@gmail.com

Jul 31, 2015, 12:07:26 PM
to PANDA Project Users, stu...@whitedoggies.com
Thanks -- I never knew about inodes; that's good to know.

When I run df -i, it says inode use is at 100% on the main partition, but there are still 2,620 free inodes there. I can't imagine PANDA needs to create 2,620 new files for one dataset, right? I also have a partition mounted at /opt/solr/panda/solr with 99% of its inodes free.
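
One rough way to see where the inodes actually went is to tally files per top-level directory -- a sketch, not PANDA-specific (old kernel headers, a common culprit, live under /usr/src):

# Sketch: count files per top-level directory on the root filesystem.
# -xdev keeps find from crossing into other mounted partitions.
sudo find / -xdev -type f | cut -d/ -f2 | sort | uniq -c | sort -rn | head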

Mike Stucka

Jul 31, 2015, 12:18:10 PM
to panda-pro...@googlegroups.com
No idea -- but running
sudo apt-get autoremove

will probably clean out a bunch of old kernel header files you'll never need and bring that inode count way down. If you purge unneeded stuff, you're not doing any harm and you may fix the problem; if not, you can rule it out as a cause.
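
If you want to see what's eligible before purging anything, here's a sketch (the globs assume standard Ubuntu kernel package names):

# Keep the running kernel (shown by uname -r); older installed
# images and headers are the usual candidates for removal.
uname -r
dpkg -l 'linux-image-*' 'linux-headers-*' | grep '^ii'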


ejmu...@gmail.com

Jul 31, 2015, 1:06:36 PM
to PANDA Project Users, stu...@whitedoggies.com
Hmm -- when I run sudo apt-get autoremove -f, it errors out while unpacking the most current headers, saying there isn't enough disk space, but df -h tells me I have more than a gig available.



Mike Stucka

Jul 31, 2015, 1:46:45 PM
to panda-pro...@googlegroups.com
Oh, man. Now I'm more sure you're in the hell that I was in.

The no-disk-space error will look exactly the same whether you're actually out of space or just out of inodes.

I'd found a convoluted way that involved moving files around to try to get at my inode problem. Now I've found this post:
http://askubuntu.com/questions/575793/apt-get-unable-to-autoremove-packages-filling-up-inodes

So... try this. Fire up apt-get autoremove again, just to see some of the packages it wants to purge. Look for one of the oldest kernel versions in there -- say it's 2.34.56-78. Write it down, then answer no or hit Ctrl-C at the apt-get prompt.

Then try following the answer from that askubuntu thread. Purge one or two of the header packages (each of which is actually several thousand files) and apt-get autoremove may start working.
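
Something like this, as a sketch -- it reuses the made-up 2.34.56-78 version from above, and the -generic flavor is an assumption, so substitute whatever old versions actually show up on your box:

# Purge one old headers package (thousands of files), then retry autoremove.
sudo apt-get purge linux-headers-2.34.56-78 linux-headers-2.34.56-78-generic
sudo apt-get autoremove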

And if you get apt-get autoremove working, then I'm pretty sure your other problem will go away. Either way, you've done something that *really* needed to be done.


Mike

ejmu...@gmail.com

Jul 31, 2015, 3:30:26 PM
to PANDA Project Users, stu...@whitedoggies.com
Wow... I went from 2,000 free inodes to over 300,000. Now when I do the manual import I'm no longer getting an error in the panda log; however, it does take down the web interface -- users get a 500 status. Is that normal when importing a dataset manually? Will it correct itself? Sorry, I'm pretty new to PANDA, and I inherited this server preconfigured.
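
A sketch for digging into those 500s while the import runs, assuming a stock nginx front end (the log path is an assumption -- an install may keep its logs elsewhere):

# Watch the web server's error log during the import to see what's
# actually failing behind the 500 responses.
sudo tail -f /var/log/nginx/error.log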

Thanks again for all your help.



ejmu...@gmail.com

Jul 31, 2015, 3:33:21 PM
to PANDA Project Users, stu...@whitedoggies.com, ejmu...@gmail.com
Never mind -- as soon as I sent that, the server came back up and the dataset import is in progress. I owe you a beer if you're ever in Tampa.