Second crash after 11 days


Steve2Q

Aug 6, 2018, 4:51:41 PM
to weewx-user
Hello. I had a crash 11 days 15 hours ago. At that time I rebooted the system and all appeared OK (I posted a log at the time). It has just crashed again after 11 days 15 hours, which makes me a bit suspicious that something more than a bad card is responsible. Attached is a log from today. The last file was ftp'd at 7:56:22. Then at 15:35:42 the date changed to Dec 31. I then rebooted the system at 15:36:30.

Attached is the log.

Thanks, Steve
syslog

Glenn McKechnie

Aug 6, 2018, 6:47:19 PM
to weewx-user
Hi Steve,

Last file to be ftp'd...
Aug  6 07:56:22 raspi2 weewx[2463]: ftpgenerator: ftp'd 41 files in 9.45 seconds

weewx is still running...
Aug  6 07:58:01 raspi2 weewx[2463]: restx: PWSWeather: Published record 2018-08-06 07:58:00 EDT (1533556680)

It's all downhill from here.
What does this script do? It would help to see its contents, as it seems
to be actioned often; the kernel returns an error message in the next
log entry, and in fact kills weewx...
Aug  6 08:00:03 raspi2 /USR/SBIN/CRON[22069]: (pi) CMD (/usr/bin/sudo -H /usr/local/bin/checkwifi.sh >> /dev/null 2>&1)
[...]
Aug  6 08:01:44 raspi2 ifplugd(wlan0)[1762]: Executing '/etc/ifplugd/ifplugd.action wlan0 down'.
Aug  6 08:01:54 raspi2 ifplugd(wlan0)[1762]: Program executed successfully.
Aug  6 08:05:48 raspi2 kernel: [1005647.481952] INFO: task kworker/2:0:21602 blocked for more than 120 seconds.
Aug  6 08:05:50 raspi2 kernel: [1005647.481967]       Not tainted 4.9.33-v7+ #1012

And here's the killer. Out of memory error. Weewx gets booted by the kernel...
Aug  6 08:05:54 raspi2 kernel: [1005653.613593] [26541]    33 26541    56956        9      35       0      490             0 apache2
Aug  6 08:05:54 raspi2 kernel: [1005653.613608] [22228]     0 22228     1067      141       6       0       10             0 cron
Aug  6 08:05:54 raspi2 kernel: [1005653.613615] Out of memory: Kill process 2463 (weewxd) score 899 or sacrifice child
Aug  6 08:05:54 raspi2 kernel: [1005653.613677] Killed process 2463 (weewxd) total-vm:1059160kB, anon-rss:873956kB, file-rss:1224kB, shmem-rss:0kB
Aug  6 08:05:54 raspi2 /USR/SBIN/CRON[22232]: (pi) CMD (/usr/bin/sudo -H /usr/local/bin/checkwifi.sh >> /dev/null 2>&1)

weewx is never restarted after this shutdown, though the system seems to be working as before. (checkwifi is still doing its thing!)

I'd set up pmon (in the weewx examples directory) and see if weewx memory usage is indeed running away on you. (It will also show general memory usage.)
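pmon graphs this sort of thing over time; for a quick one-off spot check of the same number, you can read the resident set size straight from /proc. A Linux-only sketch (the function name is made up; substitute the real weewxd PID):

```shell
# Sample the resident memory (RSS) of a process from /proc/<pid>/status.
# Run it every few minutes and compare samples -- a steadily growing
# number is the "running away" that pmon would graph for you.
sample_rss_kb() {
    # /proc/<pid>/status carries a line like "VmRSS:    12345 kB"
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Illustration only: sample this shell's own RSS. For weewx you would use
# something like: sample_rss_kb "$(pgrep -o -f weewxd)"
sample_rss_kb $$
```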




Cheers
 Glenn

rorpi - read only raspberry pi & various weewx addons
https://github.com/glennmckechnie

--
You received this message because you are subscribed to the Google Groups "weewx-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to weewx-user+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Andrew Milner

Aug 6, 2018, 9:36:45 PM
to weewx-user
I'd see if it runs better without the checkwifi script.  My first suspect is a memory leak caused by the script.  Why do you need to keep checking wifi anyway?

Steve2Q

Aug 7, 2018, 12:20:09 PM
to weewx-user
Glenn and Andrew: thanks for replying. I renamed the checkwifi script file, and commented out the line in cron which ran the script. Is it OK to just comment out the line as I did, or is it better to delete it entirely from cron?

I added that script a long time ago when I was having wifi problems...It was supposed to reboot the pi if the link failed, but I don't think it ever worked properly. As a matter of fact, until you guys pointed it out, I had forgotten about it.

Steve


vince

Aug 7, 2018, 12:28:31 PM
to weewx-user
On Tuesday, August 7, 2018 at 9:20:09 AM UTC-7, Steve2Q wrote:
Glenn and Andrew: thanks for replying. I renamed the checkwifi script file, and commented out the line in cron which ran the script. Is it OK to just comment out the line as I did, or is it better to delete it entirely from cron?



commenting it out is fine... 

Steve2Q

Sep 5, 2018, 11:31:30 AM
to weewx-user
Hello. I am back from vacation, and I want to try to track down this problem. From this discussion, I think I would like to set up pmon to see what is happening with regard to any memory issues. I have some questions about installing pmon.

1. I am running Weewx 3.8.0. In /examples/pmon the changelog says the included version is 0.4, dated 24 April 2016. Is this the most up-to-date version?

2. Following the manual install instructions at https://github.com/weewx/weewx/tree/master/examples/pmon I have copied pmon.py and the /skins to the recommended locations. The next step is adding [ProcessMonitor]. Is there a specific place that I should add this stanza in weewx.conf?

3. Is pmon a process that can run all the time? If it does indeed find a memory problem and I correct it, is there a way to turn off pmon?

Thanks, Steve

vince

Sep 5, 2018, 12:46:46 PM
to weewx-user
On Wednesday, September 5, 2018 at 8:31:30 AM UTC-7, Steve2Q wrote:
1. I am running Weewx 3.8.0. In /examples/pmon the changelog says the included version is 0.4, dated 24 April 2016. Is this the most up-to-date version?


The git repo says the last change was in May this year.  So yes.
 
2. Following the manual install instructions at https://github.com/weewx/weewx/tree/master/examples/pmon I have copied pmon.py and the /skins to the recommended locations. The next step is adding [ProcessMonitor]. Is there a specific place that I should add this stanza in weewx.conf?

It's an extension.  The extension installer should add a template stanza for you.

If you want to work hard and do it manually then you need to follow the many multiple steps in the docs there.  No, there's no place special in weewx.conf - some folks put it at the bottom to be able to find it, others don't.  Weewx doesn't care.

 
3. Is pmon a process that can run all the time? If it does indeed find a memory problem and I correct it, is there a way to turn off pmon?


It's an extension.  If you want to turn it off use the extension installer to uninstall it.

If you manually install it, then you need to manually remove stuff similar to how you added it.
 

Steve Meltz

Sep 5, 2018, 12:51:02 PM
to weewx...@googlegroups.com
Thanks, Vince. I am going to do the manual install as I could not get the extension installer to work. Maybe a syntax error, or my just not understanding how it works in this case. I have used the installer successfully in the past.


vince

Sep 5, 2018, 12:56:20 PM
to weewx-user
On Wednesday, September 5, 2018 at 9:51:02 AM UTC-7, Steve2Q wrote:
Thanks, Vince. I am going to do the manual install as I could not get the extension installer to work. Maybe a syntax error, or my just not understanding how it works in this case. I have used the installer successfully in the past.


Then edit away and restart weewx.  Make sure you have debug=1 (at least initially) just in case you typo something. 

Steve Meltz

Sep 5, 2018, 3:28:03 PM
to weewx...@googlegroups.com
Ended up being a syntax error. Thanks again, Vince


Steve2Q

Sep 9, 2018, 1:59:45 PM
to weewx-user
Ok..I have pmon running and attached are the 3 most recent graphs. I don't know how to interpret them except that it looks like memory usage is going up constantly.

Steve
dayprocmem.png
monthprocmem.png
weekprocmem.png

vince

Sep 9, 2018, 2:55:26 PM
to weewx-user
On Sunday, September 9, 2018 at 10:59:45 AM UTC-7, Steve2Q wrote:
Ok..I have pmon running and attached are the 3 most recent graphs. I don't know how to interpret them except that it looks like memory usage is going up constantly.



Do you have multiple extensions loaded or something?
I've seen the Forecast extension eat memory like crazy with some configurations of that extension.

I'd suggest getting your system stable with just the default skin and 'no' extensions before adding a bunch of stuff to it... a weewx system should typically be very stable and not have a growing memory footprint.


Steve2Q

Sep 9, 2018, 3:09:32 PM
to weewx-user
Vince: here are the extensions that are running (ignore as3935..I removed it a long time ago but for some reason wee_extension --list keeps showing it there).

vince

Sep 9, 2018, 4:35:05 PM
to weewx-user
On Sunday, September 9, 2018 at 12:09:32 PM UTC-7, Steve2Q wrote:
Vince: here are the extensions that are running (ignore as3935..I removed it a long time ago but for some reason wee_extension --list keeps showing it there).

Again - if you have an unstable system, the best way to get it to 'be' stable is to run it as minimally as possible first.  Then gradually add things.

Turn everything off other than the raspi os and the most minimal weewx possible.  All your cron jobs.  All the software you added that you run at boot. Everything.  Run the bare minimum to get things baselined as stable.
 

Steve Meltz

Sep 9, 2018, 4:37:18 PM
to weewx...@googlegroups.com
OK, sounds like a project for the weekend, or after the incoming bad weather. I don't want to miss anything!


gjr80

Sep 9, 2018, 7:59:49 PM
to weewx-user
Steve,

Been there and done that. These memory issues can be a pain to track down. I had one a while back; it was a case of memory usage ramping up each hour, and the slope of the plot was so constant you could set your watch by it. Tracking down the culprit can be a time-consuming process, as I found it took a few hours from a restart before I could be sure the memory usage was on the way up; there used to be a bit of up and down after startup before the constant increase started. As Vince said, one approach is to go back to a bare-bones install and add things back one at a time till you find the culprit. Unless you have done anything out of the norm with your RPi I would not worry too much about winding back the operating system; start with WeeWX, it's the obvious candidate.

In terms of winding back WeeWX, sure, you can uninstall your extensions, but then you risk losing config settings. Another way to achieve the same effect is to disable services in [Engine] [[Services]]; that way you keep your config and just turn services off. Just comment out/remove the services you wish to disable. If you want to go back to a bare-bones setup, or want to know what services you have on top of a bare-bones install, have a look at/compare to the bare-bones weewx.conf on GitHub. Of course, take a backup copy of weewx.conf before you start so you can easily go back to how it was before.
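For illustration, disabling a service this way is just a matter of editing the relevant list. A sketch of the stanza (the stock weewx 3.x report services, plus the two add-on services mentioned elsewhere in this thread):

```ini
[Engine]
    [[Services]]
        # Disable an add-on by removing it from its list; keeping the old
        # line commented out makes it easy to restore later.
        # report_services = weewx.engine.StdPrint, weewx.engine.StdReport, user.alarm.MyAlarm, user.rtgd.RealtimeGaugeData
        report_services = weewx.engine.StdPrint, weewx.engine.StdReport
```

The rest of the extension's own configuration stanza elsewhere in weewx.conf is untouched, which is what makes this reversible.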

Another approach, rather than going back to a bare-bones install, is to disable one added-on service at a time until you find the culprit. You can do this singly or cumulatively.

Remember also that even with this problem WeeWX will still be doing everything it should, so you are not losing data; it's just that WeeWX eventually runs out of memory. My stopgap solution until I solved my issue was setting a reminder to restart WeeWX every so many days (I think it was 4). I am not a fan of forced restarts so did it manually each time.

Gary

Glenn McKechnie

Sep 10, 2018, 1:27:27 AM
to weewx-user
Further to what has been said already.

My experience with that sort of memory usage has been with the PIL,
Pillow (or whatever it's now called) image library.
In one case (a CentOS install) replacing it with a current pip version
fixed it outright.
In my case (Debian) I could never get on top of it. Changing to a pip
install of the image library made some difference, but not enough to
stop the excessive usage; the slow remorseless climb to oblivion
continued.

I suspect it wasn't just one SLE or other weewx addition, but the load
from them all. I could see memory climbing, and I could identify where
it was happening (syslog, ANTI_ALIAS, draw), but it was all so variable
that I couldn't get on top of it. Most of it pointed to the image
generation; perhaps I generate too many, and they are too large? If I
didn't generate the images the problem disappeared - but I want the
images, and in particular the configuration I have. (If the wireless
NBN is behaving -- http://203.213.243.61/weewx/ )

Anyway, Life caught up with me and I could no longer justify the time
to run around in ever diminishing circles trying to nail it; so I
resorted to a hack.

I don't expect this to be a solution, but it might offer some insight.
It bypasses the problem in the cleanest way I could come up with and
confirms the problem (mine) is in the report generation cycle. The
result has been more than satisfactory - it's basically stable. (see
attached image, 200K in 30 days). So stable that I haven't needed to
get back to it; but then, maybe I don't want to get in over my head
again. ;-)

Along the lines of Gary's advice re: turning off services.
I removed (commented out) from weewx.conf the call to
'weewx.engine.StdReport', under [Engine][[Services]]. That stops all
reports.
I then edited engine.py to generate a suitable, unique message to
syslog at the conclusion of the loop cycle (ie: the start of what
would have been the report cycle). This ensured I wasn't accessing
weewx 'out of cycle' and kept it as in-house as possible.
With that done, a small plugin was added to rsyslog that calls
wee_reports every time that log message appears. It generates the
needed reports, exactly where weewx did them before, and most
importantly returns all the memory on completion.
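The rsyslog side of that hack might look something like the sketch below. This is not Glenn's actual config: the trigger string and wrapper script path are invented, and the message text must match whatever you log from engine.py.

```
# /etc/rsyslog.d/30-weewx-reports.conf (sketch)
module(load="omprog")

# When weewx logs the unique end-of-loop message, run report generation
# out-of-process; its memory is returned to the OS when it exits.
if $programname startswith "weewx" and $msg contains "loop cycle complete" then {
    action(type="omprog" binary="/usr/local/bin/run-wee-reports.sh")
}
```

The design point is that wee_reports is a short-lived process, so any memory the report/image generation grabs is freed on every cycle rather than accumulating inside the long-running weewxd.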

Good luck Steve. I hope you get it pinpointed and fixed, quickly and easily.



Cheers
Glenn

rorpi - read only raspberry pi & various weewx addons
https://github.com/glennmckechnie


memory-usage-weewx-Screenshot_2018-09-10_14-47-18.png

Steve2Q

Sep 11, 2018, 2:25:20 PM
to weewx-user
Gary, Vince, and Glenn. So far I upgraded Weewx to 3.8.2, and removed the [Engine] entries for user.alarm.Myalarm and user.rtgd.RealtimeGaugeData as suggested by Gary. I also did updates and upgrades to the RPi running Weewx.

I am now letting pmon run and and will watch it over the next several day.

Not sure if it is relevant but I should add that my archive interval is 2 minutes, and when I have Steel Gauges running my update time is 2 seconds (the loop time from my Ultimeter).

If the memory usage keeps going up, I guess that next step is a clean install.??

Steve

vince

Sep 11, 2018, 2:34:24 PM
to weewx-user
On Tuesday, September 11, 2018 at 11:25:20 AM UTC-7, Steve2Q wrote:
Not sure if it is relevant but I should add that my archive interval is 2 minutes, and when I have Steel Gauges running my update time is 2 seconds (the loop time from my Ultimeter).



Hmmm... I'd suggest a 2-sec update time isn't going to be too easy on the Pi; those things aren't particularly fast, to say the least.

Leave things as-is and get a baseline of how it behaves as-is.

 

gjr80

Sep 11, 2018, 10:41:25 PM
to weewx-user
A 2 second update should not be too much for an RPi2, though of course it depends on what other system load there is. If it were a problem I would expect you would be seeing other symptoms (high CPU load, SteelSeries gauge updates being missed/erratic, long execution times for reports) rather than just memory usage ramping up.

Gary

Steve Meltz

Sep 12, 2018, 8:22:43 AM
to weewx...@googlegroups.com
Gary, I was thinking the same thing. None of the actions directly related to Weewx ever became erratic, so there must be something else involved.

Steve


Steve2Q

Oct 3, 2018, 10:58:46 AM
to weewx-user
Following up on this problem. I removed these two items from the [Engine] [[Services]] report services: user.alarm.MyAlarm and user.rtgd.RealtimeGaugeData

Weewx has now been running for 13 days 20 hours with the Pi running for 19 days 13 hours. Before I removed the two items Weewx was crashing at 11 days xx hours consistently.

It does appear that memory usage is still going up, but more slowly (I don't know how to interpret the numbers from pmon, however). Attached are the latest charts from pmon. Does it look like the system is going to go down again, just at a later date, or is this normal behavior? I am planning on adding the removed report services back one at a time to see if either is the culprit in accelerating the memory usage, but I want to wait until the experts weigh in.

Thanks, Steve



dayprocmem.png
index.html
monthprocmem.png
weekprocmem.png

gjr80

Oct 4, 2018, 9:23:09 AM
to weewx-user
Steve,

Looks to me like you are still going to crash, just a bit later than previously. In fact, if you were able to look at your previous plots from when you had a crash at 11 days xx hours, you could probably make a pretty good guesstimate as to when your system will crash: it will be when memory usage hits that magic value. I think you need to keep looking; those memory plots should be near a flat line within a matter of hours after WeeWX startup. There is not much point adding things back until you get that flat line.

Gary

Steve Meltz

Oct 4, 2018, 12:25:53 PM
to weewx...@googlegroups.com
Thanks Gary. Is it possible that a small cron job I have set up could do this? Every day at 0001 I have the archive.sdb zipped and uploaded to a folder on my web site. Other than that, are there any tools available that I can use to track down the leak?
Steve 


vince

Oct 4, 2018, 3:27:20 PM
to weewx-user
On Thu, Oct 4, 2018, 9:23 AM gjr80 <gjrod...@gmail.com> wrote:
Looks to me like you are still going to crash, just a bit later than previously. In fact, if you were able to look at your previous plots from when you had a crash at 11 days xx hours, you could probably make a pretty good guesstimate as to when your system will crash: it will be when memory usage hits that magic value. I think you need to keep looking; those memory plots should be near a flat line within a matter of hours after WeeWX startup. There is not much point adding things back until you get that flat line.



Again, you need to baseline your system with 'nothing' running but weewx and your station, then add things back in one-by-one.

FWIW, I have a pi-zero here running the latest Raspbian, on which I just installed my memory extension (link); it has just vanilla weewx 3.8.2 plus the weatherflow-UDP driver. I'll let it run for a day or so and see where memory usage stabilizes on a vanilla pi, just as another data point.

Steve Meltz

Oct 4, 2018, 4:57:48 PM
to weewx...@googlegroups.com
Gary and Vince: after sending my previous post, I started reading some more about memory leaks. It appears that cron jobs (if poorly coded) can sometimes be the cause. With that in mind, I am attaching both the cron and the process it starts, so maybe you can pick up something. I am also having a problem with the crontab; if I issue the command crontab -l I get the following at the end:

The cron:
# m h  dom mon dow   command
MAILTO=XX...@gmail.com
59 23 * * * /home/bin/./weewxbackup

I want to edit this crontab to stop running the following process, but when I issue crontab -e I get a generic crontab, not the one I wish to edit.


The process (weewxbackup in /home/bin)
#!/bin/bash
#This script zips and backs up to web site
#Following line added to prevent "TERM environment variable not set" error
export TERM=${TERM:-dumb}
clear

echo "Backing up weewx.sdb"

cd /home/weewx/archive
sudo cp weewx.sdb weewxpi2.cpy
sudo gzip weewxpi2.cpy
sudo lftp -e 'put /home/weewx/archive/weewxpi2.cpy.gz; bye' -u xxxxx,xxxxxxxftp.xxxabcd.org
sudo rm weewxpi2.cpy.gz

echo "Done - weewxpi2.sdb zipped and uploaded to xxxabcd.org"







vince

Oct 4, 2018, 5:59:15 PM
to weewx-user
On Thursday, October 4, 2018 at 1:57:48 PM UTC-7, Steve2Q wrote:
Gary and Vince: after sending my previous post, I started reading some more about memory leaks. It appears that cron jobs (if poorly coded) can sometimes be the cause.

Unlikely if they just run and exit.
 
With that in mind, I am attaching both the cron and the process it starts, so maybe you can pick up something. I am also having a problem with the crontab; if I issue the command crontab -l I get the following at the end:

The cron:
# m h  dom mon dow   command
MAILTO=XX...@gmail.com
59 23 * * * /home/bin/./weewxbackup


ok - you are getting the crontab associated with the account you're using when you run the crontab command. The actual file is likely /var/spool/cron/crontabs/your_user_name_here

I want to edit this crontab to stop running the following process, but when I issue crontab -e I get a generic crontab, not the one I wish to edit.


I've never seen that happen.   I could potentially see you getting permission denied if you are running as user 'pi' and trying to see or edit the crontab of user 'root', but it would be very surprising if you're really seeing what you described.

Try "sudo ls -la /var/spool/cron/crontabs" and I'd expect it would look something like:

drwx-wx--T 2 root crontab 4096 Aug 17 11:10 .
drwxr-xr-x 3 root root    4096 Feb 25  2016 ..
-rw------- 1 pi   crontab 1275 Aug 17 11:08 pi
-rw------- 1 root crontab 1090 Aug 17 11:10 root

 
The process (weewxbackup in /home/bin)
#!/bin/bash
#This script zips and backs up to web site
#Following line added to prevent "TERM environment variable not set" error
export TERM=${TERM:-dumb}
clear

echo "Backing up weewx.sdb"

cd /home/weewx/archive
sudo cp weewx.sdb weewxpi2.cpy
sudo gzip weewxpi2.cpy
sudo lftp -e 'put /home/weewx/archive/weewxpi2.cpy.gz; bye' -u xxxxx,xxxxxxxftp.xxxabcd.org
sudo rm weewxpi2.cpy.gz

echo "Done - weewxpi2.sdb zipped and uploaded to xxxabcd.org"



I don't know what 'clear' will do in a non-interactive session, but it can't be good. It's probably throwing an error, but you're not capturing stderr in your crontab invocation, so we're flying blind here. Comment out the 'clear' line for starters.

Change your 'echo' commands to 'logger' and the lines will be written to your syslog. Just glancing at it, it is possible that the 'bye' to close the connection isn't really closing the lftp session, but it's hard to say for sure. I'd expect the other side to close the connection in a few minutes anyway, but lots of ISPs have odd implementations.
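Applied to the crontab entry quoted above, those suggestions would look something like this (the log file path is an invented example):

```
# m h  dom mon dow   command
# append the script's stdout/stderr to a log instead of discarding them:
59 23 * * * /home/bin/weewxbackup >> /home/pi/weewxbackup.log 2>&1
```

And inside weewxbackup, drop 'clear' and replace each 'echo' with something like `logger -t weewxbackup "Backing up weewx.sdb"`, so progress and errors land in syslog alongside the weewx entries.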

 

Steve Meltz

Oct 4, 2018, 7:29:16 PM
to weewx...@googlegroups.com
Vince..this is the output of sudo ls -la /var/spool/cron/crontabs

total 16
drwx-wx--T 2 root crontab 4096 Oct  4 16:52 .
drwxr-xr-x 3 root root    4096 Dec 31  1969 ..
-rw------- 1 pi   crontab 1151 Sep  5 09:35 pi
-rw------- 1 root crontab 1090 May 14  2015 root

This is the output of crontab -e

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#

I am following your suggestion of editing the weewxbackup process and I will let you know what happens.

Steve



vince

Oct 4, 2018, 8:32:52 PM
to weewx-user
After 7 hours up, my pi-zero with nothing extra has stabilized with:

57.7 MB total memory used by weewx
32.7 MB resident set size for weewx
 7.5 MB shared memory usage for weewx

 

Thomas Keffer

Oct 4, 2018, 9:05:14 PM
to weewx-user
After 701 days (!), my RPi-B has stabilized with

71.6 MB total
41.3 MB RSS
 6.0 MB shared

WeeWX can run with extraordinary long uptimes.

-tk


gjr80

Oct 4, 2018, 9:44:48 PM
to weewx-user
I really don't think this is an issue with any cron jobs, certainly not a backup script that runs independently of WeeWX at 2359 daily. I guess we have tried winding back the services being run so that it was easier to re-enable them later but the issue has remained. I think the next step will be to go back to a plain vanilla install and work our way up from a known good install.

But first, I don't seem to be able to find a weewx.conf anywhere from you, Steve. Could you post a sanitised weewx.conf? One with disabled services is fine.

Gary

Steve Meltz

Oct 4, 2018, 10:15:34 PM
to weewx...@googlegroups.com
Gary..attached is my weewx.conf file (called weewxa.conf just to keep it separated in my backup folder)

weewxa.conf

gjr80

Oct 5, 2018, 9:20:43 AM
to weewx-user
Hmmm, not much there, though it looks like a fairly well used weewx.conf!

I really think the best course of action is to start with a fresh install, copy over your old data and install cmon/pmon. Monitor your memory to see that it stabilises; I can't remember how long that took on my system that was playing up, but it was certainly within a day. Once you know you have a stable system, add one service/extension/customisation at a time, restarting WeeWX each time and only moving onto the next when you know your memory use is staying flat.

Sorry it's more messing around but I can't think of a better approach.

Gary

Steve2Q

Oct 13, 2018, 8:25:20 PM
to weewx-user
Hello all. A new followup. After quite a bit of back and forth with Glenn (thank you again!!) and trying a lot of different approaches, this is where I am at now. I did a complete reinstall of Raspbian Stretch (from raspberrypi.org), followed by a reinstall of Weewx. I installed Glenn's pmon+ to watch what is going on, but no other extensions or add-ons of my own doing. My biggest problem was forgetting to back up weewx.sdb, but fortunately that was still being done to my web server, so I only lost several hours. This is no longer being done as I have yet to make the necessary cron.

You can see the webpage at www.photokinetics.org/Weather and the pmon+ output at www.photokinetics.org/Weather/pmon+

I did a restart at approx 1600 because of a small change I made in index.html.tmpl and wxformulas.py but neither should have effected memory use.

As of now (2000 hours) it appears as if memory use has not yet leveled off, but hope springs eternal.

Steve2Q

Oct 15, 2018, 1:21:27 PM
to weewx-user
Memory use is still climbing after 24 hours. It appears from the pmon+ graph that it bumps up close to every 3 hours. Is it possible there is a problem with the Raspberry Pi itself, either hardware or the kernel?

vince

Oct 15, 2018, 5:23:40 PM
to weewx-user
On Monday, October 15, 2018 at 10:21:27 AM UTC-7, Steve2Q wrote:
Memory use is still climbing after 24 hours. It appears from the pmon+ graph that it bumps up close to every 3 hours. Is it possible there is a problem with the Raspberry Pi itself, either hardware or the kernel?

Anything's possible.

Here's my pi-zero with just the weatherflow-UDP extension driver and my memory measurement extension. Mine's slowly growing too, but not at a pace that concerns me. The total usage is still tiny regardless.

python-pil is version 4.0.0-4
kernel is 4.14.52+ as reported by `uname`
weewx is 3.8.2 installed via setup.py method
 




 
Screen Shot 2018-10-15 at 2.14.40 PM.png

Steve2Q

Oct 15, 2018, 10:31:07 PM
to weewx-user
Vince et al: I decided to stop pmon+ and am running cmon right now. I am letting it go overnight and will see how it looks in the morning. This is running on a brand new Pi3+ with a new install of Weewx.

Steve

Steve2Q

Oct 19, 2018, 3:28:09 PM
to weewx-user
A new followup: if you look at http://photokinetics.org/Weather/cmon/ it appears that the memory usage has leveled off after 3 days. I am not sure how to analyze the charts other than to see that it has stopped climbing. I will wait until tomorrow, and then start adding in some hardware (a "soft switch", see https://mausberry-circuits.myshopify.com/) followed by a few extensions, one at a time.

Any comments, or does this look normal at this point?

Thanks, Steve

tomn...@frontier.com

Oct 19, 2018, 8:12:41 PM
to weewx-user
I was going to say that if the thing that's growing memory is the memory monitor, the developers "have some 'splainin to do."
I haven't used pmon or cmon.  I tend to roll my own inferior solutions since at least at work they have to run in user space.
I'm more a fan of periodic snapshots than a daemon sort of thing. 
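A periodic-snapshot monitor along those lines can be a few lines of shell driven from cron; a sketch, where the log path and the weewxd process name are assumptions:

```shell
#!/bin/sh
# Append one timestamped memory sample per run; point cron at this script
# every few minutes and grep/plot the file later -- no daemon required.
LOGFILE=/tmp/weewx-mem.log

pid=$(pgrep -o -f weewxd || echo $$)   # fall back to this shell for illustration
rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
printf '%s pid=%s rss_kb=%s\n' "$(date '+%F %T')" "$pid" "$rss_kb" >> "$LOGFILE"
```

Because each run is a fresh short-lived process, the monitor itself can't be the thing that leaks, which sidesteps the concern above.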

Chris

Steve2Q

Nov 18, 2018, 9:00:51 PM
to weewx-user
Hello again: after October 19, I had my new Pi with a new install of weewx and Steel Gauges running. I had another crash today after almost 11 days of running; below is a snip from syslog, which continues like this until a hard reboot of the Pi. Any thoughts? Could Steel Gauges be causing this? Also attached are some results after the df command was issued; the files are named for days, hours, and minutes since the last reboot.

Nov 18 16:34:23 Pi3 weewx[472]: restx: PWSWeather: Published record 2018-11-18 16:34:00 EST (1542576840)
Nov 18 16:35:06 Pi3 weewx[472]: imagegenerator: Generated 11 images for SteelSeries in 106.90 seconds
Nov 18 16:36:24 Pi3 weewx[472]: manager: Added record 2018-11-18 16:36:00 EST (1542576960) to database 'weewx.sdb'
Nov 18 16:36:26 Pi3 weewx[472]: manager: Added record 2018-11-18 16:36:00 EST (1542576960) to daily summary in 'weewx.sdb'
Nov 18 16:36:30 Pi3 weewx[472]: restx: PWSWeather: Published record 2018-11-18 16:36:00 EST (1542576960)
Nov 18 16:36:31 Pi3 weewx[472]: engine: Launch of report thread aborted: existing report thread still running
Nov 18 16:38:09 Pi3 weewx[472]: ftpgenerator: ftp'd 40 files in 181.18 seconds
Nov 18 16:38:19 Pi3 weewx[472]: manager: Added record 2018-11-18 16:38:00 EST (1542577080) to database 'weewx.sdb'
Nov 18 16:38:22 Pi3 weewx[472]: manager: Added record 2018-11-18 16:38:00 EST (1542577080) to daily summary in 'weewx.sdb'
Nov 18 16:38:25 Pi3 weewx[472]: restx: PWSWeather: Published record 2018-11-18 16:38:00 EST (1542577080)
Nov 18 16:38:45 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:38:46 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:38:48 Pi3 systemd[1]: Failed to start Cleanup of Temporary Directories.
Nov 18 16:38:48 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Unit entered failed state.
Nov 18 16:38:49 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed with result 'resources'.
Nov 18 16:39:44 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:39:45 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:39:46 Pi3 systemd[1]: Failed to start Cleanup of Temporary Directories.
Nov 18 16:39:48 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed with result 'resources'.
Nov 18 16:39:50 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:39:51 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:39:52 Pi3 systemd[1]: Failed to start Cleanup of Temporary Directories.
Nov 18 16:39:54 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed with result 'resources'.
Nov 18 16:39:55 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:39:57 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:39:58 Pi3 systemd[1]: Failed to start Cleanup of Temporary Directories.
Nov 18 16:40:00 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed with result 'resources'.
Nov 18 16:40:02 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:40:03 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:40:05 Pi3 systemd[1]: Failed to start Cleanup of Temporary Directories.
Nov 18 16:40:06 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed with result 'resources'.
Nov 18 16:40:09 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:40:10 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:40:11 Pi3 systemd[1]: Failed to start Cleanup of Temporary Directories.
Nov 18 16:40:12 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed with result 'resources'.
Nov 18 16:40:15 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:40:16 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:40:19 Pi3 systemd[1]: Failed to start Cleanup of Temporary Directories.
Nov 18 16:40:20 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed with result 'resources'.
Nov 18 16:40:23 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to fork: Cannot allocate memory
Nov 18 16:40:24 Pi3 systemd[1]: systemd-tmpfiles-clean.service: Failed to run 'start' task: Cannot allocate memory
Nov 18 16:40:25 Pi3 systemd[1]: Failed to start Cleanup of Temporary



2d1h14m.JPG
10d5h52m.JPG

vince

unread,
Nov 18, 2018, 9:27:44 PM11/18/18
to weewx-user
On Sunday, November 18, 2018 at 6:00:51 PM UTC-8, Steve2Q wrote:
Hello again: after October 19, I had my new pi with a new install of weewx and steel gauges running. I had another crash today after almost 11 days of running and this is a snip from syslog which continues until a hard reboot of the pi: Any thoughts? Could Steel Gauges be causing this. Also attached are some results after the df command was issued. 

Well, looking at the 'week' and 'month' cmon plots at http://photokinetics.org/Weather/cmon/, you certainly seem to have quite a memory leak, but there is no data in the plots saying what is leaking.

Best I can suggest is to run "top" occasionally, perhaps once per day, and save the output and see if you can see who is using up memory.

An example from my pi-zero:

pi@zero:~ $ top
top - 18:10:42 up 18 days,  6:27,  1 user,  load average: 1.46, 1.34, 1.28
Tasks:  69 total,   2 running,  43 sleeping,   0 stopped,   0 zombie
%Cpu(s): 95.5 us,  4.5 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :   443892 total,    53052 free,   142484 used,   248356 buff/cache
KiB Swap:   102396 total,   102140 free,      256 used.   218624 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 4156 root      20   0   14916  10940   6452 R 48.9  2.5   7599:57 python
  290 root      20   0  140692 105996   6488 S 48.6 23.9   2091:12 weewxd

The 'python' is a UDP listener program I wrote that basically just idles listening for broadcasts from my WeatherFlow station and then sends the output to MQTT on a different pi.  

The next line, for weewxd, is of course weewx using the WeatherFlow UDP driver, listening for the same messages, saving to sqlite, and using the vanilla Standard skin. The resident set size on mine 'does' creep upward slowly, but it's waaaay away from being any kind of issue, and my experience is it'll level out eventually. I've never seen what you've experienced, so you must have unique or unusual software somehow.

I'd suggest you run top periodically from cron, saving to a file.  Once per day should be enough since your system goes whacko so quickly.   A crontab along the lines of the following should get it done (untested):

10 * * * *  top -b -n 1 > /root/top.`date +\%Y\%m\%d.\%H\%M\%S`

That'll create files /root/top.20181118.121002 and the like that sort nicely (corresponds to Nov-18-2018 12:10:02)
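Once a few snapshots have accumulated, a short shell helper can pull just the weewxd line out of each file, so the growth in the RES (resident memory) column is easy to see over time. This is a sketch, not from the thread: the /root/top.* filenames assume the crontab entry above, and RES is the 6th column of a 'top -b' process line:

```shell
#!/bin/sh
# Extract the RES (resident memory) column from a 'top -b' process line.
res_column() {
    echo "$1" | awk '{print $6}'
}

# Print one "file: RES" line per saved snapshot so the trend is obvious.
# (Assumes the /root/top.YYYYMMDD.HHMMSS files created by the cron entry above.)
for f in /root/top.*; do
    [ -f "$f" ] || continue
    line=$(grep 'weewxd$' "$f" | head -n 1)
    [ -n "$line" ] && echo "$f: $(res_column "$line")"
done
```

A steadily climbing RES number across snapshots confirms the leak is in the weewxd process itself rather than elsewhere on the system.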
 
 

gjr80

unread,
Nov 18, 2018, 11:07:48 PM11/18/18
to weewx-user
Steve,

You can easily take the realtime gauge data extension out of the equation, if you wish, by disabling the service. In weewx.conf, under [Engine] [[Services]], just delete , user.rtgd.RealtimeGaugeData. Of course you will need to restart weeWX; afterwards you can re-enable it by adding the entry back in and restarting weeWX again. This will not affect any of your config (steelseries or otherwise); it just means gauge-data.txt will not be produced.
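For illustration, the stanza in weewx.conf looks roughly like this (the exact service list varies between installs; the lists shown here are assumed examples, not copied from Steve's config), with removing the rtgd entry being the only change:

```ini
[Engine]
    [[Services]]
        # before (example list, rtgd enabled):
        #report_services = weewx.engine.StdPrint, weewx.engine.StdReport, user.rtgd.RealtimeGaugeData
        # after (rtgd disabled):
        report_services = weewx.engine.StdPrint, weewx.engine.StdReport
```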

Gary

Steve2Q

unread,
Nov 18, 2018, 11:52:55 PM11/18/18
to weewx-user
Vince...thanks. I added that cron job and will watch and wait.

Steve2Q

unread,
Nov 18, 2018, 11:56:20 PM11/18/18
to weewx-user
Gary..I am going to keep the Steel Gauges running while I try Vince's suggestion. Once the Pi goes off the rails again, hopefully top will yield some useful information.
Steve

Steve2Q

unread,
Nov 19, 2018, 6:59:01 PM11/19/18
to weewx-user
Vince..having lots of problems creating a cron. I have done it before, so that's my issue. I decided to just run the script manually and it runs, but I have to log on as root in order to run it. Anyway, not really a Weewx issue, so I will research it further. I will run the command daily until I get another system crash.

Steve

vince

unread,
Nov 19, 2018, 7:30:36 PM11/19/18
to weewx-user
On Monday, November 19, 2018 at 3:59:01 PM UTC-8, Steve2Q wrote:
Vince..having lots of problems creating a cron. I have done it before, so that's my issue. I decided to just run the script manually and it runs, but I have to log on as root in order to run it. Anyway, not really a Weewx issue, so I will research it further. I will run the command daily until I get another system crash.



Ummm..... "sudo crontab -e" will put you into an editor that edits root's crontab.

I think you might want to get some free Linux training - see the Linux Foundation self-paced course at edx.org and work through it.  It'll lower your blood pressure. 

Steve2Q

unread,
Nov 19, 2018, 8:57:06 PM11/19/18
to weewx-user
Vince; sudo crontab -e is exactly what I have done. Following is the snip (email redacted)  from the nano editor before I hit "control O" to save (it says it will be saved in /etc/crontab.xxxxxx/crontab) and then rebooted.



I assume that the 35 20 * * * will run the top command at 35 minutes after 8 pm local time. I would also assume that it is not UTC as the Pi is on local time. If these assumptions are correct the file should have been written to /etc at 8:35:00PM, but it was not.

The problem I am having (and I have looked in a LOT of places including edx) is that none have answered this question: once I save the crontab, why does a "new" one come up rather than the edited version when I run sudo crontab -e again? If I go (as root)  to /etc/crontab.xxxxxx/crontab and open crontab with nano the lines I incorporated are not there. Since this is happening I can only figure that my edits are not sticking. So, trust me when I say that I have been going at this for quite a while before I thought of bringing it here.

Steve

tomn...@frontier.com

unread,
Nov 20, 2018, 9:27:42 AM11/20/18
to weewx-user
Two things.  You don't have to run top as root, and if root's crontab you are editing is /etc/crontab, then the entry has to include the userID after the time spec,
whereas a regular user's crontab entries do not.
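In other words (an illustrative sketch, not copied from Steve's system), the same job looks like this in the two kinds of crontab; note also that % is special in crontab entries and must be escaped as \%:

```crontab
# /etc/crontab (system crontab): a user field is required after the time spec
35 20 * * *  root  top -b -n 1 > /root/top.`date +\%Y\%m\%d.\%H\%M\%S`

# 'crontab -e' (per-user crontab): no user field
35 20 * * *  top -b -n 1 > /home/pi/top.`date +\%Y\%m\%d.\%H\%M\%S`
```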
Chris


vince

unread,
Nov 20, 2018, 12:36:14 PM11/20/18
to weewx-user
On Monday, November 19, 2018 at 5:57:06 PM UTC-8, Steve2Q wrote:
...: once I save the crontab, why does a "new" one come up rather than the edited version when I run sudo crontab -e again? If I go (as root)  to /etc/crontab.xxxxxx/crontab and open crontab with nano the lines I incorporated are not there. Since this is happening I can only figure that my edits are not sticking.

Unfortunately I can't help you learn how to edit files remotely....you have to know how to edit a file and have the edits save.

I'd agree with the other followup - you don't need to run top as root so you could get the same stuff via user 'pi' or whatever user you typically use.   So it would be the same stanza, just saving to /home/pi (if you're user 'pi') or the like.

But I'd sure expect 'sudo crontab -e' would work.  Maybe 'sudo crontab -e -u root' if you want to be explicit.

Steve2Q

unread,
Nov 21, 2018, 4:30:26 PM11/21/18
to weewx-user
Vince and Liz..thanks for your help. I finally bit the bullet and called Amazon (the fount of all knowledge). A very helpful employee got me going and even showed me a raft of other useful commands.

Steve


Steve2Q

unread,
Dec 13, 2018, 8:42:59 PM12/13/18
to weewx-user
I had the Pi run for almost 22 days this time before it stopped. As previously noted, I was finally able to create a cron to run top once daily. I am including the first one, another from about midway through, and the last two. The only thing of note that I see is the sudden big jump in memory usage by weewxd according to top. Besides weewx itself, the only other thing running on the Pi is the cron.
top.21-11-2018-22-35
top.28-11-2018-22-35
top.11-12-2018-22-35
top.12-12-2018-22-35

vince

unread,
Dec 13, 2018, 9:01:16 PM12/13/18
to weewx-user
On Thursday, December 13, 2018 at 5:42:59 PM UTC-8, Steve2Q wrote:
I had the pi run for almost 22 days this time before it stopped. As previously noted, I was finally able to create a cron to run Top once daily. I am including the first one, another about midway through, and the last two. The only thing of note that I see is the sudden big jump in memory usage by weewxd according to top. Besides Weewx itself, the only other thing running on the pi is the cron.


The data sure shows weewxd growing in resident memory, but I cannot explain how you're the only one of the hundreds (at least) of people running weewx on a raspi that this is happening to, over and over again. Other folks run vanilla weewx on smaller systems like the original pi-zero (including me) for months and months with no issues.

I can only assume (wild guess) that you have 'something' other than bare-minimum weewx installed. Some skin or extension or something, or alternatively you're running a non-standard kernel or library or something. Really, really odd.

Again, the only way to baseline the system is install weewx 'only' with no customizations on top of 'unmodified' Raspbian.   If that's what your setup was, then I have no idea what your next steps would be.



gjr80

unread,
Dec 13, 2018, 9:34:58 PM12/13/18
to weewx-user
I think you might find there have been a few over the years; I certainly had one a couple of years back, and you could set your watch by it, it was so predictable. It eventually disappeared for reasons unknown to me. I suspect Neil's plot here was also symptomatic of a leak. I'm not sure that Steve is presently running a pure vanilla OS/weeWX install, but that is about the only way to track it down: go right back to basics, monitor usage, and once you are happy there is no leak, add one thing at a time and monitor again until you are certain there is still no leak. Repeat. And I would add things in order of complexity, simplest first, more complex later.

Gary

Steve2Q

unread,
Dec 13, 2018, 9:50:26 PM12/13/18
to weewx-user
Vince and Gary..I forgot that I do have one other program running...Real Time Gauges. Maybe that is the problem. Would just disabling RTG under [Engine] [[Services]] be sufficient to give me "plain vanilla", or should I do a total reinstall?

gjr80

unread,
Dec 13, 2018, 9:59:33 PM12/13/18
to weewx-user
Disabling the service and restarting weeWX will be fine. Then on the other hand if you are really paranoid....

Gary

vince

unread,
Dec 14, 2018, 5:04:44 PM12/14/18
to weewx-user
On Thursday, December 13, 2018 at 6:50:26 PM UTC-8, Steve2Q wrote:
Vince and Gary..I forgot that I do have one other program running...Real Time Gauges. Maybe that is the problem. Would just disabling RTG under [Engine] [[Services]] be sufficient to have "plain vanilla", or should I do a total reinstall?

I continue to blame the gauges.

Again, if you aren't willing to go pure-unaltered-nothing-extra weewx to stabilize your system, I can't be willing to spend more time trying to debug this one.  
 

Steve2Q

unread,
Dec 14, 2018, 5:24:18 PM12/14/18
to weewx-user
Vince..I disabled the gauges (under [Engine] [[Services]]) and will look at the output from top as it runs this way. If it doesn't break down after at least a month, I guess I will just forget about the Steel Gauges (but I do like their appearance). I wonder if it is because I have such a short update time (2 seconds), which equals my LOOP interval. Anyway, it is not running now and time will tell.

Steve

steeple ian

unread,
Dec 14, 2018, 6:22:10 PM12/14/18
to weewx...@googlegroups.com
Steve,

Try installing Webmin. It is an open source Control Panel style application. It makes the creation of crons an absolute doddle.

Ian

On Mon, Nov 19, 2018 at 11:59 PM Steve2Q <ste...@gmail.com> wrote:
Vince..having lots of problems creating a cron. I have done it before, so that's my issue. I decided to just run the script manually and it runs, but I have to log on as root in order to run it. Anyway, not really a Weewx issue, so I will research it further. I will run the command daily until I get another system crash.

Steve


Steve2Q

unread,
Jan 27, 2019, 10:39:22 AM1/27/19
to weewx-user
Ok..here is what has been happening so far. I am running 3.8.2, which is "plain vanilla" with the exception of a cron which zips weewx.sdb once/day, uploads the zip file to my website for storage, and sends me an email that the cron finished successfully. I thought the memory problem was gone (probably associated with my RealTimeGauges). I have not had a crash, but attached is a graph of what the memory used by weewxd looks like over the past 22 days. It looks like it is going to crash in the next day or so. Any more ideas?
graph.jpg

gjr80

unread,
Jan 27, 2019, 5:53:25 PM1/27/19
to weewx-user
Steve, so just to be absolutely 100% clear, rtgd has not been running at any time while the attached graph was compiled? If that is the case there must be something fundamental causing the leak; I don't see a daily cron such as you describe causing this. I have not seen a plain vanilla WeeWX install do this. I have no ideas that are anything other than clutching at straws.

Gary

Steve2Q

unread,
Jan 28, 2019, 5:28:04 PM1/28/19
to weewx-user
Gary..Yes, RTG is not enabled. As I write this, top shows weewxd using 92.5% of memory. PuTTY is now very slow when trying to access the Pi, so I think it is very close to going down. The current uptime is 21D 23H 31M. I am going to let it run till it goes down.


vince

unread,
Jan 28, 2019, 9:27:13 PM1/28/19
to weewx-user
On Monday, January 28, 2019 at 2:28:04 PM UTC-8, Steve2Q wrote:
Gary..Yes, RTG is not enabled. At the time I am writing this, top yields weewxd using 92.5% of memory. Putty is now very slow when trying to access the pi, so I think it is very close to going down. The current up time is 21D 23H 31M. I am going to let it run till it goes down.



I continue to be at a loss. Dozens and dozens of people are running on a Pi without experiencing this. There has to be something you've installed that is leaking. The usual suspect is imaging libraries.

I'll install a clean pi3 tonight and let it run with the simulator without adding anything, using the current Raspbian Lite.

Did you use setup.py or apt-get as your installation method?

Steve2Q

unread,
Jan 28, 2019, 9:42:30 PM1/28/19
to weewx-user
Vince; I used setup.py for installation. Additional info: running Debian 9.6 (Stretch) on a Pi 3 B+. I do not have the "Lite" version, as I was using the Pi for other things. Do you think not having Stretch Lite could be part of the problem?

Steve


vince

unread,
Jan 28, 2019, 10:03:09 PM1/28/19
to weewx-user
On Monday, January 28, 2019 at 6:42:30 PM UTC-8, Steve2Q wrote:
Vince; I used setup.py for installation. Additional info:  Running Debian 9.6 (Stretch) on a pi3 B+  . I do not have the "lite" version as I was using the pi for other things. Do you think not having Stretch Lite could be part of the problem?



Are you running Debian or Raspbian ?
If you're running Debian, I can't speculate what the heck is in there.

If you're running Raspbian, then there's nothing in it that should prevent things from working that I'm aware of. 

Steve2Q

unread,
Jan 28, 2019, 10:30:54 PM1/28/19
to weewx-user
Vince, I am running Raspbian, the "image with desktop based on Debian Stretch" (this is from the download section of raspberrypi.org). I wonder if it is possible that the image I used has some elements for running the desktop that may be causing the problem. I am just speculating, and it would be interesting to know if those who do not have this problem are either using an "old" Raspbian version or the newer Stretch Lite.

Here are two reports from the command prompt:



Andrew Milner

unread,
Jan 28, 2019, 10:55:02 PM1/28/19
to weewx-user
Looks the same as mine!!
pi@RPi3:~/perl $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
pi@RPi3:~/perl $ cat /etc/debian_version
9.6

rich T

unread,
Jan 28, 2019, 11:48:02 PM1/28/19
to weewx-user
I'm running the same as you and not having any memory issues.

pi@stormRPI3:~ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
pi@stormRPI3:~ $ cat /etc/debian_version
9.6

Steve2Q

unread,
Jan 29, 2019, 8:05:08 AM1/29/19
to weewx-user
Andrew and Rich...are you running the version of Stretch with the desktop, or just the Lite version?

My system did crash last evening as I thought it would.

Steve

Steve2Q

unread,
Jan 29, 2019, 8:20:04 AM1/29/19
to weewx-user
Here is the syslog from shortly before the crash. Are there any other files that may be useful for analysis?

Jan 28 23:44:15 raspberrypi weewx[9770]: manager: Added record 2019-01-28 23:44:00 EST (1548737040) to database 'weewx.sdb'
Jan 28 23:44:16 raspberrypi weewx[9770]: manager: Added record 2019-01-28 23:44:00 EST (1548737040) to daily summary in 'weewx.sdb'
Jan 28 23:44:18 raspberrypi weewx[9770]: restx: PWSWeather: Published record 2019-01-28 23:44:00 EST (1548737040)
Jan 28 23:44:26 raspberrypi weewx[9770]: cheetahgenerator: Generated 14 files for report StandardReport in 8.67 seconds
Jan 28 23:44:36 raspberrypi weewx[9770]: imagegenerator: Generated 13 images for StandardReport in 9.49 seconds
Jan 28 23:44:36 raspberrypi weewx[9770]: copygenerator: copied 0 files to /home/weewx/public_html
Jan 28 23:44:47 raspberrypi weewx[9770]: ftpgenerator: ftp'd 27 files in 11.38 seconds
Jan 28 23:46:15 raspberrypi weewx[9770]: manager: Added record 2019-01-28 23:46:00 EST (1548737160) to database 'weewx.sdb'
Jan 28 23:46:16 raspberrypi weewx[9770]: manager: Added record 2019-01-28 23:46:00 EST (1548737160) to daily summary in 'weewx.sdb'
Jan 28 23:46:39 raspberrypi systemd[1]: apt-daily.service: Failed to fork: Cannot allocate memory
Jan 28 23:46:39 raspberrypi systemd[1]: apt-daily.service: Failed to run 'start' task: Cannot allocate memory
Jan 28 23:46:39 raspberrypi systemd[1]: Failed to start Daily apt download activities.
Jan 28 23:46:39 raspberrypi systemd[1]: apt-daily.timer: Adding 1h 13min 18.732028s random time.
Jan 28 23:46:39 raspberrypi systemd[1]: apt-daily.service: Unit entered failed state.
Jan 28 23:46:39 raspberrypi systemd[1]: apt-daily.timer: Adding 9h 23min 52.201118s random time.
Jan 28 23:46:39 raspberrypi systemd[1]: apt-daily.service: Failed with result 'resources'.
Jan 28 23:48:31 raspberrypi weewx[9770]: engine: Garbage collected 248392 objects
Jan 28 23:48:32 raspberrypi kernel: [3815265.633466] top invoked oom-killer: gfp_mask=0x14040d0(GFP_KERNEL|__GFP_COMP|__GFP_RECLAIMABLE), nodemask=(null),  order=0, oom_score_adj=0
Jan 28 23:48:32 raspberrypi kernel: [3815265.633480] top cpuset=/ mems_allowed=0
Jan 28 23:48:32 raspberrypi kernel: [3815265.633495] CPU: 3 PID: 17293 Comm: top Tainted: G         C      4.14.87-v7+ #1178
Jan 28 23:48:32 raspberrypi kernel: [3815265.633497] Hardware name: BCM2835
Jan 28 23:48:32 raspberrypi kernel: [3815265.633521] [<8010ff30>] (unwind_backtrace) from [<8010c174>] (show_stack+0x20/0x24)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633531] [<8010c174>] (show_stack) from [<8078b424>] (dump_stack+0xd4/0x118)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633543] [<8078b424>] (dump_stack) from [<80224bac>] (dump_header+0xac/0x208)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633553] [<80224bac>] (dump_header) from [<80223f14>] (oom_kill_process+0x478/0x584)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633560] [<80223f14>] (oom_kill_process) from [<80224874>] (out_of_memory+0x124/0x334)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633570] [<80224874>] (out_of_memory) from [<8022a3b8>] (__alloc_pages_nodemask+0x10cc/0x11c0)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633579] [<8022a3b8>] (__alloc_pages_nodemask) from [<80275a60>] (new_slab+0x454/0x558)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633587] [<80275a60>] (new_slab) from [<802778a4>] (___slab_alloc.constprop.11+0x228/0x2c0)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633595] [<802778a4>] (___slab_alloc.constprop.11) from [<80277980>] (__slab_alloc.constprop.10+0x44/0x90)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633602] [<80277980>] (__slab_alloc.constprop.10) from [<80278118>] (kmem_cache_alloc+0x1f4/0x230)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633611] [<80278118>] (kmem_cache_alloc) from [<802f764c>] (proc_alloc_inode+0x2c/0x5c)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633622] [<802f764c>] (proc_alloc_inode) from [<802a7ca8>] (alloc_inode+0x2c/0xb4)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633631] [<802a7ca8>] (alloc_inode) from [<802aa01c>] (new_inode_pseudo+0x18/0x5c)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633639] [<802aa01c>] (new_inode_pseudo) from [<802aa07c>] (new_inode+0x1c/0x30)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633646] [<802aa07c>] (new_inode) from [<802fb850>] (proc_pid_make_inode+0x24/0xc0)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633654] [<802fb850>] (proc_pid_make_inode) from [<802fbdc8>] (proc_pident_instantiate+0x2c/0xb0)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633660] [<802fbdc8>] (proc_pident_instantiate) from [<802fbee8>] (proc_pident_lookup+0x9c/0xf0)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633667] [<802fbee8>] (proc_pident_lookup) from [<802fbf84>] (proc_tgid_base_lookup+0x20/0x28)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633675] [<802fbf84>] (proc_tgid_base_lookup) from [<8029ad60>] (path_openat+0xe0c/0x10c0)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633684] [<8029ad60>] (path_openat) from [<8029c2c4>] (do_filp_open+0x70/0xd4)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633693] [<8029c2c4>] (do_filp_open) from [<80288f80>] (do_sys_open+0x120/0x1d0)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633701] [<80288f80>] (do_sys_open) from [<8028905c>] (SyS_open+0x2c/0x30)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633709] [<8028905c>] (SyS_open) from [<80108000>] (ret_fast_syscall+0x0/0x28)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633713] Mem-Info:
Jan 28 23:48:32 raspberrypi kernel: [3815265.633724] active_anon:111538 inactive_anon:112531 isolated_anon:0
Jan 28 23:48:32 raspberrypi kernel: [3815265.633724]  active_file:502 inactive_file:552 isolated_file:32
Jan 28 23:48:32 raspberrypi kernel: [3815265.633724]  unevictable:0 dirty:3 writeback:13 unstable:0
Jan 28 23:48:32 raspberrypi kernel: [3815265.633724]  slab_reclaimable:1872 slab_unreclaimable:2785
Jan 28 23:48:32 raspberrypi kernel: [3815265.633724]  mapped:962 shmem:1453 pagetables:906 bounce:0
Jan 28 23:48:32 raspberrypi kernel: [3815265.633724]  free:4134 free_pcp:28 free_cma:435
Jan 28 23:48:32 raspberrypi kernel: [3815265.633731] Node 0 active_anon:446152kB inactive_anon:450124kB active_file:2008kB inactive_file:2208kB unevictable:0kB isolated(anon):0kB isolated(file):128kB mapped:3848kB dirty:12kB writeback:52kB shmem:5812kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
Jan 28 23:48:32 raspberrypi kernel: [3815265.633742] Normal free:16536kB min:16384kB low:20480kB high:24576kB active_anon:446152kB inactive_anon:449608kB active_file:2452kB inactive_file:2316kB unevictable:0kB writepending:0kB present:970752kB managed:949448kB mlocked:0kB kernel_stack:1144kB pagetables:3624kB bounce:0kB free_pcp:72kB local_pcp:0kB free_cma:1740kB
Jan 28 23:48:32 raspberrypi kernel: [3815265.633744] lowmem_reserve[]: 0 0
Jan 28 23:48:32 raspberrypi kernel: [3815265.633751] Normal: 123*4kB (UMEHC) 104*8kB (UEHC) 105*16kB (UEHC) 76*32kB (UEHC) 39*64kB (UEHC) 20*128kB (UEH) 16*256kB (UEHC) 2*512kB (H) 1*1024kB (C) 0*2048kB 0*4096kB = 16636kB
Jan 28 23:48:32 raspberrypi kernel: [3815265.633790] 2600 total pagecache pages
Jan 28 23:48:32 raspberrypi kernel: [3815265.633794] 18 pages in swap cache
Jan 28 23:48:32 raspberrypi kernel: [3815265.633797] Swap cache stats: add 967890, delete 967873, find 539252/1220763
Jan 28 23:48:32 raspberrypi kernel: [3815265.633799] Free swap  = 0kB
Jan 28 23:48:32 raspberrypi kernel: [3815265.633801] Total swap = 102396kB
Jan 28 23:48:32 raspberrypi kernel: [3815265.633803] 242688 pages RAM
Jan 28 23:48:32 raspberrypi kernel: [3815265.633806] 0 pages HighMem/MovableOnly
Jan 28 23:48:32 raspberrypi kernel: [3815265.633808] 5326 pages reserved
Jan 28 23:48:32 raspberrypi kernel: [3815265.633810] 2048 pages cma reserved
Jan 28 23:48:32 raspberrypi kernel: [3815265.633812] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Jan 28 23:48:32 raspberrypi kernel: [3815265.633832] [   92]     0    92     8780       34      15       0       75             0 systemd-journal
Jan 28 23:48:32 raspberrypi kernel: [3815265.633839] [  124]     0   124     3638       13       8       0      157         -1000 systemd-udevd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633846] [  246]   100   246     4320        8       9       0      102             0 systemd-timesyn
Jan 28 23:48:32 raspberrypi kernel: [3815265.633852] [  294]     0   294     5969       12      10       0      245             0 rsyslogd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633857] [  298]   108   298     1601       28       7       0       63             0 avahi-daemon
Jan 28 23:48:32 raspberrypi kernel: [3815265.633863] [  299] 65534   299     1324        4       6       0       58             0 thd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633868] [  304]   105   304     1629       40       8       0       77          -900 dbus-daemon
Jan 28 23:48:32 raspberrypi kernel: [3815265.633874] [  326]   108   326     1601        0       6       0       77             0 avahi-daemon
Jan 28 23:48:32 raspberrypi kernel: [3815265.633879] [  336]     0   336     1845       10       7       0      101             0 systemd-logind
Jan 28 23:48:32 raspberrypi kernel: [3815265.633885] [  339]     0   339     1325       13       6       0       43             0 cron
Jan 28 23:48:32 raspberrypi kernel: [3815265.633890] [  424]     0   424     2533       16       9       0      119             0 wpa_supplicant
Jan 28 23:48:32 raspberrypi kernel: [3815265.633897] [  461]     0   461      524        0       4       0       31             0 hciattach
Jan 28 23:48:32 raspberrypi kernel: [3815265.633902] [  465]     0   465     1818        0       8       0       84             0 bluetoothd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633907] [  466]     0   466     8759        0      13       0      108             0 bluealsa
Jan 28 23:48:32 raspberrypi kernel: [3815265.633913] [  530]     0   530      737        8       6       0       94             0 dhcpcd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633918] [  546]     0   546     1470        2       8       0      115             0 login
Jan 28 23:48:32 raspberrypi kernel: [3815265.633923] [  571]     0   571     2552       11       8       0      127         -1000 sshd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633929] [  806]  1000   806     2414        2       9       0      178             0 systemd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633935] [  810]  1000   810     2815        0       9       0      315             0 (sd-pam)
Jan 28 23:48:32 raspberrypi kernel: [3815265.633940] [  817]  1000   817     1526        2       6       0      316             0 bash
Jan 28 23:48:32 raspberrypi kernel: [3815265.633945] [  841]     0   841     1808        2       8       0       93             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.633951] [  845]     0   845      877        0       5       0       22             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.633957] [  857]     0   857     1808        2       7       0       93             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.633962] [  861]     0   861      877        0       5       0       22             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.633968] [ 2317]     0  2317    11818       48      18       0      308             0 packagekitd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633974] [ 2321]     0  2321    10046       40      14       0      160             0 polkitd
Jan 28 23:48:32 raspberrypi kernel: [3815265.633979] [ 2424]     0  2424     1808        2       7       0       93             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.633985] [ 2428]     0  2428      877        0       5       0       22             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.633990] [ 2491]     0  2491     1808        2       8       0       93             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.633995] [ 2495]     0  2495      877        0       5       0       22             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634001] [ 2570]     0  2570     1808        2       7       0       93             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634007] [ 2574]     0  2574      877        0       7       0       22             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634012] [ 5138]     0  5138     1808        2       8       0       92             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634018] [ 5142]     0  5142      877        0       6       0       21             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634023] [ 7788]     0  7788     1808        2       7       0       92             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634029] [ 7792]     0  7792      877        0       6       0       21             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634035] [ 7924]     0  7924     1808        2       7       0       91             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634040] [ 7928]     0  7928      877        0       6       0       21             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634045] [ 9770]     0  9770   243016   221290     473       0     9449             0 weewxd
Jan 28 23:48:32 raspberrypi kernel: [3815265.634051] [ 9793]     0  9793     1808        2       7       0       91             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634056] [ 9797]     0  9797      877        0       5       0       21             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634061] [ 6926]  1000  6926     2028        1       8       0      126             0 top
Jan 28 23:48:32 raspberrypi kernel: [3815265.634067] [12607]     0 12607     1808        0       7       0       92             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634072] [12611]     0 12611      877        0       6       0       21             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634077] [12661]     0 12661     1808        2       8       0       92             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634083] [12665]     0 12665      877        0       6       0       21             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634089] [12895]     0 12895     1808        2       8       0       92             0 sudo
Jan 28 23:48:32 raspberrypi kernel: [3815265.634094] [12899]     0 12899      877        0       6       0       21             0 tail
Jan 28 23:48:32 raspberrypi kernel: [3815265.634099] [17160]     0 17160     2882      184      10       0        2             0 sshd
Jan 28 23:48:32 raspberrypi kernel: [3815265.634105] [17171]  1000 17171     2915      195      10       0        3             0 sshd
Jan 28 23:48:32 raspberrypi kernel: [3815265.634111] [17174]  1000 17174     1526      281       7       0       38             0 bash
Jan 28 23:48:32 raspberrypi kernel: [3815265.634116] [17194]  1000 17194     2028      128       9       0        1             0 top
Jan 28 23:48:32 raspberrypi kernel: [3815265.634122] [17211]  1000 17211     2028        0       8       0      129             0 top
Jan 28 23:48:32 raspberrypi kernel: [3815265.634127] [17290]  1000 17290     2028      113      10       0        1             0 top
Jan 28 23:48:32 raspberrypi kernel: [3815265.634132] [17293]  1000 17293     2028      391       8       0        1             0 top
Jan 28 23:48:32 raspberrypi kernel: [3815265.634140] Out of memory: Kill process 9770 (weewxd) score 852 or sacrifice child
Jan 28 23:48:32 raspberrypi kernel: [3815265.634171] Killed process 9770 (weewxd) total-vm:972064kB, anon-rss:885160kB, file-rss:0kB, shmem-rss:0kB
Jan 28 23:48:32 raspberrypi kernel: [3815265.972280] oom_reaper: reaped process 9770 (weewxd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Jan 29 00:17:01 raspberrypi CRON[17411]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 29 01:17:01 raspberrypi CRON[17456]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 29 02:17:01 raspberrypi CRON[17500]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 29 03:17:01 raspberrypi CRON[17526]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 29 04:17:01 raspberrypi CRON[17555]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 29 05:17:01 raspberrypi CRON[17581]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 29 06:13:38 raspberrypi systemd[1]: Starting Daily apt upgrade and clean activities...
Jan 29 06:13:41 raspberrypi systemd[1]: Started Daily apt upgrade and clean activities.
Jan 29 06:13:41 raspberrypi systemd[1]: apt-daily-upgrade.timer: Adding 24min 23.836691s random time.
Jan 29 06:13:41 raspberrypi systemd[1]: apt-daily-upgrade.timer: Adding 4min 9.519638s random time.
Jan 29 06:17:02 raspberrypi CRON[17652]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 29 06:25:01 raspberrypi CRON[17667]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Jan 29 06:25:02 raspberrypi liblogging-stdlog:  [origin software="rsyslogd" swVersion="8.24.0" x-pid="294" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
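The OOM killer report above shows weewxd's resident set climbing to roughly 885 MB before the kernel reaped it. One lightweight way to chart that growth between crashes is to sample VmRSS from /proc at intervals. A stdlib-only sketch (Linux only; the 300-second default and the pid handling are assumptions, not part of WeeWX):

```python
# Sample a process's resident memory (VmRSS) over time on Linux.
# A stdlib-only sketch; point it at weewxd's pid and redirect to a file.
import os
import re
import time

def read_vmrss_kb(status_text):
    """Extract VmRSS (resident set size, kB) from /proc/<pid>/status text."""
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def watch(pid, interval=300, samples=None):
    """Print a timestamped VmRSS line every `interval` seconds."""
    taken = 0
    while samples is None or taken < samples:
        with open("/proc/%d/status" % pid) as f:
            rss = read_vmrss_kb(f.read())
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        print("%s pid=%d VmRSS=%s kB" % (stamp, pid, rss))
        taken += 1
        if samples is None or taken < samples:
            time.sleep(interval)

if __name__ == "__main__":
    watch(os.getpid(), interval=0, samples=1)  # demo on our own pid
```

A steadily rising VmRSS column in the resulting log gives the same signal as watching %MEM in top, but in a form that is easy to graph later.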

Thomas Keffer

unread,
Jan 29, 2019, 9:27:49 AM1/29/19
to weewx-user
If I understand the situation correctly, you 
  • Are running plain-vanilla wheezy on a stock RPi;
  • Installed weewx using setup.py;
  • Are using an Ultimeter station;
  • Have absolutely no extensions installed (in particular, rtg has been disabled);
yet are experiencing memory growth.

Please correct me if any of these assumptions are wrong. If they are correct, I'll make up an instrumented version of weewxd that will show which objects are growing in size and number.

-tk



Steve2Q

unread,
Jan 29, 2019, 9:40:09 AM1/29/19
to weewx-user
Tom: I am running Stretch on an RPi 3+ (the most recent Raspbian from raspberrypi.org, the version that includes the desktop; however, I have the Pi set to boot into the CLI, not the GUI). I have run the necessary commands to update, upgrade, and update the firmware. I used setup.py with the Ultimeter station, and except for a cron job to back up the .sdb, there are no other extensions installed. If you need to look at any particular files, please let me know.

Thomas Keffer

unread,
Jan 29, 2019, 10:52:00 AM1/29/19
to weewx-user
I am not concerned about any crontab extensions. It's weewx.conf extensions that we care about. Can you please run

cd /home/weewx
./bin/wee_debug --info --output

then email me (tke...@gmail.com) the file /var/tmp/weewx.debug.




rich T

unread,
Jan 29, 2019, 2:08:43 PM1/29/19
to weewx-user
Steve

I'm running with desktop.

Thomas Keffer

unread,
Jan 29, 2019, 8:11:33 PM1/29/19
to weewx-user
OK, Steve, I think we're ready. This is going to take a little preparation on your part.

1. Install the tool pympler. This is a memory profiler.

pip install pympler


2. Replace your version of engine.py with the attached version. You should find it in /home/weewx/bin/weewx/engine.py.


3. Edit your version of weewx.conf and add the debug_memory line near the top, under the 'debug' option:
# This section is for general configuration information.

# Set to 1 for extra debug info, otherwise comment it out or set to zero
debug = 0

debug_memory = True
# Root directory of the weewx data file hierarchy for this station
WEEWX_ROOT = /home/weewx

4. Run weewxd normally. It will profile memory after every archive interval, and add the results to /var/tmp/weewx_memory_summary.

5.  Let it run overnight or, at least, long enough that you can see memory climbing. Post the file /var/tmp/weewx_memory_summary.

Let me know if you have any questions or problems.

-tk

engine.py
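For context, the kind of instrumentation the modified engine.py presumably adds can be sketched with pympler's SummaryTracker. This is a hypothetical sketch, not the actual patch; the function name and file handling are assumptions:

```python
# Hypothetical sketch of archive-interval memory profiling with pympler.
# SummaryTracker.print_diff() reports object counts/sizes changed since
# the previous call, which is what accumulates in the summary file.
import contextlib
import io
import time

try:
    from pympler import tracker
    _mem_tracker = tracker.SummaryTracker()
except ImportError:
    _mem_tracker = None  # pympler missing; see 'pip install pympler' above

def dump_memory_summary(path="/var/tmp/weewx_memory_summary"):
    """Append a diff of object counts and sizes since the last call."""
    if _mem_tracker is None:
        return None
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        _mem_tracker.print_diff()  # pympler writes its table to stdout
    with open(path, "a") as f:
        f.write(time.ctime() + "\n" + buf.getvalue() + "\n")
    return path
```

Calling something like this once per archive interval is what produces the growth-over-time picture discussed below.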

Steve2Q

unread,
Jan 29, 2019, 9:40:57 PM1/29/19
to weewx-user
Tom: I followed the instructions and received the following:

Steve2Q

unread,
Jan 29, 2019, 9:57:08 PM1/29/19
to weewx-user
Tom: the previous is what I got just by stopping weewx and then restarting after the mods (no reboot of the Pi).
If I reboot the pi, it hangs here:


I reverted back to the original setting so the station will keep running until I hear back from you.

Steve

Glenn McKechnie

unread,
Jan 29, 2019, 10:21:34 PM1/29/19
to weewx...@googlegroups.com
Hi Steve,

The error is "No module named pympler"

Looks like you've either missed step 1. of Toms instructions, or it's
failed to install.

Try installing it with sudo

sudo pip install pympler




--


Cheers
Glenn

rorpi - read only raspberry pi & various weewx addons
https://github.com/glennmckechnie

Steve2Q

unread,
Jan 29, 2019, 10:42:37 PM1/29/19
to weewx-user
Glenn: it did say that it installed, but I will try your suggestion tomorrow.
Thanks

Steve2Q

unread,
Jan 30, 2019, 6:30:03 AM1/30/19
to weewx-user
Tom and Glenn: thanks, it is running now. I will let it run for a full 24 hours and post the results tomorrow. At this moment (weewx running for 5 minutes), top shows 4.6% memory usage by weewxd.

Steve2Q

unread,
Jan 31, 2019, 9:28:18 AM1/31/19
to weewx-user
Tom: attached is the summary file. Weewx has been running for 24 hours, and weewxd is using 9.6% of memory.

Steve
weewx_memory_summary

Thomas Keffer

unread,
Jan 31, 2019, 11:00:11 AM1/31/19
to weewx-user
It appears that "weakref" references are steadily climbing over time. Weak references are used to aid garbage collection in Python. They are not used in WeeWX, so they are probably being used by an underlying library. My candidate is the driver for your Ultimeter.

Could you please do two things?

1. First, run the command lsusb, then cut and paste the results.

2. Then unplug, then plug back in your Ultimeter, then run the command 'dmesg'. Cut and paste the last 20 lines or so that it prints out. It will look something like this:

[93723.169773] usb 2-2: new full-speed USB device number 29 using xhci_hcd
[93723.318669] usb 2-2: New USB device found, idVendor=05ad, idProduct=0fba
[93723.318676] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[93723.318680] usb 2-2: Product: USB-Serial Controller
[93723.318684] usb 2-2: Manufacturer: Prolific Technology Inc.
[93723.319500] pl2303 2-2:1.0: pl2303 converter detected
[93723.322547] usb 2-2: pl2303 converter now attached to ttyUSB0

This will tell us what modules are used by your Ultimeter.

-tk
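As a quick sanity check, the weakref growth can also be watched from a Python prompt with just the stdlib. A hedged sketch (counts vary by interpreter and load; the ~100 delta below only reflects the refs this snippet creates):

```python
# Count live weakref objects seen by the garbage collector; a number that
# climbs across archive intervals would corroborate the pympler summary.
import gc
import weakref

def count_weakrefs():
    return sum(1 for obj in gc.get_objects() if isinstance(obj, weakref.ref))

class Target:
    pass

targets = [Target() for _ in range(100)]   # distinct referents
before = count_weakrefs()
refs = [weakref.ref(t) for t in targets]   # simulate 100 leaked weakrefs
after = count_weakrefs()
print(after - before)  # roughly 100
```

Run the counter in the suspect process (or a test harness around the driver) and see whether the total keeps rising after each loop.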

vince

unread,
Jan 31, 2019, 12:27:37 PM1/31/19
to weewx-user
The map shows 14 Ultimeter stations registered currently.
Might be interesting to see if any of those stations are showing similar behavior...

Thomas Keffer

unread,
Jan 31, 2019, 12:38:26 PM1/31/19
to weewx-user
Good idea. I've sent a note to a couple of them.

-tk


Steve2Q

unread,
Jan 31, 2019, 1:45:41 PM1/31/19
to weewx-user
Tom and Vince:

Should I leave Weewx running as is or should I comment out the debug_memory = True line, and switch back to the original engine.py?

I have cut and pasted the information you asked for, Tom. If it ends up being the driver, I could always use an old version I have (11rc3) and see if that is any better. I will say that I was never sure why the driver changed. The old one seemed to work fine, and the new one does not correct the console clock like the old one did (I had posted about this in the past).

Result of lsusb:


Result of dmesg:






Thomas Keffer

unread,
Jan 31, 2019, 4:09:31 PM1/31/19
to weewx-user
You can comment out the debug_memory option.

I'm suspecting your serial-to-usb cable. Do you have another one you can try?

-tk

Steve2Q

unread,
Jan 31, 2019, 4:39:58 PM1/31/19
to weewx-user
I substituted another cable. This is the result of lsusb for this one:


With the reboot, top shows 3.4%; I will watch it over the next 24 hours. Just for info: what makes you suspect the cable?

Steve



Thomas Keffer

unread,
Jan 31, 2019, 4:51:26 PM1/31/19
to weewx-user
Several things:
  • The climb in the number of weakrefs, which are not used by WeeWX, so must be used indirectly in a library or driver;
  • Of the libraries used by WeeWX, only the usb drivers use weakrefs (I checked);
  • User Kurt has an Ultimeter, but with a serial connection (instead of usb), and has no such problems;
  • Serial cables are notorious for hardware and software problems;
  • There isn't much left!
But, I could well be wrong! Hopefully, this stab-in-the-dark will work out.

-tk

Steve2Q

unread,
Jan 31, 2019, 7:36:34 PM1/31/19
to weewx-user
Tom: will the memory usage stay fairly flat if things are working properly? I seem to remember some comments that it can go up, but then levels off. Is there some average % of usage that is considered "normal"?

Steve

Thomas Keffer

unread,
Jan 31, 2019, 7:48:43 PM1/31/19
to weewx-user
It should go up, but stabilize within a couple hours. If you let it run overnight, you should have a pretty good indication by morning.

-tk


vince

unread,
Jan 31, 2019, 8:13:41 PM1/31/19
to weewx-user
Steve - give it 2-3 days since you've previously seen it grow quickly. What the heck...have a weekend maybe and see what it looks like Sunday or Monday. It should stay up that long regardless based on your past history, eh?


Steve2Q

unread,
Feb 1, 2019, 9:57:16 AM2/1/19
to weewx-user
Vince: good idea. I will let it keep running and chart the results as I did last time. Hopefully it will level off.

Thomas Keffer

unread,
Feb 1, 2019, 10:29:27 AM2/1/19
to weewx-user
Still, if you've got any early results...


vince

unread,
Feb 1, 2019, 11:02:24 AM2/1/19
to weewx-user
yeah - this one is pretty interesting.....

Steve2Q

unread,
Feb 1, 2019, 4:01:25 PM2/1/19
to weewx-user
Weewxd using 9.1% of memory as of now.


Thomas Keffer

unread,
Feb 1, 2019, 4:04:11 PM2/1/19
to weewx-user

... and climbing? Or, has it been stable?


Steve2Q

unread,
Feb 1, 2019, 4:13:40 PM2/1/19
to weewx-user
Tom, so far it is climbing. Started at 3.4%, this AM it was 7.7%, and now 9.1%.

Thomas Keffer

unread,
Feb 1, 2019, 5:25:10 PM2/1/19
to weewx-user
Rats. It should have leveled off by now.


Steve2Q

unread,
Feb 2, 2019, 4:18:04 PM2/2/19
to weewx-user
Now up to 15%  :(