Re: My Experience Installing Moonfire NVR on Gentoo Linux systems

Scott Lamb

unread,
May 26, 2020, 2:34:41 AM
to John, moonfire-nvr-users
On Sun, May 24, 2020 at 12:36 PM John <jlpo...@gmail.com> wrote:
I installed Moonfire NVR a few days ago on my Dell laptop running Gentoo Linux.  I had to emerge
  • dev-lang/rust-bin
  • dev-util/cargo-c - had to unmask for amd64:
        ares /usr/local/src/moonfire # cat >/etc/portage/package.accept_keywords/cargo
        dev-util/cargo-c ~amd64
        [Ctrl-D]
  • sys-apps/yarn - had to unmask sys-apps/yarn:
        ares /usr/local/src/moonfire # cat >/etc/portage/package.accept_keywords/yarn
        sys-apps/yarn ~amd64

Otherwise, the install instructions were straightforward, though I opted to leave the install under /usr/local/src/moonfire rather than integrate it into the standard locations.

Great. I've never used Gentoo, but I gather unmasking is a fairly routine thing to do. Is that right?
 
Today I installed Moonfire on a Xen VM running on an Intel Atom processor.

I'll be testing against two Reolink RLC-420 [4 megapixel] cameras and two RLC-420-5MPs.

Moonfire NVR interests me as I have had in mind a design for a video surveillance system wherein one component simply captures and preserves the streams while another off-camera component handles processing and storage management.

Yeah, that's similar to my own philosophy. It may require beefier or specialized hardware to do analytics NVR-side but it seems preferable to camera-side when affordable.
 
My opinion is one should use cameras for basic camera functions and defer intelligent analysis to the back end. I'm finding the Reolink cameras' detection systems relatively good. I've been running the RLC-420s for over a year now and find I have several hundred 20-40 second videos each day.  Using a codec pack, the K-Lite Codec Pack, I use Windows Explorer's file preview function to help isolate videos of interest.  What is nifty about the K-Lite Codec Pack is you can have it display a frame from a specific time point, e.g. 10 seconds in.  The Reolinks tend to have the alarm event at the 10-second mark of each capture (you cannot configure the pre-roll amount), so having a codec and DLL which cause the Nth frame or Nth second in time to appear is really helpful in identifying videos of interest.
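For anyone wanting to script the same frame-at-10-seconds trick outside of Windows, here's a minimal sketch that builds an ffmpeg command for it. This assumes ffmpeg is installed; the filenames are made up for illustration.

```python
# Sketch: build an ffmpeg command that extracts the frame at the
# 10-second mark of a clip, mirroring the K-Lite preview behavior.
import shlex

def thumbnail_command(video_path: str, out_path: str, seconds: float = 10.0) -> str:
    """Build an ffmpeg command that grabs a single frame at `seconds`."""
    args = [
        "ffmpeg",
        "-ss", str(seconds),   # seek before decoding (fast, keyframe-aligned)
        "-i", video_path,
        "-frames:v", "1",      # emit exactly one frame
        "-y",                  # overwrite any existing thumbnail
        out_path,
    ]
    return " ".join(shlex.quote(a) for a in args)

print(thumbnail_command("driveway-0001.mp4", "driveway-0001.jpg"))
```

Run over a directory of clips, this gives you a folder of thumbnails to skim instead of opening each video.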

I've been intending to enhance Moonfire NVR's UI to do something similar. My setup is recording on-camera motion detection events in the "signals" part of Moonfire NVR's database schema. I want a UI for listing each motion event, along with a link to a video covering that event +/- 30 seconds, and have it start playing 30 seconds in. So you start playing right on the motion event but can easily rewind a bit if desired for context.
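The window math for that UI is simple; here's a sketch (hypothetical function names, not Moonfire NVR's actual API — times are Unix timestamps in seconds):

```python
# Sketch of the event-window math described above: a clip covering the
# motion event +/- 30 seconds, with playback starting right at the event.

PAD_SECONDS = 30

def event_window(event_start: float, event_end: float):
    """Return (clip_start, clip_end, playback_offset) for a motion event."""
    clip_start = event_start - PAD_SECONDS
    clip_end = event_end + PAD_SECONDS
    playback_offset = event_start - clip_start  # always PAD_SECONDS here
    return clip_start, clip_end, playback_offset
```

So a listing UI only needs each event's timespan to generate both the clip link and the initial seek position.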
 
  I'm getting a lot of false positives from wind and sudden changes in light.  I'd like to explore detection, but the first step is to capture and preserve a database of samples, then learn about detection and iteratively test the detection algorithms.  I wanted to use AI, but estimated that doing so would cost hundreds, if not thousands, of dollars in electricity alone to develop a model, so creating an AI model at this time is simply out of my league.

How did you arrive at that figure? Is that for a model trained from scratch?

I still haven't trained a model but it's something I'm interested in doing. My understanding is that transfer learning (starting from, say, MobileNet SSD v2/v3) is supposed to be fairly cheap. I haven't actually quantified it though.

At any rate, I'd be very interested in collaborating on a database of samples.
 

I'd like to add to the install instructions, especially as to what values to place in the configuration, as it was not clear what the choices are nor the significance of the values.  I was also tripped up by the flow of configuring a disk, then a camera, then going back to the disk to set a limit.  I suppose a careful reading of the documentation might have given me a better understanding, but I really like to test software by just walking into it and seeing how well it can guide me through its setup etc. without my having to invest a lot of time reading documentation.  Yes, I'm lazy (and pledge allegiance to Perl).  Aren't you?

PRs welcome for the instructions. I'm also open to changing the UI itself to make it more friendly. Maybe it could support adding a sample file dir from the "Add camera" page's "sample file dir" pull-down to avoid the backtracking.
 
Looks like this is the first posting to this group, so I am taking the time to expound.  I think this is a great project. Scott Lamb certainly has the chops to carry it.

Thanks!
 

Some install times (Gentoo VM on Gentoo Xen on Intel Atom) of the steps from the Installation Manual:

ares /usr/local/src/moonfire/moonfire-nvr # uname -a
Linux ares 4.9.76-gentoo-r1 #1 SMP Fri Mar 2 21:47:45 PST 2018 x86_64 Intel(R) Atom(TM) CPU C2750 @ 2.40GHz GenuineIntel GNU/Linux
ares /usr/local/src/moonfire/moonfire-nvr #

  • yarn: 1 min 6 sec
  • yarn build: 1 min 5 sec
  • cargo test: 11 min 6 sec
  • cargo build --release: 28 min 32 sec
I'm thinking this system is like the following: you have a hose that continually outputs water.  This program captures it and then stores it in containers, e.g. 2-minute chunks, until the limit of the storage device is met.  Filenames are camera id + hex beginning at 0.  You can extract whatever segment of video you want from the existing inventory of segments.  The maintenance of the current rolling inventory is first-in, first-out, so if you have 2 GB of allocation for a camera and the allocation is nearing its maximum, the oldest segment is removed to make room for the newest.

That's about right. The segments (called "recordings" in the database schema) are a mostly-invisible implementation detail. You can fetch an arbitrary timespan from the database; it doesn't need to start at a recording boundary.

A nit that might affect your calculations below: recordings are typically very close to 1 minute long, not 2 minutes. The first one in a "run" will be longer (1–2 minutes); the last one (aka one ended by network error or shutdown or some such) will of course be shorter. Some details in design/time.md. (Search for "rotation".)
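That first-in, first-out retention can be sketched in a few lines. This is a toy illustration of the concept, not Moonfire NVR's actual deletion logic:

```python
# Toy sketch of FIFO retention: drop the oldest recordings until the
# total size fits the camera's storage allocation.
from collections import deque

def enforce_limit(recordings: deque, limit_bytes: int) -> list:
    """recordings: deque of (recording_id, size_bytes), oldest first.
    Returns the ids deleted to get back under limit_bytes."""
    deleted = []
    total = sum(size for _, size in recordings)
    while total > limit_bytes and recordings:
        rec_id, size = recordings.popleft()  # oldest goes first
        total -= size
        deleted.append(rec_id)
    return deleted
```

In practice the NVR runs this kind of check as new recordings land, so the inventory hovers just under the configured limit.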

I'm really pressing limits as my Reolink cameras feed 2560 x {1440,1920} at 30 frames per second.  Each 2-minute segment is:
  • Reolink RLC-420 [4 megapixel]: 59 MB
  • Reolink RLC-420-5MP: 48 MB
So at their highest settings all four cameras are consuming about 110 MB per minute, or 6.6 GB per hour, or ~160 GB/day.  That's about the same as the Windows Reolink client generates and which I review manually using Windows Explorer (+ VLC for the ones I actually view) using the K-Lite Codec Pack codecs and its utility, the Codec Tweak Tool.
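The arithmetic behind those figures, using the two segment sizes quoted above (the exact totals come out slightly under the rounded numbers in the text):

```python
# Storage arithmetic for four cameras: two 59 MB and two 48 MB
# segments per 2-minute interval.
segment_mb = [59, 59, 48, 48]        # MB per 2-minute segment, one entry per camera
mb_per_min = sum(segment_mb) / 2     # 107 MB/min (~110 as rounded above)
gb_per_hour = mb_per_min * 60 / 1000 # ~6.4 GB/hour
gb_per_day = gb_per_hour * 24        # ~154 GB/day
print(mb_per_min, gb_per_hour, gb_per_day)
```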

110 MB/minute (aka 15 Mbit/sec) in total (or twice that) should be totally fine. Are you seeing signs of strain? That'd be surprising. My Raspberry Pi 4 setup handles up to 31 Mbit/sec across all cameras with no apparent problems. It's using <40% CPU now (out of 400%, so <10% of the total machine). I'd expect any Intel-based system to be fine with any reasonable number of cameras.

Note I said "up to 31 Mbit/sec" because my cameras are configured for VBR (variable bit rate). I'm not sure if they ever all hit their max configured bit rate at once, but it should be fine if they do.



--

John L. Poole

unread,
May 26, 2020, 12:42:59 PM
to Scott Lamb, moonfire-nvr-users

I'm taking this snippet from a topic, "My Experience Installing Moonfire NVR on Gentoo Linux systems", which I posted initially on this forum with many items, and isolating this topic under a new heading so discussion about AI and videos can be organized here.

On 5/25/2020 11:34 PM, Scott Lamb wrote:
 
  I'm getting a lot of false positives from wind and sudden changes in light.  I'd like to explore detection, but the first step is to capture and preserve a database of samples, then learn about detection and iteratively test the detection algorithms.  I wanted to use AI, but estimated that doing so would cost hundreds, if not thousands, of dollars in electricity alone to develop a model, so creating an AI model at this time is simply out of my league.

How did you arrive at that figure? Is that for a model trained from scratch?

The high dollar amount is for making a model, not using a pre-built model.

I looked into AI in April of 2019.  I had done some test processing and realized how little horsepower my i7 workstation with 32 GB RAM had.  Here's a link to a posting about hardware and costs:  https://l7.curtisnorthcutt.com/build-pro-deep-learning-workstation  See toward the bottom of the posting at "GCE Cost per Epoch".

The metric that blew me away was "ImageNet for 100 epochs would cost around $1277".

A thought I had while reviewing items on IPVM (a very informative subscription service at $200/year) was that vendors such as Bosch might go into the business of creating customized models for their customers who have fixed cameras.  So the idea of building a model based on actual data from the camera whose output the model would be applied to was inviting.


--

John Laurence Poole
1566 Court ST NE
Salem OR 97301-4241
707-812-1323 office

Scott Lamb

unread,
Jun 6, 2020, 12:10:12 PM
to John, moonfire-nvr-users
On Tue, May 26, 2020 at 9:42 AM John L. Poole <jlpo...@gmail.com> wrote:

I'm taking this snippet from a topic, "My Experience Installing Moonfire NVR on Gentoo Linux systems", which I posted initially on this forum with many items, and isolating this topic under a new heading so discussion about AI and videos can be organized here.

On 5/25/2020 11:34 PM, Scott Lamb wrote:
 
  I'm getting a lot of false positives from wind and sudden changes in light.  I'd like to explore detection, but the first step is to capture and preserve a database of samples, then learn about detection and iteratively test the detection algorithms.  I wanted to use AI, but estimated that doing so would cost hundreds, if not thousands, of dollars in electricity alone to develop a model, so creating an AI model at this time is simply out of my league.

How did you arrive at that figure? Is that for a model trained from scratch?

The high dollar amount is for making a model, not using a pre-built model.

I looked into AI in April of 2019.  I had done some test processing and realized how little horsepower my i7 workstation with 32 GB RAM had.  Here's a link to a posting about hardware and costs:  https://l7.curtisnorthcutt.com/build-pro-deep-learning-workstation  See toward the bottom of the posting at "GCE Cost per Epoch".

The metric that blew me away was "ImageNet for 100 epochs would cost around $1277".

Thanks for the link. Yeah, that seems prohibitive, particularly when, as a newbie to machine learning, I'm likely to screw up the first several rounds, and there aren't too many interested folks yet to split costs.

Fortunately, I don't think training from scratch is the only option for machine learning. There's also:
  • just using a pre-trained model. I've tried this and the results are mediocre. But https://github.com/wb666greene/AI-Person-Detector is interesting. It doesn't just downscale the whole frame to the model size (300x300 pixels) and feed it to the model. It does that as a first step, then feeds zoomed-in versions of the identified regions back to the model so it can "see" them more closely. Apparently this does decently well at reducing false positives. Similarly, I read somewhere that when you want more resolution than your model supports, a common technique is to feed it overlapping tiles taken from your source image.
  • transfer learning. This seems most promising to me. I don't have numbers, but my understanding is it doesn't need a super-expensive hardware setup, an enormous amount of time/CPU, or nearly as many labeled images to work from. You can adapt it to your own conditions and labels, and perhaps also to a different image size.
  • federated learning. This seems like a slick, privacy-aware option for products with lots of users and a UI for easily correcting mislabeled data. Moonfire NVR probably won't ever get a large user count, but it's interesting to read about and a demonstration that you can do some training on low-cost hardware.
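The overlapping-tile technique mentioned in the first bullet is just coordinate math; here's a sketch (no ML library involved, and it assumes the frame is at least as large as one tile):

```python
# Sketch: cover a large frame with fixed-size model inputs (e.g. 300x300)
# that overlap, so no object is lost on a tile boundary.

def tile_origins(frame_w, frame_h, tile=300, overlap=50):
    """Return (x, y) top-left corners of tile-sized windows covering the frame."""
    step = tile - overlap
    xs = list(range(0, frame_w - tile + 1, step))
    ys = list(range(0, frame_h - tile + 1, step))
    # Make sure the right and bottom edges are covered.
    if xs[-1] != frame_w - tile:
        xs.append(frame_w - tile)
    if ys[-1] != frame_h - tile:
        ys.append(frame_h - tile)
    return [(x, y) for y in ys for x in xs]
```

Each tile is then cropped, run through the detector separately, and the per-tile detections are mapped back into frame coordinates (typically with some de-duplication of boxes that appear in two overlapping tiles).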
I'd love to have some public, easy-to-contribute-to database of training data to use for transfer learning. It'd hold images or video snippets captured on home security cameras in a variety of conditions (ideally labelled as such):
  • night (IR or extreme low-light) or day
  • weather: sunny, overcast, raining/snowing, wind
  • lighting: back-lit, shadows
  • spiderwebs or flying insects in front of the camera
  • noisy images (I have varying quality cameras...)
  • different camera angles
  • indoor/outdoor
  • ...
which feature a variety of people and detection conditions:
  • sizes (toddlers up to adults)
  • race/skin color
  • clothes (including masks these days...)
  • gaits
  • distance
  • ...
as well as objects that have been falsely detected as people, animals if folks want them to be detected, etc.

A thought I had while reviewing items on IPVM (a very informative subscription service at $200/year) was that vendors such as Bosch might go into the business of creating customized models for their customers who have fixed cameras.  So the idea of building a model based on actual data from the camera whose output the model would be applied to was inviting.


--

John Laurence Poole
1566 Court ST NE
Salem OR 97301-4241
707-812-1323 office
