Answering Some Early FAQs


Evaluation of Latent Friction Ridge Technology (ELFT)

Apr 30, 2021, 5:02:17 PM
to Evaluation of Latent Friction Ridge Technology (ELFT)

Hello potential ELFT participants,

We've fielded several questions about our evaluation plans, so we thought it might be good to share our thoughts with the wider group. Some of these questions have resulted in updates to our GitHub repository, so please make sure you are up to date.

Limited Scope to Start

The required API is flexible enough to accommodate many kinds of data and scenarios. However, we will not be exercising all of those possible scenarios at first. What we have committed to our sponsor for initial analysis includes:

  • Probes: Single Latent (Test Plan, Section 3.1.1) 
  • Database: Plain, rolled, and palm (i.e., no latent)
    • The size of the database will begin at ~1.8m subjects with an average of 10-20 regions (mostly fingers) each. For example, a typical subject might have 10 rolled prints and left, right, and thumb slap images. We plan to grow the database to 5m or more such subjects.

While you should support all functionality (e.g., latents in the database, multi-latent searches, etc.), we guarantee that we will not evaluate those scenarios for at least the next 3 months. This means you could submit a minimal implementation now, so long as a new submission is in place by the time those scenarios are exercised.
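As an illustration, a minimal submission might decline the scenarios that will not be exercised yet. The sketch below is hypothetical: the names (Result, SearchResult, search) are illustrative stand-ins, not the actual ELFT API.

```cpp
#include <string>
#include <vector>

// Illustrative status codes; not the actual ELFT return types.
enum class Result { Success, NotImplemented };

struct SearchResult {
    Result status;
    std::vector<std::string> candidates; // candidate subject IDs
};

// A minimal implementation supports only the single-latent probes that
// will be exercised initially, and declines everything else until a
// fuller submission replaces it.
SearchResult search(const std::vector<std::string> &probeImages)
{
    if (probeImages.size() != 1)
        return {Result::NotImplemented, {}};
    // ... single-latent search against the enrollment database ...
    return {Result::Success, {}};
}
```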

Database Access

We've also had a few questions regarding the enrollment database. Our intent was to have participants write any desired database structure to disk, and then NIST would copy that data to a RAM disk for fast, format-agnostic access during search. However, recent feedback suggests that some participants are struggling to compact their databases to fit into RAM.

The current minimum RAM on our evaluation systems is 128 GB. We plan to provide a maximum of 96 GB (75%) for the database RAM disk, with the remaining 32 GB for the OS and the many dozens of your fork()ed calls to search().

If you will be unable to fit ~1.8m subjects with 20 regions each into 96 GB and call search() within 32 GB, or if you will be unable to create your database with ~1.8-5m such templates loaded into a vector controlled by NIST (i.e., via createReferenceDatabase()), please let us know immediately, along with your expected RAM usage. We do have some machines with more RAM, but their quantities are limited.
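As a rough sizing aid, here is the back-of-the-envelope byte budget implied by those numbers. This is only a sketch: the helper names are ours, and "96 GB" is taken as 96 GiB.

```cpp
#include <cstdint>

// 96 GiB RAM disk divided across the enrollment database.
constexpr std::uint64_t ramDiskBytes = 96ull << 30; // 96 GiB

constexpr std::uint64_t bytesPerSubject(std::uint64_t subjects)
{
    return ramDiskBytes / subjects;
}

constexpr std::uint64_t bytesPerRegion(std::uint64_t subjects,
                                       std::uint64_t regionsPerSubject)
{
    return bytesPerSubject(subjects) / regionsPerSubject;
}

// At ~1.8m subjects x 20 regions: ~57 KB per subject, ~2.9 KB per region.
// At ~5m subjects x 20 regions:   ~21 KB per subject, ~1 KB per region.
```

In other words, every region template must average roughly 1-3 KB on the RAM disk across the planned database sizes, which is the constraint driving the feedback request above.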

Persisting Data

A clarification was noted on GitHub: calls to insert() and remove() shall persist their changes. The database directory will be read/write when these methods are called. The intent is to limit the number of potentially expensive calls to createReferenceDatabase() that NIST needs to make. It seems that for some implementations, these modification methods may be just as expensive; please let us know if this is the case for you.
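To illustrate the intent, here is one hypothetical persistence scheme: insert() writes its template into the read/write database directory and remove() deletes it, so changes survive without a full rebuild via createReferenceDatabase(). The class name and one-file-per-subject layout are assumptions for illustration only, not the ELFT API.

```cpp
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

class ReferenceDatabase {
public:
    explicit ReferenceDatabase(fs::path dir) : dir_(std::move(dir))
    {
        fs::create_directories(dir_);
    }

    // Persist the template to the database directory immediately; an
    // in-memory index could also be updated here for fast search.
    bool insert(const std::string &subjectID, const std::vector<char> &tmpl)
    {
        std::ofstream out(dir_ / (subjectID + ".tmpl"), std::ios::binary);
        out.write(tmpl.data(), static_cast<std::streamsize>(tmpl.size()));
        return out.good();
    }

    // Delete the persisted template so the change survives restart.
    bool remove(const std::string &subjectID)
    {
        return fs::remove(dir_ / (subjectID + ".tmpl"));
    }

    bool contains(const std::string &subjectID) const
    {
        return fs::exists(dir_ / (subjectID + ".tmpl"));
    }

private:
    fs::path dir_;
};
```

If your modification methods are as expensive as a full rebuild, that is exactly the feedback being requested.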

Again, please contact el...@nist.gov if you think you are affected by these database restrictions. If we need to change the API to enable easier participation, we’d like to do it as soon as possible.

Thank you!

-Greg (@nist.gov)

