Has anyone found the upper bound of how much data Trunking Recorder can handle? I’ve got a number of large systems to monitor… with 3 months of retention, I’ve got more than 9 million entries and a SQLite database north of 3GB. Importing has become quite slow, despite being on a 32-core Ryzen beast. Have I just found the limit? Anyone played with going to SQL Server for a backend vs. SQLite?
Thanks,
Terry
I'm using SDRTrunk to monitor 7 separate systems with a total of 17 sites and have just over 2 years of recordings. The only performance issue I have with TR is my desire for it to locate and return a larger number of calls. If I perform an all-date search and look for a specific TG or RID, the more calls I ask TR to look for, the longer the search takes. Simple enough, but I wish there were a way to set a date range in a search because some of my searches take a while. If I want to go back 2 years in time, I have to set the max calls to a large number so the search will go back far enough.
I'm running a Ryzen 9 with 64GB memory and TR files are being saved on a 1TB M.2 stick. My SQLite db is currently at 20.3GB.
If I set max calls to 25,000, it takes about 2 1/2 minutes for the "ALL" date search to complete. If I reduce the Search Max Calls to 5,000, the same search only takes 18 seconds. If I search only a specific day it takes less than 3 seconds.
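The date-range search being wished for here maps to a simple bounded WHERE clause in SQLite; a minimal sketch using Python's sqlite3 module (the `calls` table and its columns are assumptions, not Trunking Recorder's actual schema):

```python
import sqlite3

# Hypothetical schema -- Trunking Recorder's real table/column names may differ.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (id INTEGER PRIMARY KEY, talkgroup INTEGER, calltime TEXT)")
con.executemany(
    "INSERT INTO calls (talkgroup, calltime) VALUES (?, ?)",
    [(101, "2021-03-01 12:00:00"),
     (101, "2022-06-15 08:30:00"),
     (202, "2022-06-15 09:00:00"),
     (101, "2023-01-10 17:45:00")],
)

# A date-range predicate bounds the work up front, instead of walking
# newest-first until "max calls" rows have been collected.
rows = con.execute(
    "SELECT id, calltime FROM calls "
    "WHERE talkgroup = ? AND calltime BETWEEN ? AND ? "
    "ORDER BY calltime DESC",
    (101, "2022-01-01 00:00:00", "2022-12-31 23:59:59"),
).fetchall()
print(rows)  # only the 2022 call for talkgroup 101
```

ISO-8601 timestamp strings compare correctly as text, which is why a plain BETWEEN works here.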
I ended up doing the conversion to SQL Server just to see what would happen, but obviously it’ll take a while to refill the data. I do notice that, at the moment, if I enter a talkgroup, TR constantly returns zero calls. At the moment, the SQL database has 128,326 calls in it and 724 talkgroups (distinct on systemid, targetid). I am running the full edition of SQL 2022, so I won’t run into the 10GB limit.
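For anyone checking their own numbers, a "distinct on systemid, targetid" talkgroup count like the one above corresponds to a DISTINCT subquery; a minimal SQLite sketch (table and column names are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (systemid INTEGER, targetid INTEGER)")
con.executemany("INSERT INTO calls VALUES (?, ?)",
                [(1, 100), (1, 100), (1, 200), (2, 100)])

# Count distinct (systemid, targetid) pairs -- the "724 talkgroups" style figure.
(n_talkgroups,) = con.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT systemid, targetid FROM calls)"
).fetchone()
print(n_talkgroups)  # 3 distinct pairs in this sample data
```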

The system in question is a 32-core Ryzen 3970X with 64GB RAM. Storage is a mirrored pair of high-tier 2TB Samsung PCIe 4.0 x4 SSDs. I have benchmarked everything and found no issues.
I guess the next step is to nuke all of the settings and try a reinstall, see what happens. I’ve also had issues with SDRTrunk 0.5.0 final throwing errors and dying after 24 hours running – of course, that’s munching 30 control channels across 3 Airspys. Beta 6 is stable as can be.
From: sdrt...@googlegroups.com <sdrt...@googlegroups.com> On Behalf Of JasoVeen
Sent: Thursday, January 12, 2023 5:19 PM
To: sdrtrunk <sdrt...@googlegroups.com>
Subject: Re: Trunking Recorder - realistic limits
For the slow importing of SDRTrunk calls I would be curious what is in the Trunking Recorder logs. Follow the steps at https://www.scannerbox.us/TrunkingRecorder/support/ and send me your logs and I will take a look.
For large databases with multiple millions of calls, search time for the more complex "All Dates" searches can vary by a lot.
Disk performance is probably the biggest factor, since SQLite really doesn't keep much data in memory, so disk reads are a big factor.
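By default SQLite holds only a modest page cache per connection, but that is tunable; a sketch of the relevant PRAGMAs (the specific values are illustrative, not recommendations):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A negative cache_size is interpreted as KiB: roughly a 512 MB page cache.
con.execute("PRAGMA cache_size = -524288")
# Allow SQLite to memory-map up to 1 GB of the database file, so warm
# reads come from the OS page cache instead of read() system calls.
con.execute("PRAGMA mmap_size = 1073741824")

(cache_size,) = con.execute("PRAGMA cache_size").fetchone()
print(cache_size)  # -524288
```

Both settings are per-connection, so the application opening the database has to issue them; they can't be baked into the file itself.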
Moving to Microsoft SQL Server doesn't guarantee that searches will be any faster. SQL Server generally requires more hardware to get the same performance since it is a much larger program.
The free SQL Server Express does have a 10GB database size limit so that might limit how many calls you can store.
If you are willing to send me your SQLite database file I can take a look and see if there is any additional SQLite optimizations that I could add (main SQL indexes) to help improve performance.
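To make the index offer above concrete: a composite index matching the search predicate, plus EXPLAIN QUERY PLAN to confirm SQLite actually uses it, can be sketched like this (the schema is assumed, not Trunking Recorder's actual one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (id INTEGER PRIMARY KEY, talkgroup INTEGER, calltime TEXT)")
# Composite index: equality column first, then the sort/range column.
con.execute("CREATE INDEX idx_tg_time ON calls (talkgroup, calltime)")

# EXPLAIN QUERY PLAN shows whether the query seeks via the index
# or falls back to a full table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM calls WHERE talkgroup = 100 ORDER BY calltime DESC"
).fetchall()
detail = plan[0][3]
print(detail)  # e.g. 'SEARCH calls USING INDEX idx_tg_time (talkgroup=?)'
```

Because the index is ordered on (talkgroup, calltime), it also satisfies the ORDER BY without a separate sort step.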
Reach out to me at sup...@scannerbox.us and we can work out how to get me the file.
Thanks
Jason
On Thursday, January 12, 2023 at 12:29:32 PM UTC-5 hrus...@gmail.com wrote:
SDRTrunk is MUCH more resource efficient with Linux.
On Thursday, January 12, 2023 at 6:25:57 AM UTC-5 charley....@gmail.com wrote:
My first step in diagnosing a performance issue is to make sure that all drivers at both the OS and system-board levels are up to date, including the BIOS.
Side note: I've noticed that SDRT (latest version) has been using about 65% GPU resources of a GTX1080Ti when running. No waterfall.
On Wednesday, January 11, 2023 at 9:03:45 PM UTC-5 abq...@gmail.com wrote:
Here is my current TR folder.
I'd say I haven't found the limit yet.
On Tuesday, January 10, 2023 at 11:05:39 PM UTC-7 tvsjr wrote:
--
You received this message because you are subscribed to the Google Groups "sdrtrunk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sdrtrunk+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/sdrtrunk/e1287aa3-c8fc-4c55-bb42-4473bdef8657n%40googlegroups.com.
Aaaaand, the search issue is…
2023-01-12 21:19:14,790 [WebServerThread50] ERROR Trunking_Recorder.Database.MSSQL.Call.SelectSearch - Database error in SelectSearch. 'Must declare the scalar variable "@talkgroupsearch1".
Invalid usage of the option NEXT in the FETCH statement.'
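For what it's worth, both halves of that error point at the generated T-SQL rather than the data: `@talkgroupsearch1` is referenced in the query text without being declared or bound as a parameter, and in SQL Server the `OFFSET ... FETCH NEXT` paging clause is only legal after an `ORDER BY`. A hedged sketch of the syntax rule (table and column names are made up, not Trunking Recorder's actual query):

```
-- Fails: "Invalid usage of the option NEXT in the FETCH statement."
-- SELECT * FROM Calls
-- OFFSET 0 ROWS FETCH NEXT 5000 ROWS ONLY;

-- Valid: OFFSET/FETCH requires an ORDER BY in SQL Server.
SELECT * FROM Calls
ORDER BY CallTime DESC
OFFSET 0 ROWS FETCH NEXT 5000 ROWS ONLY;
```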