I have a question for those working with Harbour and PostgreSQL (or other SQL backends).
Is anyone here actually using TBrowse (or another browser control) with live SQL data, meaning without loading the entire table into an array first?
Most of the examples I see rely on:
- loading the result of SELECT * FROM ... into a local array (aData[]),
- then browsing that array locally (so the data is "frozen" at the time of loading).
In my case, I need the data to be always current – for example, if someone adds/edits/deletes a record, other users should see that immediately in the browser.
So I can't rely on a static data buffer.
I've already implemented my own Skip() logic using LIMIT 1 and keyset pagination, and it's functional – but performance is very poor when TBrowse tries to render 50+ rows, because it fetches each record individually, which is expensive in SQL.
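To make it concrete, one forward skip currently ends up as roughly the following (a simplified sketch using libpq-style wrappers such as PQexec()/PQgetvalue(); table and column names are placeholders, and parameter binding / error handling are omitted):

FUNCTION SqlSkipNext( pConn, cLastDate, cLastInvo )
   // Fetch the single row that follows the current keyset position.
   // "documents", "data", "no_invo", "descr" are placeholder names.
   // The backward skip is the mirror image: "<" plus ORDER BY ... DESC.
   LOCAL pRes, aRow
   LOCAL cSql := "SELECT data, no_invo, descr FROM documents " + ;
      "WHERE (data, no_invo) > ('" + cLastDate + "', '" + cLastInvo + "') " + ;
      "ORDER BY data, no_invo LIMIT 1"

   pRes := PQexec( pConn, cSql )
   IF PQntuples( pRes ) == 1
      // row/column indexes are 1-based here (hbpgsql style); adjust for your wrapper
      aRow := { PQgetvalue( pRes, 1, 1 ), PQgetvalue( pRes, 1, 2 ), PQgetvalue( pRes, 1, 3 ) }
   ENDIF
   PQclear( pRes )
   RETURN aRow   // NIL at end of data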
So my questions are:
Has anyone successfully implemented TBrowse (or similar) with live SQL access, without full buffering, and achieved acceptable performance?
Or alternatively, if you're using buffered arrays – what are your practical strategies for keeping the data up to date for the user?
For example:
- Periodic refresh every few seconds?
- Triggers with a dirty flag?
- Timestamp comparisons? (rough sketch of what I mean below)
- PostgreSQL LISTEN/NOTIFY?
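For the timestamp variant, for example, I picture something as cheap as polling a single "last change" watermark and reloading the array only when it changes (illustration only, not tested; "documents" and "updated_at" are placeholder names):

FUNCTION LastChangeStamp( pConn )
   // Cheap watermark poll; note that a plain MAX() will not notice deletes,
   // so a trigger-maintained stamp table would be needed to cover that case.
   LOCAL pRes, cStamp := ""

   pRes := PQexec( pConn, "SELECT COALESCE(MAX(updated_at)::text, '') FROM documents" )
   IF PQntuples( pRes ) == 1
      cStamp := PQgetvalue( pRes, 1, 1 )
   ENDIF
   PQclear( pRes )
   RETURN cStamp

// in the idle loop:
//    cNow := LastChangeStamp( pConn )
//    IF !( cNow == cSeenStamp )
//       cSeenStamp := cNow
//       ReloadArrayAndRefreshBrowse()   // hypothetical reload helper
//    ENDIF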
I'd love to hear about your experiences or ideas. Thanks in advance.
Best regards,
Damian
Hi Damian.
I hope I didn't miss the point. I do not work with a GUI, just plain text mode.
Long ago I found (and still use) a function that does what you describe, but browsing a DBF file.
It is not based on TBrowse but on the PROMPT and DO WHILE commands.
I am sending a working test PRG and a test file. At any moment it displays as many records as fit on one screen. I can move forward and backward with the PgDn and PgUp keys, reading and displaying another portion of records. On screen I can move to the desired record with the arrow keys; when I see the record I want, I select it and press Enter.
While we are looking at the current records on screen we cannot see their modification in the real file, but the next PgDn or PgUp will read the freshest data.
This way I do not load the whole file at once, only the portion of records that fits on one screen.
I have not used it with SQL databases, but I suppose it can be modified to work with them.
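Very roughly, the skeleton of that function is like this (a simplified sketch from memory, not my exact PRG; the DBF is already open in the current work area and NAME is only a placeholder field):

#include "inkey.ch"

FUNCTION PickRecord( nRowsPerPage )
   // Show one screenful of records as PROMPTs, let MENU TO drive the
   // highlight bar, and page with PgUp/PgDn re-reading the file each time.
   LOCAL nTopRec, nChoice, nRow, nPicked := 0

   IF LASTREC() == 0
      RETURN 0
   ENDIF
   GO TOP
   DO WHILE .T.
      nTopRec := RECNO()                     // first record of the current page
      CLEAR SCREEN
      nRow := 0
      DO WHILE nRow < nRowsPerPage .AND. ! EOF()
         @ nRow, 0 PROMPT PADR( FIELD->NAME, 30 )
         nRow++
         SKIP
      ENDDO
      MENU TO nChoice

      DO CASE
      CASE LASTKEY() == K_ESC
         EXIT
      CASE LASTKEY() == K_PGUP
         GO nTopRec
         SKIP -nRowsPerPage                  // re-read the previous page from the file
      CASE LASTKEY() == K_PGDN
         IF EOF()                            // already on the last page, redisplay it
            GO nTopRec
         ENDIF
      CASE nChoice > 0                       // Enter on the highlighted record
         GO nTopRec
         SKIP nChoice - 1
         nPicked := RECNO()
         EXIT
      ENDCASE
   ENDDO
   RETURN nPicked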
Maybe this can give you some new ideas.
Regards,
Simo.
Hi Francesco,
Thanks for the detailed questions – they help clarify a lot.
I already use a custom modified TBrowse class that works well with DBFs and now with SQL too.
Each time I move between rows (arrow keys, PgUp/PgDn), the record I leave and the one I enter are refreshed directly from the database.
After exiting an operation (Enter), the current record is also refreshed.
When the user is idle (waiting for INKEY()), the full visible set of rows in TBrowse is refreshed every 5 seconds.
So by "immediately" I mean:
- live row updates on navigation and user action,
- a full refresh every ~5 seconds when idle – this is enough for my use case.
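In code terms, the idle part of the loop is essentially this (simplified; RefreshVisibleFromSql() stands in for my own routine that re-reads the rows currently on screen):

#include "inkey.ch"

PROCEDURE BrowseLoop( oBrw )
   LOCAL nKey

   DO WHILE .T.
      oBrw:ForceStable()
      nKey := Inkey( 5 )                  // wait up to 5 seconds for a key

      IF nKey == 0                        // idle: nothing pressed within 5 s
         RefreshVisibleFromSql( oBrw )    // hypothetical helper, re-reads visible rows
         oBrw:RefreshAll()
         LOOP
      ENDIF

      DO CASE
      CASE nKey == K_ESC
         EXIT
      CASE nKey == K_DOWN
         oBrw:Down()
      CASE nKey == K_UP
         oBrw:Up()
      CASE nKey == K_PGDN
         oBrw:PageDown()
      CASE nKey == K_PGUP
         oBrw:PageUp()
      OTHERWISE
         // Enter, editing, sorting, searching etc. handled as before
      ENDCASE
   ENDDO
   RETURN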
The TBrowse window usually shows about 70 rows at once – my terminal size is typically 260x80 characters.
The base table I work with has around 15,000 records, and I have related tables used for footer data (via a DBF relation or separate lookups).
In normal operation, there are just a few changes per minute (inserts/updates/deletes), but under heavy use there can be dozens per minute from multiple users.
Users can sort (3 criteria), filter, search, and also edit records, so the browser needs to stay interactive and responsive.
I’ve implemented my own Skip() logic using LIMIT 1 and keyset pagination (based on indexed keys). It works, but fetching 50–70 rows individually is slow and doesn't scale well.
I haven’t used LISTEN/NOTIFY in production yet, but I plan to add support for it directly into the hbpqlib library I’m working with.
I have the same dilemma as you regarding the move from DBF to SQL – my coding style and logic don’t map 1:1 either.
But I think I’ll accept the cost and effort of the transition in order to benefit from PostgreSQL features like transactions, indexing, and server-side query optimization.
Thanks again!
Best regards,
Damian
Hi Francesco,
Here are my detailed answers to your last set of questions, based on what I’ve measured and tested so far.
Your TBrowse shows between 50 and 70 rows on screen, but how many rows can the SELECT return? How long does it take to return the results?
I don't fetch the full table at once. I implemented my own Skip() logic using keyset pagination with a compound key (e.g., data, no_invo).
Each skip involves an indexed SELECT ... WHERE (key) > (...) ORDER BY ... LIMIT 1.
A single-row fetch (including PQgetvalue on 3 fields) takes on average ~0.040 ms, which is very fast.
However, the way TBrowse operates is problematic:
- When moving a page (e.g., PgDn), it calls Skip() for each row — from 1 to 70 — and then again in reverse (e.g., from 70 down to 9) to place the highlight bar correctly.
- This results in double the expected number of fetches, e.g., 140 queries for a 70-row movement.
So even if each query is fast, the cumulative cost adds up, and that’s where performance degrades.
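If anyone wants to see the effect on their own data, counting the skip-block calls is trivial (throwaway instrumentation, nothing more):

STATIC s_nSkipCalls := 0   // file-wide static, declared before the first function

FUNCTION CountingSkipBlock( bRealSkip )
   // Wrap the existing skip block and count how many times TBrowse
   // evaluates it while stabilising a single page movement.
   RETURN {| nSkip | s_nSkipCalls++, Eval( bRealSkip, nSkip ) }

// usage:
//    oBrw:SkipBlock := CountingSkipBlock( oBrw:SkipBlock )
//    ... press PgDn, let the browse stabilise ...
//    ? s_nSkipCalls      // with ~70 visible rows this lands well above 70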
The "refresh every 5 seconds" is quite easy; the refresh at cursor move is a bit nastier, but it depends on the time you need to refresh the data from SQL.
Exactly — the refresh every 5 seconds (during INKEY idle loop) is simple and acceptable.
The real issue is the cursor-based navigation, where TBrowse triggers excessive SQL queries just to display one page.
That’s not because SQL is slow, but because the TBrowse design assumes fast, sequential, row-by-row access such as a DBF provides.
How much is the cost of a missed screen update? No problem or 1 billion dollar fee?
Not critical in terms of money, but practically important.
This system is used in warehouse/logistics (WMS).
If a user updates a document or stock level at one station, others expect to see it shortly afterward.
So I need the data to reflect live changes within a few seconds, otherwise operators start second-guessing what’s real.
Do you control the software that writes to SQL?
Yes, I control both the reader (browser) and the writer (form logic, importers, barcode scans, etc.).
So I can freely add triggers, last_modified timestamps, or even implement LISTEN/NOTIFY support if needed — and in fact I plan to integrate LISTEN/NOTIFY directly into the hbpqlib I’m using.
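On the server side the notification part is nothing more than a trigger, roughly like this (placeholder table and channel names, executed once at setup; on PostgreSQL older than 11 use EXECUTE PROCEDURE instead of EXECUTE FUNCTION):

PROCEDURE InstallChangeNotify( pConn )
   // One-time setup: a statement-level trigger that pushes a NOTIFY on every
   // change to the (placeholder) "documents" table. Error handling omitted.
   PQclear( PQexec( pConn, ;
      "CREATE OR REPLACE FUNCTION notify_documents_changed() RETURNS trigger AS $$ " + ;
      "BEGIN PERFORM pg_notify('documents_changed', TG_OP); RETURN NULL; END; " + ;
      "$$ LANGUAGE plpgsql" ) )

   PQclear( PQexec( pConn, ;
      "CREATE TRIGGER trg_documents_changed " + ;
      "AFTER INSERT OR UPDATE OR DELETE ON documents " + ;
      "FOR EACH STATEMENT EXECUTE FUNCTION notify_documents_changed()" ) )
   RETURN

Each client would then LISTEN on documents_changed and refresh the visible rows when a notification arrives; that receiving side (libpq's PQconsumeInput()/PQnotifies()) is the part I want to wrap in the library.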
Can't you populate a DBF from the SQL query and TBrowse() the DBF?
Yes — technically I could populate a temporary DBF, add indexes, and let TBrowse fly over it.
But that breaks the “live data” requirement. I’d have to:
- periodically re-query and rebuild the DBF,
- manage synchronization,
- and still deal with stale views between updates.
There are tons of possibilities; writing a brand-new, custom browse specifically for this use case may also be an option.
Yes — and honestly, I'm getting closer to that conclusion. TBrowse is a beautiful tool for DBF-based systems, but it doesn't scale well in a set-based SQL world.
So I'm now seriously considering building a custom paged grid:
- fetch 70 rows at a time (LIMIT 70 with keyset or offset; sketched below),
- store in a memory buffer,
- render via simpler logic (not SkipBlock),
- refresh every X seconds or on LISTEN/NOTIFY.
This would give me:
- near-DBF scrolling speed,
- live data (within a short window),
- and no need for per-row SQL fetches during rendering.
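The page fetch itself would then be a single query per screen, roughly like this (same placeholder names and caveats as in my first post):

FUNCTION FetchPage( pConn, cLastDate, cLastInvo, nPageRows )
   // One query per page instead of one query per row; pass the keyset
   // position just before the page (or drop the WHERE for the first page).
   LOCAL pRes, aPage := {}, nRow

   pRes := PQexec( pConn, ;
      "SELECT data, no_invo, descr FROM documents " + ;
      "WHERE (data, no_invo) > ('" + cLastDate + "', '" + cLastInvo + "') " + ;
      "ORDER BY data, no_invo LIMIT " + hb_ntos( nPageRows ) )

   FOR nRow := 1 TO PQntuples( pRes )
      AAdd( aPage, { PQgetvalue( pRes, nRow, 1 ), ;
                     PQgetvalue( pRes, nRow, 2 ), ;
                     PQgetvalue( pRes, nRow, 3 ) } )
   NEXT
   PQclear( pRes )
   RETURN aPage   // 70 rows cost one round trip instead of 70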
Damian
Hi
I use Harbour, SQLRDD, and SQL Server 2019.
Excellent speed and integrity.
I have the same problem: real-time lookups while browsing.
My product table contains 15,000 records.
I search for a specific word.
When the cursor moves, the related data (purchases, sales, and batches) is read and updated on the screen in real time.
Each of these lookups is performed with its own query.
- Purchases [F9]
- Sales [F10]
- Products
- Batch
Can this be done in the same record when the cursor moves?
Video attached.
Regards!
https://mega.nz/file/xzIByIiL#TvEH_4Z3Hivty-lrFNWC5SnYfyOPH_gEmvH8geJOQoM