TBrowse with live SQL data – anyone using it without full table buffering?


Damian

Apr 13, 2025, 8:59:15 PM
to Harbour Users
Hi all,

I have a question for those working with Harbour and PostgreSQL (or other SQL backends).

Is anyone here actually using TBrowse (or another browser control) with live SQL data, meaning without loading the entire table into an array first?

Most of the examples I see rely on:

  • loading the result of SELECT * FROM ... into a local array (aData[]),

  • then browsing that array locally (so the data is "frozen" at the time of loading).

In my case, I need the data to be always current – for example, if someone adds/edits/deletes a record, other users should see that immediately in the browser.
So I can't rely on a static data buffer.

I've already implemented my own Skip() logic using LIMIT 1 and keyset pagination, and it's functional – but performance is very poor when TBrowse tries to render 50+ rows, because it fetches each record individually, which is expensive in SQL.
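To show the shape of it (heavily simplified, not my real code; SqlFetchNext()/SqlGoTop()/SqlGoBottom() are just placeholder names for the routines that run the keyset LIMIT 1 queries and fill s_aCurRow):

  // s_aCurRow holds the values of the row most recently fetched by the helpers
  STATIC s_aCurRow := { NIL, NIL, NIL }

  PROCEDURE BrowseLive()
     LOCAL oBrw := TBrowseNew( 2, 0, MaxRow() - 1, MaxCol() )
     oBrw:skipBlock     := {| nSkip | SqlSkipper( nSkip ) }
     oBrw:goTopBlock    := {|| SqlGoTop() }      // placeholder: SELECT ... ORDER BY key LIMIT 1
     oBrw:goBottomBlock := {|| SqlGoBottom() }   // placeholder: SELECT ... ORDER BY key DESC LIMIT 1
     oBrw:addColumn( TBColumnNew( "Date",    {|| s_aCurRow[ 1 ] } ) )
     oBrw:addColumn( TBColumnNew( "Invoice", {|| s_aCurRow[ 2 ] } ) )
     // ... main key loop, edits, etc. ...
     RETURN

  STATIC FUNCTION SqlSkipper( nSkip )
     LOCAL nDone := 0, nStep := iif( nSkip >= 0, 1, -1 )
     // one keyset SELECT ... LIMIT 1 per step; TBrowse may ask for dozens
     // of these just to repaint a single page
     DO WHILE nDone != nSkip .AND. SqlFetchNext( nStep )
        nDone += nStep
     ENDDO
     RETURN nDone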

So my questions are:

  • Has anyone successfully implemented TBrowse (or similar) with live SQL access, without full buffering, and achieved acceptable performance?

  • Or alternatively, if you're using buffered arrays – what are your practical strategies for keeping the data up to date for the user?
    For example:

    • Periodic refresh every few seconds?

    • Triggers with a dirty flag?

    • Timestamp comparisons?

    • PostgreSQL LISTEN/NOTIFY?

I'd love to hear about your experiences or ideas. Thanks in advance.

Best regards,
Damian


cod...@outlook.com

Apr 14, 2025, 9:42:23 AM
to Harbour Users

Hi Damian.

I hope I haven't missed the point. I do not work with a GUI, just plain text mode.

Long ago I found a function that does what you describe, and I still use it, but it browses a DBF file.

It is not based on TBrowse but on the PROMPT and DO WHILE commands.

I am sending a working test PRG and a test file. It displays as many records as fit on one screen. I can page forward and backward with the PgDn and PgUp keys, reading and displaying another portion of records each time. On screen I can move to the desired record with the arrow keys and select it with Enter.

While looking at the current records on screen we cannot see their modification in the real file, but the next PgDn or PgUp will read the freshest data.

This way I do not load the whole file at once, only the portion of records that fits on one screen.

I have not used it with SQL databases, but I suppose it can be modified to work with them.

Maybe this can give you some new idea.

Regards,

Simo.
file1.dbf
browse1.prg

Francesco Perillo

Apr 14, 2025, 2:39:36 PM
to harbou...@googlegroups.com
SQL is a different beast from DBF files: a very different way of handling data. With SQL you get a recordset as the answer, and the recordset is like a snapshot, a "picture" of the data at that moment.

Anyway, your question is really interesting.

I think the main requirement is: "In my case, I need the data to be always current – for example, if someone adds/edits/deletes a record, other users should see that immediately in the browser."

I have some questions to try to pin down the problem:
- Do you already have your routine working with DBFs? How do you notice when the table changes? TBrowse only recognizes data changes when it is forced to repaint everything or when moving from one row to another; it doesn't refresh the data automatically.

- Can you quantify that "immediately"? Can it be 10 seconds? 1 second? 1 ms?

- How many rows are usually in the TBrowse()? 10 or 10000?

- How long does it take the SELECT to retrieve the full data?

- How many rows do you expect to be changed every time the query is run?

- Do you need to interact with the TBrowse(), e.g. paginate, press F keys to select, open, whatever...


Anyway, I once wrote some code for Harbour+hbQt with two threads: one did a long search, populating an array shared between the threads, and that array fed a sort of TBrowse() that could be used while the job continued. But my case was a growing-only table.

If you need an ASAP update, LISTEN/NOTIFY is probably the way, but I don't know if it is supported by the driver. All the other options are valid.

Please let me know !

I'm willing to move all my code from DBF to SQL, but my coding style and use of DBF don't map cleanly to SQL... for some operations it would be great (transactions and queries optimized by the server), but I'd need to radically change some UI interfaces and I don't know if my users will be happy.

Francesco




Damian

Apr 14, 2025, 5:38:55 PM
to Harbour Users
Thank you, Simo. I’ll review the code and maybe I’ll be able to implement it with SQL. By the way, I also use text mode.

Damian

Apr 14, 2025, 5:46:32 PM
to Harbour Users

Hi Francesco,

Thanks for the detailed questions – they help clarify a lot.

  1. I already use a custom modified TBrowse class that works well with DBFs and now with SQL too.

    • Each time I move between rows (arrow keys, PgUp/PgDn), the record I leave and the one I enter are refreshed directly from the database.

    • After exiting an operation (Enter), the current record is also refreshed.

    • When the user is idle (waiting for INKEY()), the full visible set of rows in TBrowse is refreshed every 5 seconds (see the sketch after this list).

  2. So by "immediately" I mean:

    • Live row updates on navigation and user action,

    • Full refresh every ~5 seconds when idle – this is enough for my use case.

  3. The TBrowse window usually shows about 70 rows at once – my terminal size is typically 260x80 characters.

  4. The base table I work with has around 15,000 records, and I have related tables used for footer data (via relation (DBF) or separate lookups).

  5. In normal operation, there are just a few changes per minute (inserts/updates/deletes), but under heavy use there can be dozens per minute from multiple users.

  6. Users can sort (3 criteria), filter, search, and also edit records, so the browser needs to stay interactive and responsive.
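For reference, the idle refresh from point 1 is roughly this (simplified; RefreshVisibleRows() is just a placeholder name for the routine that re-reads the on-screen rows from PostgreSQL):

  #include "inkey.ch"

  PROCEDURE BrowseLoop( oBrw )
     LOCAL nKey
     DO WHILE .T.
        oBrw:forceStable()
        nKey := INKEY( 5 )            // wait at most 5 seconds for a key
        DO CASE
        CASE nKey == 0                // timeout -> idle refresh
           RefreshVisibleRows( oBrw ) // placeholder: re-read the visible rows from SQL
           oBrw:refreshAll()
        CASE nKey == K_DOWN
           oBrw:down()
        CASE nKey == K_UP
           oBrw:up()
        CASE nKey == K_PGDN
           oBrw:pageDown()
        CASE nKey == K_PGUP
           oBrw:pageUp()
        CASE nKey == K_ESC
           EXIT
        ENDCASE
     ENDDO
     RETURN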

I’ve implemented my own Skip() logic using LIMIT 1 and keyset pagination (based on indexed keys). It works, but fetching 50–70 rows individually is slow and doesn't scale well.

I haven’t used LISTEN/NOTIFY in production yet, but I plan to add support for it directly into the hbpqlib library I’m working with.

I have the same dilemma as you regarding the move from DBF to SQL – my coding style and logic don’t map 1:1 either.
But I think I’ll accept the cost and effort of the transition in order to benefit from PostgreSQL features like transactions, indexing, and server-side query optimization.

Thanks again!
Best regards,
Damian

Francesco Perillo

Apr 14, 2025, 6:24:43 PM
to harbou...@googlegroups.com
OK, so please answer two more questions:

Your TBrowse shows between 50 and 70 rows on screen, but how many rows can the SELECT return? How long does it take to return the results?

The "refresh every 5 seconds" is quite easy, the refresh at cursor move is a bit nastier but it depends on the time you need to refresh the data from SQL.

There is still some missing information, for example: how much is the cost of a missed screen update? No problem or 1 billion dollar fee? Do you control the software that writes to SQL?

Can't you populate a DBF from the SQL query and TBrowse() the DBF?

There are tons of possibilities; writing a brand new, custom, specific browse for this use may also be an option.


Damian

Apr 15, 2025, 4:03:05 PM
to Harbour Users

Hi Francesco,

Here are my detailed answers to your last set of questions, based on what I’ve measured and tested so far.


Your TBrowse shows between 50 and 70 rows on screen, but how many rows can the SELECT return? How long does it take to return the results?

I don't fetch the full table at once. I implemented my own Skip() logic using keyset pagination with a compound key (e.g., data, no_invo).
Each skip involves an indexed SELECT ... WHERE (key) > (...) ORDER BY ... LIMIT 1.
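For the forward direction the per-row query is essentially the following (the table and the third column are invented names for the example; it is executed with the key values of the row I am leaving as parameters):

  // forward skip: row-value comparison on the compound key;
  // the backward direction uses "<" and ORDER BY ... DESC
  cSql := "SELECT data, no_invo, descr "         + ;
          "FROM documents "                      + ;
          "WHERE (data, no_invo) > ($1, $2) "    + ;
          "ORDER BY data, no_invo LIMIT 1"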

A single-row fetch (including PQgetvalue on 3 fields) takes on average ~0.040 ms, which is very fast.

However, the way TBrowse operates is problematic:

- When moving a page (e.g., PgDn), it calls Skip() for each row — from 1 to 70 — and then again in reverse (e.g., from 70 down to 9) to place the highlight bar correctly.

- This results in double the expected number of fetches, e.g., 140 queries for a 70-row movement.

So even if each query is fast, the cumulative cost adds up, and that’s where performance degrades.


The "refresh every 5 seconds" is quite easy, the refresh at cursor move is a bit nastier but it depends on the time you need to refresh the data from SQL.

Exactly — the refresh every 5 seconds (during INKEY idle loop) is simple and acceptable.

The real issue is the cursor-based navigation, where TBrowse triggers excessive SQL queries just to display one page.

That's not because SQL is slow, but because the TBrowse design assumes fast, sequential, row-by-row access like a DBF provides.


How much is the cost of a missed screen update? No problem or 1 billion dollar fee?

Not critical in terms of money, but practically important.

This system is used in warehouse/logistics (WMS).
If a user updates a document or stock level at one station, others expect to see it shortly afterward.
So I need the data to reflect live changes within a few seconds, otherwise operators start second-guessing what’s real.


Do you control the software that writes to SQL?

Yes, I control both the reader (browser) and the writer (form logic, importers, barcode scans, etc.).

So I can freely add triggers, last_modified timestamps, or even implement LISTEN/NOTIFY support if needed — and in fact I plan to integrate LISTEN/NOTIFY directly into the hbpqlib I’m using.
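On the server side that would just be a statement-level trigger pinging a channel, roughly like this (table, function and channel names are only examples; EXECUTE FUNCTION needs PostgreSQL 11+, older versions use EXECUTE PROCEDURE):

  CREATE OR REPLACE FUNCTION notify_docs_changed() RETURNS trigger AS $$
  BEGIN
     -- the payload is just the operation; clients re-read the visible rows themselves
     PERFORM pg_notify('docs_changed', TG_OP);
     RETURN NULL;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER trg_docs_notify
     AFTER INSERT OR UPDATE OR DELETE ON documents
     FOR EACH STATEMENT EXECUTE FUNCTION notify_docs_changed();

The client would issue LISTEN docs_changed once and check for pending notifications in the idle loop; that notification check is the part I still need to expose in hbpqlib.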


Can't you populate a DBF from the SQL query and TBrowse() the DBF?

Yes — technically I could populate a temporary DBF, add indexes, and let TBrowse fly over it.

But that breaks the “live data” requirement. I’d have to:

- periodically re-query and rebuild the DBF,

- manage synchronization,

- and still deal with stale views between updates. 


There are tons of possibilities; writing a brand new, custom, specific browse for this use may also be an option.

Yes — and honestly, I'm getting closer to that conclusion. TBrowse is a beautiful tool for DBF-based systems, but it doesn't scale well in a set-based SQL world.

So I'm now seriously considering building a custom paged-grid:

- fetch 70 rows at a time (LIMIT 70 with keyset or offset),

- store in a memory buffer,

- render via simpler logic (not SkipBlock),

- refresh every X seconds or on LISTEN/NOTIFY.

This would give me:

- near-DBF scrolling speed,

- live data (within a short window),

- and no need for per-row SQL fetches during rendering.

Damian

Francesco Perillo

Apr 15, 2025, 5:54:32 PM
to harbou...@googlegroups.com
Think about your proposal for an ad hoc browser, and imagine a TBrowse that works on an array: you fetch 70 records from SQL and update the corresponding array values.

I mean, one SELECT ... LIMIT 70. The skip() is done on the array. One column may hold the "last read" time, and if it is too old you do another refresh... something you'll be doing anyway every 5 seconds.
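Something like this, just to give the idea (untested; I'm using libpq-style names because I don't know your hbpqlib API, and PQgetvalue() indexing may differ in your wrapper):

  STATIC s_aRows := {}
  STATIC s_nPos  := 1

  // ONE query refreshes the whole window
  STATIC PROCEDURE LoadWindow( pConn )
     LOCAL pRes, i
     s_aRows := {}
     // "descr" and "documents" are invented names
     pRes := PQexec( pConn, "SELECT data, no_invo, descr FROM documents " + ;
                            "ORDER BY data, no_invo LIMIT 70" )
     FOR i := 1 TO PQntuples( pRes )
        // adjust the row/column indexing to your wrapper (raw libpq is 0-based)
        AAdd( s_aRows, { PQgetvalue( pRes, i, 1 ), PQgetvalue( pRes, i, 2 ), PQgetvalue( pRes, i, 3 ) } )
     NEXT
     PQclear( pRes )
     s_nPos := Min( s_nPos, Max( 1, Len( s_aRows ) ) )
     RETURN

  // the skip happens purely in memory
  STATIC FUNCTION ArraySkipper( nSkip )
     LOCAL nOld := s_nPos
     s_nPos := Max( 1, Min( Len( s_aRows ), s_nPos + nSkip ) )
     RETURN s_nPos - nOld

  // hookup, e.g.:
  // oBrw:skipBlock := {| n | ArraySkipper( n ) }
  // oBrw:addColumn( TBColumnNew( "Date", {|| s_aRows[ s_nPos ][ 1 ] } ) )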

There are still some points that are not clear to me, but in the end I'd opt for a "PUSH" setup where the server sends all the updates in real time and the client keeps them in memory, in a DBF, or in an array.
Or vice versa: the server keeps a copy of the data, updated in real time, and the client asks for it record by record, not using SQL but a quicker, specific protocol.

Anyway, if you remove LIMIT 70, how many records would you get?


arielso...@gmail.com

Apr 15, 2025, 6:04:45 PM
to harbou...@googlegroups.com

Hi,

I use Harbour, SQLRDD, and SQL Server 2019. Excellent speed and integrity.

I have the same problem: real-time queries inside a browse.

My product table contains 15,000 records, and I can search it for a specific word.

When the cursor moves, the related data is read and updated on the screen in real time (purchases, sales, and batches). These lookups are done with queries:

Purchases [F9]
Sales [F10]
Products
Batch

Can this be done on the same record when the cursor moves?

Video attached.

Greetings!

https://mega.nz/file/xzIByIiL#TvEH_4Z3Hivty-lrFNWC5SnYfyOPH_gEmvH8geJOQoM

david...@gmail.com

May 28, 2025, 10:36:41 PM
to Harbour Users
Hi Damian,

Just saw your message; maybe you could use ADO for your SQL access.
ADO will update the recordset automatically, but (see the sketch after this list):
1. You must define your Connection object's CursorLocation as adUseServer
2. Define the Recordset CursorLocation as adUseServer
3. Define the Recordset CursorType as adOpenDynamic
4. Define the Recordset CacheSize as 1
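In Harbour, using OLE (CreateObject() from hbwin or whichever OLE wrapper you use), the setup would look roughly like this; the connection string and query are placeholders and the ADO enums are written as their numeric values:

  oCn := CreateObject( "ADODB.Connection" )
  oCn:CursorLocation := 2                      // adUseServer
  oCn:Open( "<your OLE DB connection string>" )

  oRs := CreateObject( "ADODB.Recordset" )
  oRs:CursorLocation := 2                      // adUseServer
  oRs:CursorType     := 2                      // adOpenDynamic
  oRs:LockType       := 3                      // adLockOptimistic (just my pick for the example)
  oRs:CacheSize      := 1                      // or the number of rows visible in the TBrowse
  oRs:Open( "SELECT * FROM products", oCn )    // placeholder query

  // later, on a timer or on user request:
  oRs:Resync( 1 )                              // 1 = adAffectCurrent: refresh only the current record
  // or
  oRs:Requery()                                // full re-read of the recordset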

This will be somewhat slow, especially because of the CacheSize being set to 1.
It tells the Recordset to re-read the data from the server every X (in this case 1) records.

This will still NOT update the current record you are on until you move off it and come back, but you can :Resync() with the server.

If the TBrowse is being used interactively, you could just tell the user to refresh the page (using Resync or Requery) whenever they want, or you could set up a timer (say, every 10 seconds).

ADO allows you to resync only the records you want, which is an advantage: you could set your CacheSize to the number of records you are viewing in the TBrowse and resync only those.

In any case, I believe ADO can make what you are trying to do easier.

You can learn more about this in the ADO Recordset Object documentation.

Hope this helps.

Regards,
David Field