Using Embedded PostgreSQL in Harbour


Eric Lendvai

Nov 24, 2023, 5:10:04 AM
to Harbour Users

As you may know, we have had some great success lately integrating Harbour with other systems and languages:


- Calling Python classes natively from Harbour (with help and sample code from Antonio Linares)

- Creating an entire FastCGI framework (learning from an initial mod_harbour project)

- A nice VSCode extension (by Antonino Perricone, who sadly no longer believes in Harbour, so we will need to find a way to keep supporting it)

- Multi-window desktop HTML (webview UI), meaning calling Harbour from HTML/JS (again with some initial code from Antonio Linares, and also using hbhttpd)

- A full client/server REST API (curl)

- A complete Harbour ORM (hybrid with in-memory tables)


But now I would like to find a solution for running SELECT SQL statements locally, against the equivalent of in-memory tables.

Imagine querying a backend like PostgreSQL, getting one or more result sets (in-memory tables), and then executing additional SELECT statements without going back to the PostgreSQL server.

Imagine the remote query injecting its data into a local, in-memory, embedded PostgreSQL instance.
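
To make that concrete, here is a very rough sketch of the workflow I have in mind. Nothing like hb_EmbeddedPG_Start() or hb_EmbeddedPG_LoadResult() exists yet; those names are purely hypothetical placeholders for whatever wrapper we would build. The PQconnectdb()/PQexec()/PQclear()/PQfinish() calls are the libpq-style wrappers from the hbpgsql contrib.

// Sketch only: the hb_EmbeddedPG_*() functions are hypothetical and do not exist yet.
#require "hbpgsql"

PROCEDURE Main()
   LOCAL pRemote, pLocal, pResult

   // 1) Query the real backend once.
   pRemote := PQconnectdb( "host=dbserver dbname=erp user=app password=secret" )
   pResult := PQexec( pRemote, "SELECT * FROM invoices WHERE invoice_date >= '2023-01-01'" )

   // 2) Spin up a throwaway local PostgreSQL instance (hypothetical wrapper around
   //    something like embedded-postgres) and push the result set into it.
   hb_EmbeddedPG_Start( "/tmp/localpg", 54329 )                        // hypothetical
   pLocal := PQconnectdb( "host=127.0.0.1 port=54329 dbname=postgres" )
   hb_EmbeddedPG_LoadResult( pLocal, "invoices", pResult )             // hypothetical
   PQclear( pResult )
   PQfinish( pRemote )

   // 3) Run any number of additional SELECTs locally, without touching the server again.
   pResult := PQexec( pLocal, "SELECT customer_id, sum(total) FROM invoices GROUP BY customer_id" )
   // ... process pResult as usual ...
   PQclear( pResult )
   PQfinish( pLocal )
   hb_EmbeddedPG_Stop()                                                // hypothetical

RETURN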

Some people are using SQLite for this, but what a nightmare, with all the missing field types and a SQL dialect that is not the same as PostgreSQL's.

PostgreSQL is the most advanced open-source database product available today. Using it from Harbour will help us grow to enterprise grade.

For that reason, I am enhancing DataWharf and Harbour_ORM to take advantage of even more PostgreSQL features, like CTEs, unlogged tables, array fields, JSON fields and more.

Harbour hash arrays are a perfect match for PostgreSQL JSON field types.
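
As a tiny illustration of what I mean (just a sketch, assuming the hbpgsql contrib and a made-up events table with a jsonb column called payload), a nested hash can be pushed straight into a jsonb field with hb_jsonEncode():

#require "hbpgsql"

PROCEDURE Main()
   LOCAL hPayload := { "customer" => "ACME", "items" => { "widget" => 3, "gadget" => 1 } }
   LOCAL pConn, pRes

   pConn := PQconnectdb( "host=localhost dbname=test user=postgres" )

   // hb_jsonEncode() turns the nested hash into a JSON string PostgreSQL can cast to jsonb.
   // (Real code should use a parameterized query or proper escaping instead of concatenation.)
   pRes := PQexec( pConn, "INSERT INTO events ( payload ) VALUES ( '" + ;
                          hb_jsonEncode( hPayload ) + "'::jsonb )" )
   PQclear( pRes )
   PQfinish( pConn )

RETURN

Coming back the other way, hb_jsonDecode() on the text of a jsonb column gives you the hash again.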


There are several projects to create an embedded Postgres. There was an old project for C, but it was abandoned. There are projects for Java and Node, but the most active and most starred repo is for Go.


embedded-postgres

https://github.com/fergusstrange/embedded-postgres

658 stars

79 forks


But to use this, we would need to call Go from C (Harbour).


I did find an article about this:

https://medium.com/learning-the-go-programming-language/calling-go-functions-from-other-languages-4c7d8bcc69bf
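
From what I understand of that article, the Go side would be compiled with -buildmode=c-shared and export plain C functions; on the Harbour side we could then load that library at runtime. A rough sketch, assuming Harbour's hb_libLoad()/hb_DynCall() dynamic-call support (hbdyn.ch), with StartEmbeddedPG being a hypothetical function exported by the Go library:

#include "hbdyn.ch"

PROCEDURE Main()
   LOCAL hLib, nPort

   // Library produced by: go build -buildmode=c-shared -o embeddedpg.so
   hLib := hb_libLoad( "embeddedpg.so" )            // embeddedpg.dll on Windows

   IF ! Empty( hLib )
      // Name, parameter and return type of the exported Go function are hypothetical.
      nPort := hb_DynCall( { "StartEmbeddedPG", hLib, HB_DYN_CTYPE_INT }, 54329 )
      ? "Embedded PostgreSQL listening on port", nPort
      hb_libFree( hLib )
   ENDIF

RETURN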


So my question to all of us here: is this a good idea? Is anyone interested in working on a solution with me?


Harbour will have a bright future; we just need to keep working on it :)


By the way, the domain name for WharfSystems was changed to https://wharfsystems.com.

Gerald Drouillard

Nov 24, 2023, 8:16:37 AM
to harbou...@googlegroups.com
I am not sure how you are using the word embedded here. I have been spending the last couple of months porting a very large app, with 25+ years of code, from dbfcdx (with m6/comix/netio functionality) and tables with 16+ million records, to sqlrdd and a PostgreSQL 16 backend. I have tweaked sqlrdd to support more field types for efficiency, but I don't think I had to do much tweaking for functionality.

One area of change is the use of smallint/int/bigint fields and defaulting recno to the right-sized int.

The use of partitioned tables is also nice, especially for tables that will grow indefinitely over time, like orders or logs.

I have added support for jsonb fields, which do work nicely with hashes. That is a nice feature for those one-off fields that don't get used very often but that you still want to save off, and the SQL backend can still treat it as a field.
As far as in-memory tables, I think that functionality exists just by using the appropriate RDD. I think some of the other SQL RDDs in contrib try to do a little more in memory, but from what I can tell at the moment sqlrdd uses a write cache on a single recno before flushing it out to the server on a commit. So the days of doing a commit after each replace (especially on indexed fields) have to be adjusted for the SQL world. Postgres does not just update the field in a table; it re-writes the entire record on change.
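
In practice that just means batching the writes and committing once per block of work; a plain RDD-level sketch (table and field names are made up):

PROCEDURE ImportOrders( aNewOrders )   // aNewOrders: array of { cust_id, total } pairs
   LOCAL i

   USE orders SHARED                   // opened via the SQL RDD in real code
   FOR i := 1 TO Len( aNewOrders )
      APPEND BLANK
      REPLACE cust_id WITH aNewOrders[ i ][ 1 ], ;
              total   WITH aNewOrders[ i ][ 2 ]
   NEXT
   dbCommit()                          // one flush to the server instead of a commit per REPLACE
   USE

RETURN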

Working with/on the sqlrdd has been a real pleasure. It is nicely designed to be mostly PRG code that ties into the underrated RDD design of Clipper.

I would agree with your praise of Postgres. It really provides a lot of options for applications whose databases continue to grow over time, where you need replication and the ability to expand the number of servers responding to requests.

Here is what ChatGPT says about temp tables in PG:
Yes, PostgreSQL supports temporary tables, which are a useful feature for storing and working with intermediate query results or datasets that are only needed for the duration of a session or a transaction.

### Creating Temporary Tables

You can create a temporary table in PostgreSQL using the `CREATE TEMPORARY TABLE` (or `CREATE TEMP TABLE`) command. Temporary tables are visible only within the current session and are dropped automatically at the end of the session, or optionally at the end of the current transaction.

Here's an example of creating a temporary table:

```sql
CREATE TEMPORARY TABLE temp_table_name (
    column1 datatype,
    column2 datatype,
    ...
);
```

### Characteristics of Temporary Tables

1. **Session-Scoped**: Temporary tables exist only within the current database session. Once the session ends, the temporary table is automatically dropped.
   
2. **Transaction-Scoped Option**: You can optionally create temporary tables that last only for the duration of the current transaction by using the `ON COMMIT DROP` clause.

   ```sql
   CREATE TEMPORARY TABLE temp_table_name (
       ...
   ) ON COMMIT DROP;
   ```

3. **Isolation**: Data in a temporary table is private to the session that created it. Other database sessions cannot see or access the data in your temporary table.

4. **Performance**: Temporary tables can be a good way to optimize complex queries. By storing intermediate results in a temporary table, you can break down a complex query into simpler steps.

5. **Indexing**: You can create indexes on temporary tables just like regular tables, which can improve query performance on temporary tables.

### Using Temporary Tables

Temporary tables are often used in complex data processing operations where you need to store intermediate results, perform multiple transformations or calculations, or when dealing with large datasets that don't need to be persisted beyond the current session.

For instance, you might use a temporary table to store the result of a subquery for further processing in subsequent queries within the same session.

### Example

```sql
-- Create a temporary table
CREATE TEMPORARY TABLE temp_user_data AS
SELECT id, name, age FROM users WHERE age > 30;

-- Use the temporary table in another query
SELECT avg(age) FROM temp_user_data;
```

In this example, `temp_user_data` is a temporary table that holds a subset of data from the `users` table for use in subsequent operations within the same session.



Here is some functionality I have added to sqlrdd2.prg, like the following two methods to generate/execute SQL commands and return results. The first one uses the settings from the workarea, like filter/for, builds the "restrictors" for use in the WHERE segment of the SQL, and returns the matching array of recnos. Kind of needed when coming from an m6/comix world.
The second method just checks if a recno exists in the table. This could probably be expanded to any field/value(s), but in my immediate use case I am importing millions of records, so it works for me.


// Build and run a SELECT of the recno column for this workarea, honoring the
// deleted-record setting and any filter restrictors; returns the matching recnos.
METHOD sqlrlSelect(cSort)  CLASS SR_WORKAREA

   Local n, aRet := {}
   Local lDeleteds, cSql, cRet := ""
   if hb_isnil(cSort)
      cSort:=::cRecnoName
   endif
   lDeleteds := (!Empty(::hnDeleted)) .and. set( _SET_DELETED )

   cSql := "SELECT " + SR_DBQUALIFY( ::cRecnoName, ::oSql:nSystemID ) + " FROM " + ::cQualifiedTableName + " A " +;
            if( lDeleteds, " WHERE " + SR_DBQUALIFY( ::cDeletedName, ::oSql:nSystemID ) + " != " + if(::nTCCompat > 0, "'*'", "'T'"), "" )


   cRet := ::SolveRestrictors()
   If len( cRet ) > 0
      If !lDeleteds
         cRet := " WHERE" + cRet
      Else
         cRet := " AND " + cRet
      EndIf
      cSql += cRet
   EndIf
   cSql += " ORDER BY " + SR_DBQUALIFY( cSort, ::oSql:nSystemID )
   trax(cSql)
   ::oSql:exec( cSql,,.t.,@aRet )
   trax(aRet)

   If len(aRet) > 0
      n:=len(aRet[1])
      if aRet[1][n]=NIL
         aSize(aRet[1],n-1)
      EndIf
      return(aRet[1])
   EndIf
Return aRet
//SELECT A.* FROM "dbf_s" A  WHERE A."sr_recno" = 885
// Check whether a given recno already exists in the table (single EXISTS query; handy when importing).
METHOD recno_exist(nRecno)  CLASS SR_WORKAREA
   Local aRet := {}
   Local cSql
   //cSql := 'SELECT A."sr_recno" FROM ' + ::cQualifiedTableName + ' A WHERE A."sr_recno"='+hb_ntos(nRecno) +' LIMIT 1'
   cSql:='SELECT EXISTS(SELECT 1 FROM '+::cQualifiedTableName+' WHERE sr_recno = '+hb_ntos(nRecno)+');'
   ::oSql:exec( cSql,,.t.,@aRet )
   trax(nRecno,aRet)
   return(aRet[1][1])


antonio....@gmail.com

Nov 24, 2023, 8:31:57 AM
to Harbour Users
Dear Gerald,

Yesterday I sent you an email; I am just checking that you got it.

Many thanks

Mario H. Sabado

Dec 1, 2023, 1:51:28 AM
to harbou...@googlegroups.com
Hi Eric,

FirebirdSQL also supports an embedded database (when no FB server is running and no host is specified in the connection). You can use the same SQL operations in server and embedded mode without changes. The database can even be copied directly from/to the local/server data path, or via backup/restore. It also has high SQL standard compliance, so I suppose there should not be much of a problem using the same SQL scripts between PostgreSQL and FB. Although I agree that if an embedded PostgreSQL can be adapted to the Harbour environment, that would be much better.
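
For example, with an SQLRDD-style connection string, switching between the two modes can be just a matter of whether the database path carries a host prefix (illustrative values only):

FIREBIRD5=c:\data\TEST.FDB;uid=SYSDBA;pwd=masterkey;client=fbclient.dll           (embedded: the client library opens the file directly)
FIREBIRD5=myserver:c:\data\TEST.FDB;uid=SYSDBA;pwd=masterkey;client=fbclient.dll  (client/server)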


[screenshot: FirebirdSQL embedded connection test]

Regards,
Mario




Eric Lendvai

Dec 1, 2023, 2:12:11 AM
to Harbour Users
Hello Mario,

Yes, it looks a lot better than SQLite.
Did you already integrate it with Harbour?

I know a few contributors to PostgreSQL; now I will be able to tease them about this ... or at least once I can put together a proof of concept in Harbour.

Thanks :)

Mario H. Sabado

Dec 1, 2023, 2:25:48 AM
to harbou...@googlegroups.com
Hi Eric,

I'm currently using SQLRDD from Marcos Gambeta's fork. I'm using PostgreSQL to dump/sync app DBFs because most BI/reporting tools only support SQL DB connections. But I prefer the FirebirdSQL DB if possible, since it's more portable aside from being an enterprise-grade DB.

Regards,
Mario


Mario H. Sabado

Dec 1, 2023, 2:33:17 AM
to harbou...@googlegroups.com

Hi Eric,

Just in case you did not see my screenshot using FirebirdSQL embedded:

*********************************************************
Connecting to FIREBIRD5=c:\firebird\firebird_5_0\data\TEST.FDB;uid=SYSDBA;pwd=masterkey;client=fbclient.dll;charset=ISO8859_1;
RDD in use          : SQLRDD
12/01/23 15:29:11.059
   Uploading... stokmast (166976 records)
**********************************************************

[screenshot: SQLRDD uploading stokmast to the embedded Firebird database]



Regards,
Mario

Eric Lendvai

Dec 1, 2023, 2:45:04 AM
to Harbour Users
Thanks again Mario,

In my Harbour_ORM I am using SQLMIX and SDDODBC, meaning access via ODBC (Windows and Linux).

I guess that if I go the embedded route, I would not use ODBC.

Do you need to link the Firebird embedded library into your Harbour exe?
Do you have any code you could share to show how this is done, or could you point me to a sample app?

Thanks, Eric

Mario H. Sabado

Dec 1, 2023, 3:00:01 AM
to harbou...@googlegroups.com
Hi Eric,

Here's the link to Marcos's SQLRDD fork:


I used the dbf2sql.prg and connect.prg in the samples folder. You just need the fbclient_ms.lib from FirebirdSQL's lib folder to link into your Harbour app, then execute your compiled app from the FB server folder so that you don't need to copy the DLL dependencies (e.g., fbintl, icu*, etc.).
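
For reference, the hbmk2 call for that can be as simple as something like hbmk2 dbf2sql.prg -Lc:\firebird\firebird_5_0\lib -lfbclient_ms (paths are from my setup, adjust to yours; any SQLRDD libraries are still linked the usual way on top of that).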

Here's the embedded guide for FB 3, but it should work the same with the latest FB:


Let me get back to you on the SDDFB testing; I tried it long ago, but not with embedded mode at that time.

Regards,
Mario




Eric Lendvai

Dec 1, 2023, 3:08:04 AM
to Harbour Users
Thank you Mario, it would be kind of you to check it out.
Then later I will try it out and probably add a sample at https://github.com/EricLendvai/Harbour_Samples
Have a great weekend, Eric

Mario H. Sabado

Dec 1, 2023, 8:22:59 AM
to harbou...@googlegroups.com
Hi Eric,

My apologies, but upon checking, there's no existing sample program under the SDDFB contrib for me to test. I used the HBFBIRD contrib instead to test the embedded mode.

First, I set the following environment variable to be able to build the required libs:
set HB_WITH_FIREBIRD=c:\firebird\firebird_5_0\include

Next, I modified hbfbird\tests\simple.prg and set the cServer value to empty to force an embedded connection:

*********************************************************************
* simple.prg
#require "hbfbird"
PROCEDURE Main()
   LOCAL oServer, oQuery, oRow, i, x, aTables, aStruct, aKey
   LOCAL cServer := "" //set to empty for embedded connection
**********************************************************************

After compiling simple.prg (hbmk2 simple.prg -o\firebird\firebird_5_0\simple), go to the Firebird server folder, then run simple.exe. When the execution is completed, a simple.fdb file should be created.

To view the created database, I started the Firebird server, then ran the FB admin tool (FlameRobin) and registered the simple.fdb database. Once registered, I was able to view the contents of the test data.


[screenshot: FlameRobin showing the test data in simple.fdb]

Regards,
Mario

