Universe-Sequel/MySQL Import Routines


George Gallen

Apr 8, 2016, 11:35:50 AM
to mvd...@googlegroups.com
After standardizing some programming into subroutines, I put this document together.
It is only for reading data - fairly basic - but gets the job done. It is not an ODBC
solution, and it is not a command-line solution - just subroutines.

Make sure you read the Security Concerns I noted on page 2 of the document.

http://www.carcarealert.com/pick/Universe-SQL-Integration.pdf

George


Brian Speirs

Apr 9, 2016, 1:09:49 AM
to Pick and MultiValue Databases, gga...@live.com
Hi George,

I'm sure that some people who wish to retrieve data from an SQL server will find that useful. Moreover, this doesn't have to be done from UniVerse - it could be any MV database.

However, you do realise that UniVerse has a native ability to connect to SQL databases, don't you? Like most things in UniVerse, it isn't intuitively documented, but it is there.

Look up the BCI interface (Basic Client Interface). As long as you have an ODBC driver for the database in question, you should be able to use the BCI functions to query and update the SQL server.

Using the BCI functions should get around one of the security issues you raise - the username and password should not be easily visible to anyone sniffing your network.
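
For anyone who hasn't tried it, a read-only query through BCI looks roughly like the sketch below. Treat it as a sketch only - the DSN ("MYSQLDSN"), credentials, and table are placeholders, and the function names and type constants should be checked against the ODBC.H include and the BASIC SQL Client Interface Guide on your release.

$INCLUDE UNIVERSE.INCLUDE ODBC.H
*
* Allocate an environment, a connection, and a statement
STATUS = SQLAllocEnv(HENV)
STATUS = SQLAllocConnect(HENV, HCONN)
STATUS = SQLConnect(HCONN, "MYSQLDSN", "dbuser", "dbpass")
IF STATUS # SQL.SUCCESS THEN STOP "Connect failed"
STATUS = SQLAllocStmt(HCONN, HSTMT)
*
* Run the query and bind BASIC variables to the result columns
STATUS = SQLExecDirect(HSTMT, "SELECT ID, NAME FROM CUSTOMERS")
STATUS = SQLBindCol(HSTMT, 1, SQL.B.CHAR, CUST.ID)
STATUS = SQLBindCol(HSTMT, 2, SQL.B.CHAR, CUST.NAME)
*
* Fetch row by row; the bound variables are refreshed on each fetch
LOOP
   STATUS = SQLFetch(HSTMT)
WHILE STATUS = SQL.SUCCESS DO
   PRINT CUST.ID : " " : CUST.NAME
REPEAT
*
* Release everything in the reverse order of allocation
STATUS = SQLFreeStmt(HSTMT, SQL.DROP)
STATUS = SQLDisconnect(HCONN)
STATUS = SQLFreeConnect(HCONN)
STATUS = SQLFreeEnv(HENV)

In real code you would of course test every STATUS, not just the connect.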

Cheers,

Brian

GGinNJ

Apr 11, 2016, 9:21:54 AM
to Pick and MultiValue Databases, gga...@live.com
I believe the reason we went with the programmatic solution was that you needed UVNet in order to use the BCI interface, and UVNet is not installed by default
on Linux installations (at least it wasn't at the time of our installation). I tried playing with the BCI interface, but I wasn't successful. I may have just been going about
it wrong at the time, and at that point it was easier to program around it, given we only needed read-only access. It started to get a little more difficult when some of
the data fields had embedded CRs, and the return data was too large to be read into a dynamic array without being too SLLOOWW to process, so we then went
with OPENSEQ. I guess the next step, for when the data becomes too large to open, would be to copy to a FIFO file (pipe) and then OPENSEQ the pipe file - along the lines of the sketch below.
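
Something like this is what I have in mind - untested, and the pipe path, host, and query are only placeholders. It assumes credentials come from ~/.my.cnf so the password stays off the command line (per the security notes in the document), and it leans on mysql's batch mode escaping embedded tabs and newlines so each row really is one line.

PIPE = "/tmp/sqlfeed"
EXECUTE 'SH -c "mkfifo ' : PIPE : '"'
*
* Start the extract in the background; mysql blocks on the pipe until
* we open it for reading below. -B -N gives bare tab-delimited rows,
* with embedded tabs and newlines escaped as \t and \n.
CMD = "mysql -h dbhost mydb -B -N -e 'SELECT ID, NAME FROM CUSTOMERS' > " : PIPE : " &"
EXECUTE 'SH -c "' : CMD : '"'
*
OPENSEQ PIPE TO F.PIPE ELSE STOP "Cannot open pipe"
DONE = 0
LOOP
   READSEQ ROW FROM F.PIPE ELSE DONE = 1
UNTIL DONE DO
   * One row at a time: swap tabs for attribute marks and process
   REC = CHANGE(ROW, CHAR(9), @AM)
   PRINT REC<1> : " " : REC<2>
REPEAT
CLOSESEQ F.PIPE
EXECUTE 'SH -c "rm ' : PIPE : '"'

That way the program only ever holds one row in memory, no matter how big the result set gets.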

George