We have a Progress database as a standalone application and a whole load of SQL Server databases that rely on the information in the Progress system for more accurate information. We have the OpenEdge 10.2B driver. I use a SQL DTS package (because I am stuck on a SQL 2000 system(!)) to transfer the data.
The DTS normally works brilliantly. However, sometimes it seems that the Progress database is allowing a user to enter a larger entry than the field size would suggest, and I get error messages such as:
[DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Column X in table Y has value exceeding its max length or precision.
[DataDirect][ODBC Progress OpenEdge Wire Protocol driver] Error in row.
Thanks both for responding. Coming from a SQL Server background (and not wanting to sound like a Microsoft snob), it goes against every DB design principle I have ever been taught that a DB can, basically, ignore its own table design.
Is there an actual purpose to this, or is it a feature of the software? The reason our supplier gives is that users like the flexibility, but if it makes the DB inefficient then it is really counterproductive...
It isn't ignoring the table design as such. The table is created with a best guess at the width. You are then provided with a way to expand it based on data added subsequently. If you're using SQL to access a Progress database there are a couple of things you should do reasonably regularly. One is to update the SQL widths with DBTool. The other is to update statistics so that the queries run more efficiently.
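To make that concrete, here is a rough sketch of the statistics half of that maintenance. The width fix is done interactively with the `dbtool` utility (its "SQL Width Scan w/Fix Option" menu item; menu numbering varies by OpenEdge version), and the statistics refresh is plain OpenEdge SQL you can run from `sqlexp`. The table name `PUB.customer` is just an illustration:

```sql
-- After running dbtool's width-fix scan against the database, refresh the
-- optimizer statistics so SQL queries get sensible plans. OpenEdge SQL
-- syntax, run via sqlexp; repeat per table (or script it over all tables).
UPDATE TABLE STATISTICS AND INDEX STATISTICS
    AND ALL COLUMN STATISTICS FOR PUB.customer;
COMMIT;
```

Scheduling both of these weekly or monthly alongside your other maintenance jobs is the usual approach.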
All Progress CHAR fields are VARCHAR with a max size of 32K. Since SQL absolutely wants a field width, Progress sets that using a special field attribute called sql_width. dbtool is the program provided by Progress to automatically adjust the SQL width attribute. IIRC, dbtool sets sql_width to twice the width of the widest instance of the field in question.
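If you know which column is the offender, you can also widen its SQL width by hand rather than running dbtool over the whole database. A minimal sketch in OpenEdge SQL, assuming an illustrative `PUB.customer` table and a new width of 80:

```sql
-- Raise the sql_width attribute on one column so SQL clients stop
-- hitting "value exceeding its max length or precision" errors.
-- PRO_SQL_WIDTH is OpenEdge-specific; check your version's SQL reference.
ALTER TABLE PUB.customer
    ALTER COLUMN name SET PRO_SQL_WIDTH 80;
COMMIT;
```

Note this changes only the SQL-side metadata; the 4GL side and the stored data are untouched.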
I used to run it weekly and I'm sure it's not a bad thing per se, but I doubt that data distributions change enough from week to week to justify it. There may be a couple of tables that see a lot of ADD/DELETE action that would benefit from frequent UPDATE STATISTICS, but most tables (think customer, item, etc...) would not.
I'm just talking from the perspective of hearsay as we don't have SQL clients. A couple of folks over on Progresstalk champion the weekly run. Maybe they'll jump the great divide to fill us in here! ;)
The idea that field width ought to be limited for efficiency reasons is one of those ancient premature optimization things that doesn't actually hold water. It's actually kind of surprising (to me anyway) that Codd & Date didn't put up a stink about it because it really doesn't fit with "relational" thinking very well at all.
Rich has said that you should run UPDATE STATISTICS when 20% of your data has changed. He then asks, rhetorically, how you can know that 20% of your data has changed... (I suppose you could keep track of BI cluster closes?)
Not particularly query efficiency, more like storage-space efficiency. If I were going to design a SQL database like that, I would be using VARCHAR(MAX) field sizes, which would increase my file size because of the space each field would consume. If you have ever seen the number of columns in any of the datasets that exist in the NHS, you would realise why limits sometimes have to be set in SQL Server. Integration with other DB systems is also more problematic if the schema is saying one thing but the data is saying another.
That 20% is applicable at the table level more than the DB level, I would think. Customer and item are unlikely to change 20% very often but inventory, shipper, invoice, gl...all those tables are constantly being modified.
(Theoretically) I wonder if basing the decision on index operations (_IndexStat) would give a more precise trigger. A table that has x amount of write activity to indexes is potentially in need of an UPDATE STATISTICS.
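As a thought experiment, the `_IndexStat` virtual system table can be read from SQL as well as the 4GL, so a trigger like that could be sketched roughly as below. This is purely illustrative: the VST is only populated for indexes covered by the `-baseindex`/`-indexrangesize` startup parameters, the counters are cumulative since database startup, and column names should be checked against your version's VST reference:

```sql
-- Sketch: rank indexes by write activity (creates + deletes) since startup
-- as a rough signal for which tables might warrant an UPDATE STATISTICS.
SELECT s."_IndexStat-id",
       s."_IndexStat-create" + s."_IndexStat-delete" AS write_ops
  FROM PUB."_IndexStat" s
 ORDER BY write_ops DESC;
```

You would still need to snapshot the counters periodically and diff them to approximate a "20% changed" threshold per table.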
But you are correct, you script it once and forget about it. I think monthly is more than sufficient but as you say, weekly does no harm and you are probably running other maintenance tasks weekly anyways.
Progress fields are not. So they only take up whatever space they actually need. That's actually pretty efficient from a disk and memory usage perspective. Sure, the "overstuffed" fields use more space -- but the "understuffed" ones conserve a whole lot.
The 4GL does not impose a max width on varchar columns, only on the total row size. This is different from SQL VARCHAR, which does have a maximum size. The max width used by OpenEdge SQL is derived from the default /display/ format. But it is just a default for display. As someone else noted previously, you can make the 4GL runtime enforce the max width setting if you choose to, but most people do not.
I have a project requirement that needs an OE 11.7 codebase (Red Hat Linux 7.6) running against a OE 11.7 database (also Red Hat Linux 7.6) to connect to a SQL Server database (Windows Server 2016) and be able to view the SQL Server schema as a native Progress 4GL application would.
Can someone please recommend a non-JDBC approach to doing this? Ideally an ODBC driver that allows me to do this would be fantastic. If there is such an ODBC driver, please point me to what that is and how to obtain it.
Have you ever looked for it or tested dataserver for yourself? SQL Server is born-and-raised as a client-server database. Data is transmitted to the client in an entire batch of rows, rather than communicating with a bunch of tiny TCP exchanges that contain a single row or a single column or a single byte at a time. Server-side *join* functionality is available in progress dataserver (see PS), as well as other sophisticated features like server-side stored *procedures*.
On the other hand, when you are using client-server against a native Progress DB, you have to format your FOR-EACH-NO-LOCK loops in precisely the correct way for data to be transferred in larger batches. Otherwise you end up with 2-3 network round-trips per *record* (capping out at about 2,000-3,000 records per second on the LAN). Server-side joins were never possible (until OE 12 - and even there I suspect it only applies under a certain set of special circumstances).
I only evaluated SQL Dataserver for a short period of time, but my experience with it was very positive. Progress put some serious talent into the development of DataServer. I ran it in the dataserver-self-service mode with a schema-holder that was available on the local SSD (-RO -1) of the machine running the AVM client sessions. There was only one hop between the ABL logic and the SQL Server database. I never had the feeling that the database was "holding back" on my requests. (Whereas the Progress DB, on the other hand, seems to spend a lot of its time *pausing* between time slices - even when there is more work waiting to be done. And there is effectively a CPU *quota* that you see yourself running into - which is easily observed in the CPU usage of the _mprosrv processes.).
Hope this is clear. Again, whether you benefit from the advantages of dataserver may depend on how the ABL code is written, and how much data is used. If your application interacts with only a handful of database records on each screen then you won't have any performance challenges to begin with, no matter what the underlying technologies are.
The short answer is yes; however, there may be SQL schema constructs that are not supported by the Progress database schema. You will find out quickly enough by importing the SQL schema into the DataServer schema holder. Also, actual processing and speed may not equate to how a native Progress DB and 4GL interact. Your DataServer install will need to be on a SQL Server box, although your schema holder can live on Linux; this could introduce additional lag due to network traffic.
Check the availability guide for which OpenEdge versions support which MS SQL Server versions for the DataServer. DataServer is a separately licensed product. The migration tools only work when you have a development product installed as well.
I played with it briefly for a time. Depending on how the code is written, performance can be equal to or better than client/server ABL against a native Progress DB. There are a lot of moving parts to configure. I had an advantage in that everything was running in Windows, and DataServer was running in self-service mode.