Cobol For Pc

Romilda Tiger

Aug 4, 2024, 4:36:38 PM
to chamroricul
That is, as opposed to merely the age of most programs written in it, the need to skimp on memory and disk usage imposed by old hardware, and the fact that nobody expected those programs to survive for 30 years?

People don't realize that the capacity of today's laptop hard drive would have cost millions in 1980. You think saving two bytes is silly? Not when you had 100,000 customer records and a hard drive the size of a refrigerator held 20 megabytes and needed a special room to keep it cool.


Yes and no. In COBOL you had to declare variables such that you actually said how many digits a number had; e.g., YEAR PIC 99 declared the variable YEAR so that it could hold only two decimal digits. So yes, it was easier to make that mistake than in C, where you would use an int, short, or char for the year and still have plenty of room for years greater than 99. Of course, that doesn't protect you from printf-ing "19%d" in C and still having the problem in your output, or from making other internal calculations that assume the year will be less than or equal to 99.
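A minimal sketch of the difference (field names are made up, and the *> inline comments are a later-standard convenience):

       01  WS-YEAR-2    PIC 99.      *> two digits: 1999 and 2099 both land as 99
       01  WS-YEAR-4    PIC 9(4).    *> four digits: 1999 and 2099 stay distinct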


1: They didn't design 30 years ahead. I can't really blame them. Given those memory constraints, choosing between squeezing two bytes per date and making it work 30 years later, most likely I would have made the same decision.


Fascinating question. What is the Y2K problem, in essence? It's the problem of not defining your universe sufficiently. There was no serious attempt to model all dates, because space was more important (and the apps would be replaced by then). In COBOL that principle applied at every level: be efficient and don't overdeclare the memory you need, both in storage and in the program.


Where efficiency is important, we commit Y2K-ish errors... We do this every time we store a date in the DB without a timezone. So modern storage is definitely subject to Y2K-ish errors, because we try to be economical with space (though I bet it's over-optimizing in many cases, especially at the enterprise overdo-everything level).


On the other hand, we avoid Y2K-ish errors at the application level because every time you work with, say, a Date in Java, it carries around a ton of baggage (like the timezone). Why? Because Date (and many other concepts) are now part of the platform, so the smart dudes who build it try to model a full-blown concept of a date. Since we rely on their concept of a date, we can't screw it up... and it's modular and replaceable!


First: most software of that age used only two digits for year storage, since no one figured their software would last that long! COBOL had been adopted by the banking industry, who are notorious for never throwing away code. Most other software WAS thrown away; the banks' wasn't!


Second: because COBOL was constrained to 80 characters per record of data (the size of a punch card!), developers were under even greater pressure to limit the size of fields. And since they figured "the year 2000 won't be here till I'm long retired!", the two characters of saved data were huge!


When your quarter-of-a-million-dollar computer had 128K and four disks totalling about 6 megabytes, you could either ask your management for another quarter million for a 256K machine with 12 megabytes of disk, or be very, very efficient about space.


So all sorts of space-saving tricks were used. My favourite was to take a YYMMDD date such as 991231, store it in a packed decimal field as X'9912310C', then knock off the last byte and store just X'991231'. So instead of 6 bytes you only took up 3.
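Roughly what that looks like in COBOL -- a sketch, with made-up field names:

       01  WS-YYMMDD     PIC 9(6).           *> e.g. 991231
       01  WS-PACKED     PIC 9(7) COMP-3.    *> 4 bytes: X'9912310C'
       01  WS-PACKED-X   REDEFINES WS-PACKED PIC X(4).
       01  WS-DATE-DISK  PIC X(3).           *> the 3 bytes actually written

           MOVE 991231 TO WS-YYMMDD
      *    Shift left one digit so the sign nibble lands in byte 4
           COMPUTE WS-PACKED = WS-YYMMDD * 10
      *    Keep only the first three bytes: X'991231'
           MOVE WS-PACKED-X (1:3) TO WS-DATE-DISK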


It was much more related to storing the year in data items that could only hold values from 0 to 99 (two characters, or two decimal digits, or a single byte). That and calculations that made similar assumptions about year values.


I've seen giant Fortran programs with no actual subroutines. Really, one 3,000-line main program, not a single non-library subroutine, that was it. I suppose this might have happened in the COBOL world, so now you have to read every line to find the date handling.


Some solutions were very bad vis-a-vis the millennium. Most of those bad solutions did not matter, as the applications did not live 40+ years. The not-so-tiny minority that did live caused the well-known Y2K problem in the business world.


(Some solutions were better. I know of COBOL systems coded in the 1950s with a date format good until 2027 -- it must have seemed like forever at the time; and I designed systems in the 1970s that are good until 2079.)


COBOL 85 (the 1985 standard) and earlier versions didn't have any way to obtain the current century, which was one factor intrinsic to COBOL that discouraged the use of 4-digit years even after the two extra bytes of storage were no longer an issue.


However, even this was not necessarily a problem: in the '90s we wrote code that checked whether the year portion was less than 70 and, if so, assumed the date was 20YY -- which merely turns it into a Y2K070 problem. :-)
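The windowing looked something like this (a sketch; field names are invented, and ACCEPT ... FROM DATE is the COBOL 85 way to get the two-digit year):

       01  WS-DATE.
           05  WS-YY    PIC 99.
           05  WS-MM    PIC 99.
           05  WS-DD    PIC 99.
       01  WS-CC        PIC 99.

           ACCEPT WS-DATE FROM DATE       *> YYMMDD, no century to be had
           IF WS-YY < 70
               MOVE 20 TO WS-CC           *> assume 20YY ...
           ELSE
               MOVE 19 TO WS-CC           *> ... else 19YY
           END-IF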


I think the thing not really being mentioned is that COBOL was a big part of the issue because it was so widespread as a legacy language from those earlier years, particularly in old mainframe applications. And that was compounded by the fact that much of the generation that programmed in COBOL had retired or was dying off.


Hello everyone



I started coding in COBOL six months ago; I'm from Venezuela. I used to compile programs from an x3270 terminal emulator using TSO, ISPF, Endevor, etc. Now I'm using IDz, and we need to do everything from there rather than the terminal. I already know how to compile COBOL from it, and I also did a zTrial to learn the basic concepts of IDz; in that trial I learned how to debug, visualize variables, set breakpoints, etc.


In order to analyze your source code with SonarQube you need to first extract it onto a filesystem. You can use your own tool or an open-source tool; SonarSource does not provide any connectors or source code extraction tools.


The Indicator Area, which has a special meaning (for instance, * means the line is a comment line, D means the line is only taken into account in debug mode, etc.), is located at column 0. The size of the source code area is not limited.
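For instance, in fixed format (a sketch; the indicator sits right after the six-character sequence area):

      * This whole line is a comment (indicator '*')
      D    DISPLAY 'compiled only in debug mode'
           DISPLAY 'an ordinary line (blank indicator)'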


Copybooks are, by definition, COBOL files that are not syntactically valid by themselves. However, copybooks are usually needed to properly parse COBOL programs. Thus, paths to the copybooks must be listed through the sonar.cobol.copy.directories property.
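For example (directory names are hypothetical):

sonar.cobol.copy.directories=copybooks,shared/copybooks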


To have copybooks imported into a project, and issues logged against them, the copybook directories must be added to sonar.sources AND the copybook file suffixes must be added to sonar.cobol.file.suffixes, e.g.:
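Something along these lines, with paths and suffixes adjusted to your project (the values here are just examples):

sonar.sources=src/cobol,src/copybooks
sonar.cobol.file.suffixes=cbl,cpy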


Note that it is possible to analyze a COBOL project without file suffixes. To do this, remove the two suffix-related properties from your configuration and substitute the following setting for sonar.lang.patterns.cobol:
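As best I recall the documented pattern, it is the catch-all below (verify against your SonarQube version):

sonar.lang.patterns.cobol=**/*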


The -Si (include) flag controls the actions of the source code control system. It must be followed by an argument that specifies a pattern that the compiler will search for in the Identification Area of each source line. If the pattern is found, then the line will be included in the source program, even if it is a comment line. However, if the pattern is immediately preceded by an exclamation point, then the line will be excluded from the source (i.e., commented out).
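To illustrate (a sketch; the pattern MYSQL is made up, and the patterns pushed to the right stand in for the Identification Area, columns 73-80):

           DISPLAY 'always included'
           DISPLAY 'included only with -Si MYSQL'                  MYSQL
      *    DISPLAY 'comment line, but included with -Si MYSQL'     MYSQL
           DISPLAY 'excluded when compiled with -Si MYSQL'        !MYSQL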


There are two ways in SonarQube to specify the list of ACUCOBOL-GT flags to be used to preprocess the source code. The first option is to define a list of global flags that will be used to preprocess all source files. This can be done under Administration > General Settings > COBOL > Preprocessor.


COBOL analysis offers rules which target embedded SQL statements and require the analyzer to have knowledge of the database catalog (e.g., the primary key column(s) of a given table). These rules raise issues only if the database catalog is provided to the analysis. For the moment, this is available only for IBM DB2 (z/OS) catalogs, and the catalog must be provided as a set of CSV ("Comma-Separated Values") files.


sonar.cobol.sql.catalog.csv.path should define a directory that contains 8 CSV files. Each of these CSV files contains data for a specific DB2 catalog table and is named after it. The following table lists the required files and their respective mandatory columns. Additional columns may be listed, but will be ignored:
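The path itself is set like this (directory name hypothetical; the files inside would be one CSV per catalog table, e.g. SYSTABLES.csv or SYSCOLUMNS.csv, with the full list of eight in the plugin documentation):

sonar.cobol.sql.catalog.csv.path=db2-catalog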
