
Re: LARGEDATA compiler directive


Keith Dick

Sep 20, 2016, 7:27:19 AM
mgman...@gmail.com wrote:
> I am hitting the error **** BINDER ERROR 53 **** No space left for stack after data block allocation when compiling a COBOL program on a Tandem server. Previously we just moved the variables to Extended-Storage, but now all variables are in Extended-Storage. Is there another way to fix this issue?

If you can switch to the native mode COBOL compiler (probably ecobol unless you have a very old system), that would eliminate the problem. If the program is pure COBOL85, that should be a fairly easy job. If the program calls any of your own TAL or C code, you would also have to switch those called procedures to native mode. And some of the calls to utility functions provided with COBOL85 have to be changed a bit when switching to native mode. Usually, the job is not very hard, though you might have an unusual case.
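
To give a rough idea, the native compile is usually just a matter of pointing the same kind of RUN command at the native compiler, something like this (SRCFILE and OBJFILE are made-up names, and your directive set will differ):

RUN $SYSTEM.SYSTEM.ECOBOL /IN SRCFILE, OUT $S.#OUT/ OBJFILE

I am going from memory here, so check the ecobol section of the manual for the directives it accepts.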

If you believe you cannot easily switch to native mode, take the last version of your program that did not run into this problem, list the memory map, and try to figure out what all is using space in the data segment below the 32K boundary. Perhaps you will find something taking space there that you were not aware of and can change.

I believe that COBOL uses space below 32K to hold pointers to the variables that are placed in extended storage. If so, and if you can put many extended storage variables under a single 01 level, that might conserve some space to get you past this problem, but you would be working on the edge of running out of space again soon, and it would be wise to seek a better long-term solution, such as converting to native mode.
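
To make concrete what I mean by grouping under a single 01 level (the data names here are invented, and this only helps if my belief about the per-01 pointers is right):

EXTENDED-STORAGE SECTION.
01  WS-EXTENDED-AREA.
    05  WS-TABLE-A      PIC X(10000).
    05  WS-TABLE-B      PIC X(20000).
    05  WS-WORK-BUFFER  PIC X(30000).

rather than making WS-TABLE-A, WS-TABLE-B, and WS-WORK-BUFFER each its own 01 item, so that several pointers below 32K collapse into one.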

Of course, there also is the approach of splitting the program into two or more programs. If it is a typical context-free Pathway server, it should not be a very difficult problem to take some of the functions offered by this program and move them to another program, though finding all the places that requests are sent and changing them might be a lot of work. You might avoid changing the requesters by introducing a new program that you use for the original serverclass, which looks at the request and uses SERVERCLASS_SEND to send the request on to the correct one of the two new serverclasses formed by splitting the original program. I don't think that is the best way to solve the problem in the long run -- switching to native mode would be better.

mizigel

Sep 21, 2016, 12:12:36 AM
Thanks Keith for re-posting my inquiry and for the response.

We were able to compile the program without any error by adding LARGEDATA as a compiler directive, but upon program execution we are having issues reading files. Any idea why?

I tried putting ?LARGEDATA in the source code instead (and removing the EXTENDED-STORAGE SECTION) but am now hitting **** BINDER ERROR 19 **** Data block cannot be allocated. A lot of subroutines are bound to the program, and I applied the same procedure to the subroutines related to the recent modification we performed, but it didn't help.

May I know how I can list the memory map? I'll try to check if there's something we can immediately apply to solve the issue.

Thank you very much.

Keith Dick

Sep 21, 2016, 5:13:02 AM
Error 19 is just another possible message you can get when trying to build a program that is too large to fit into the memory space of the old TNS architecture. If you don't split that program or convert it to native mode, you are going to continue to run into similar problems.

I see the manual says that the LARGEDATA directive must be used with ENV COMMON. If you are not using ENV COMMON, perhaps that is what is causing the file reading problem, but that is just a guess. If you are using ENV COMMON, then I do not have an idea why the program is having file reading problems. Perhaps if you described what file reading problem it is having, I might be able to make a guess, but I think the chances are low, even then.

Probably the easiest way to see the memory map is to use the BIND command:

LIST LOC DATA * FROM x
or
LIST LOC DATA * FROM x, BRIEF

where x is the name of your runnable program file. The second form, with the BRIEF option, makes the lines shorter so they fit on a terminal line without wrapping, but that omits the information about which source file the data was defined in. If you can make your terminal display 132 characters wide, you can use the first form and get all the information in the most readable way.
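
In case it helps, the session would look roughly like this (MYPROG standing in for your runnable program file):

TACL> BIND
@LIST LOC DATA * FROM MYPROG, BRIEF
@EXIT

The listing should show where each data block lands, so you can spot what is sitting below the 32K boundary.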

The above BIND command lists only the data blocks, which seem to be what is giving you problems now. Since you said the program contains many subprograms, I imagine each subprogram is small enough that placing them in the code space available in the TNS architecture won't give you problems, so you probably do not need to look at the code layout.

mizigel

Sep 21, 2016, 5:43:06 AM
Not sure if I am specifying it correctly to the compiler, but here's how it goes:

RUN $SYSTEM.SYSTEM.COBOL85 /IN <SOURCE-FILE>,OUT $S.#OUT,PRI 100/<OBJECT-FILE>;NOLIST;SYMBOLS;LESS-CODE 1;LARGEDATA;ENV COMMON;HIGHPIN; OPTIMIZE 2; SETTOG 2

If I put LARGEDATA before the ENV COMMON, it returns ** Error 48 ** Improper context for this directive.

If I put it after the ENV COMMON, it compiles without error but then has an issue reading a file. It always returns a not SUCCESSFUL-I-O status.

Keith Dick

Sep 21, 2016, 9:42:30 AM
Keep in mind that I have not used COBOL85 in many years, but ENV COMMON is the correct syntax. I don't know about the ordering, but what you describe sounds consistent with what the manual says.
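
If the in-source form of the directives behaves the same way as the command line, then based on the error 48 you got, I would expect the working order to be ENV COMMON first:

?ENV COMMON
?LARGEDATA

but that is inference from your result, not something I have tested.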

"not SUCCESSFUL-I-O" is not specific at all. Do you not get an error message from COBOL that includes some description of what COBOL found wrong plus perhaps a Guardian file error number? Do you even know exactly what the statement that gets the error is? Can you run the program in Inspect, put a breakpoint just before the statement that fails, and check the open files. Then step past the statement that gets the error and see whether the open file has a Guardian error associated with it (I think the Inspect command FILES * or FILES * DETAIL will show you this). If your program is using DECLARATIVES to catch and process file errors, you may have to put a breakpoint there to be able to display the condition after the error.

If all the subprograms you mention are COBOL and you don't have any TAL, C, or other subprograms, I think your time would be much better spent getting the program compiled with the native mode COBOL. Is there a specific reason you cannot at least try that?

Keith Dick

Sep 21, 2016, 10:48:08 AM
Oh, in case you do decide to try to use native mode COBOL, you must compile all of the subprograms with the native mode compiler too. That should be obvious, but some people have not realized that, so I want to mention it.

Also, if you try native mode, you need to be careful not to accidentally get duplicate copies of subprograms into your object file. For TNS programs using Binder for managing the object files, unneeded procedures were not included, but the way native object files are managed is different, and you need to change some ?SEARCH directives to ?CONSULT directives, leaving ?SEARCH mostly to the main program or to the linker. Also, if you use object files containing many subprograms as libraries, from which you want to include only the referenced subprograms, you need to change those to use .a files. It isn't hard once someone explains it, but the manual does not explain it. If you want to try native mode, I can help guide you through the common missteps.
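
As a concrete illustration of the ?SEARCH change (object file names invented): where a TNS subprogram source today says

?SEARCH SUBOBJ1, SUBOBJ2

the native-mode version would typically say

?CONSULT SUBOBJ1, SUBOBJ2

with ?SEARCH kept in the main program (or the equivalent handled by the linker), so each subprogram is pulled into the final object only once.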

I realize that there might be reasons you cannot use native mode, but if you can, it probably is the overall best solution to your memory size problems.

mizigel

Sep 28, 2016, 5:03:02 AM
Thanks for all the suggestions, Keith, really appreciate it. The reason we can't just convert to native COBOL is that we also have subroutines written in TAL and C. For now, we have managed to resolve the issue by commenting out some variables and paragraphs that are no longer in use, and we are planning to split the program as the permanent solution.

Keith Dick

Sep 28, 2016, 5:38:55 AM
Just in case you are not aware of this: There are native mode versions of TAL and C. It usually would not be a big job to get them to compile your TAL and C code, but it usually does take a little more effort than just compiling with the native mode compiler. How much effort depends on exactly what those subprograms do. I can't guess how that work would compare with finding and changing all the requester code to send messages to the correct one of the two servers when you split it into two servers. You'll have to decide that.

Whichever way you go, good luck with it!

Dave

Sep 28, 2016, 10:18:56 AM
Changing a large application to native mode can be a bear. Even if the TAL migrates easily, which I find called modules often do (as opposed to the old NonStop programs), you now have to maintain release environments for both code 100 and whichever native mode objects you are supporting.

Once you build the deployment environment it isn't too bad, but it does take a bit of effort to set up initially. Plus, if you are delivering a product to customers, they will want to know what has changed and why it changed when it was working fine. Also, if they build from delivered source, they will need to implement the new multi-path build streams.

wbreidbach

Sep 29, 2016, 5:03:48 AM
From my experience it is not a big deal to maintain TAL and epTAL from a single source. There is a conditional compilation directive, and you can easily do it like this:

?IFNOT ptal
?inspect,symbols
?runnamed
?HIGHPIN
?ENDIF ptal

Of course you can use "?IF ptal" for statements only meant for the pTAL compiler. I do not know whether there are similar options for COBOL and C, but I think this information should be available in the manuals.
So there is no need to maintain two versions of the source code; just handle the little differences as above.
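
For example, the inverse block for directives only the pTAL compiler should see would look like this (OPTIMIZE is just a stand-in for whatever pTAL-specific directives you use):

?IF ptal
?OPTIMIZE 2
?ENDIF ptal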

Dave

Sep 29, 2016, 8:30:50 AM
I was not indicating that maintaining two versions of the source is necessary, but that you need to manage two versions of the objects. This may not be a big deal for code developed for internal use, but if you deliver code to customers you are increasing their tasks too, and, not surprisingly, they rarely appreciate the extra work.

Customers who contract to receive source code and build on their site will want a clear explanation of "Why did this change?". For our TNS NonStop code, with files managed through FCBs, liberal use of registers and of STACK, CODE, and STORE statements, and all of the pointer references that TAL allows but epTAL does not, the cleanup amounts to more than a couple of hundred changes per program. And that's a lot of explaining to do!

wbreidbach

Sep 29, 2016, 8:49:43 AM
Ok, I understand, that is not that easy.
And indeed, if you are delivering software to customers, it makes things much more complicated.

Keith Dick

Sep 29, 2016, 9:39:28 AM
Well, a lot, maybe even most, of the code changes that make TAL compile under pTAL or epTAL will still compile with TAL (you might have to turn on an option to make it accept the new syntax -- I don't recall right now), so there does not have to be as much conditional compilation as you might otherwise think.

If someone is looking over your shoulder at the source code, yes, you may have to explain a lot of changes in the source if the TAL code made a lot of use of those low-level constructs. Assuming that they accept that converting to native mode is desirable, I imagine they would be receptive.

As for maintaining two sets of object files, the end-user customer who just receives the object files does not have to deal with two sets of object files, do they? The vendor just sends them the appropriate set of object files for their system type. Keeping track of which customer gets which kind of object file is a little extra tracking on the vendor's part, but does not seem like such a large problem to me. I guess it depends on how organized the vendor is. Of course, if the customer has a mix of system types, they might have to keep track of two sets of object files, but not on any one system.

Yes, there can be extra work beyond just updating the code, depending on the exact situation. Ensuring that you never run out of address space in the future might be worth whatever the extra effort there is.