a z/OS port in the future


Bill O'Farrell

Aug 24, 2016, 8:31:15 PM
to golang-dev
Now that Go is available for Linux on z (i.e. s390x), I'd like to give everyone a heads-up that we hope to have a z/OS port ready in the next few months. z/OS is the modern 64-bit mainframe operating system with a lineage that goes back to MVS, first released in 1974. One of its biggest claims to fame is its stability: some users haven't had any unplanned downtime in literally decades, which is probably why more than 90% of Fortune 500 companies run z/OS. It would be great to get Go running on this robust and (for business at least) ubiquitous platform. We've had great support from the community in porting to Linux on z, so I wanted to run this by everyone early in the process. It won't be ready in time for 1.8, but 1.9 could be a potential target. Comments appreciated.

Matthew Dempsky

Aug 24, 2016, 9:31:54 PM
to Bill O'Farrell, golang-dev
Neat.

On a scale from Windows to Solaris, how POSIX-y is z/OS?

What's the long-term binary compatibility story like?  Is there a stable kernel and/or userland ABI we can/should rely on?

Does GCC run on z/OS?  Is there anything that will make supporting cgo extra exciting?

Are you thinking GOOS=zos or something else?


--
You received this message because you are subscribed to the Google Groups "golang-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-dev+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Rob Pike

Aug 24, 2016, 10:35:30 PM
to Matthew Dempsky, Bill O'Farrell, golang-dev
GOOS=zos is a good name for a band.

-rob

Dave Cheney

Aug 24, 2016, 10:38:11 PM
to Rob Pike, Matthew Dempsky, Bill O'Farrell, golang-dev
If GOOS is pronounced, "goose", does this mean zos is pronounced "zeus"?

Keith Randall

Aug 24, 2016, 10:39:01 PM
to Dave Cheney, Rob Pike, Matthew Dempsky, Bill O'Farrell, golang-dev
The "os" is a stutter.  GOOS=z.




Brad Fitzpatrick

Aug 24, 2016, 10:46:30 PM
to Keith Randall, Dave Cheney, Rob Pike, Matthew Dempsky, Bill O'Farrell, golang-dev
SGTM

Matthew Dempsky

Aug 24, 2016, 11:11:59 PM
to Keith Randall, Dave Cheney, Rob Pike, Bill O'Farrell, golang-dev
On Wed, Aug 24, 2016 at 7:38 PM, Keith Randall <k...@google.com> wrote:
The "os" is a stutter.  GOOS=z.

We should rename GOOS=darwin to GOOS=x.

Andrew Gerrand

Aug 24, 2016, 11:13:40 PM
to Matthew Dempsky, Keith Randall, Dave Cheney, Rob Pike, Bill O'Farrell, golang-dev
But they just renamed OS X to "macOS".

So GOOS=mac ?


Brad Fitzpatrick

Aug 24, 2016, 11:14:15 PM
to Matthew Dempsky, Rob Pike, Dave Cheney, Keith Randall, golang-dev, Bill O'Farrell

Please. That naming was sooo last month.

It's macOS now. GOOS=mac.



Matthew Dempsky

Aug 25, 2016, 12:17:51 AM
to Andrew Gerrand, Keith Randall, Dave Cheney, Rob Pike, Bill O'Farrell, golang-dev
As long as the iPhone port is GOOS=i.

John McKown

Aug 25, 2016, 11:14:18 AM
to golan...@googlegroups.com
On Wed, Aug 24, 2016 at 8:31 PM, 'Matthew Dempsky' via golang-dev <golan...@googlegroups.com> wrote:
Neat.

On a scale from Windows to Solaris, how POSIX-y is z/OS?

z/OS is _supposed_ to be 100% POSIX compliant. I am not a POSIX expert, but if a customer finds a variance, they can request a fix. IBM is fairly good about such things with z/OS (aka "the flagship OS for the z Series", according to POK). In addition, z/OS UNIX is UNIX 95 branded by the Open Group: http://www.opengroup.org/openbrand/register/brand3601.htm

 

What's the long-term binary compatibility story like?  Is there a stable kernel and/or userland ABI we can/should rely on?

Backward compatibility is excellent. I've worked on IBM z/OS since it came out, and on its predecessors (OS/390, MVS/ESA, MVS/XA, MVS) going back to 1979. Yes, I'm one old person (63). Now, those systems were not POSIX compatible, but an executable program that was compiled back in the 1980s for MVS will most likely still run on z/OS without recompilation (though current compilers may not compile old source if the language definition has changed, as COBOL's has). The IBM z/OS people worship at the shrine of backward compatibility, even when it causes major disruptions in supporting new facilities. They'll actually "dual path" at times to support old customer programs.

 

Does GCC run on z/OS?  Is there anything that will make supporting cgo extra exciting?

Ah! Now here's the fly in the ointment. IBM does not support any of the GNU tool chain on z/OS. As far as I know, the FSF does not support the z/OS system. There is a software company, Rocket Software, which has ported some of the GNU utilities; the home page for this is http://www.rocketsoftware.com/ported-tools . I don't work for them, but I have a friend who does, and I support their products on z/OS (I'm a z/OS system programmer).


The single biggest PITA is going to be the character set. The native IBM z/OS character set is EBCDIC. And not just one variant: historic z/OS has used CP-037, while z/OS UNIX uses IBM-1047. They are generally compatible, but with some different code points for things such as []{} and other "special" characters. This requires some adjustments if a person is using a 3270 emulator. The reason this is a PITA is that, from what I can tell, Go is designed to use Unicode only and likely won't work with EBCDIC. Many coding assumptions made in C with ASCII are not true in EBCDIC. Case in point: the mappings of the Ctrl-<letter> characters in EBCDIC are not contiguous like they are in ASCII, and their code points don't collate in the same order, so you can't map from a <letter> to a Ctrl-<letter> using a simple formula such as <letter>&0x1f. Another weirdness is that there is a "gap" in the code points between I & J and R & S, as well as between "i" & "j" and "r" & "s". EBCDIC is a very weird encoding if you come from the ASCII world. It makes sense only if you come (as z/OS originally does) from the "Hollerith card" (aka "punched card") world. This was IBM's original world. When they made the original S/360 (hardware progenitor of the current z Series), they decided to be compatible with the equipment their current customers had. Bad decision, at least in today's world.
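The letter gaps John describes are visible directly in the code points. A quick illustration in Go (the values below are the standard IBM-1047 assignments):

```go
package main

import "fmt"

func main() {
	// IBM-1047 (EBCDIC) lowercase letters come in three runs:
	// a..i = 0x81..0x89, j..r = 0x91..0x99, s..z = 0xA2..0xA9.
	const (
		eI = 0x89 // 'i'
		eJ = 0x91 // 'j'
		eR = 0x99 // 'r'
		eS = 0xA2 // 's'
	)

	// In ASCII, 'j'-'i' == 1 and 's'-'r' == 1. In EBCDIC:
	fmt.Println(eJ-eI, eS-eR) // 8 9

	// So the ASCII idiom "for c := 'a'; c <= 'z'; c++" would visit
	// non-letter code points if applied to EBCDIC bytes.
}
```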

I hope this old man's blathering has been of some use.


--
Klein bottle for rent -- inquire within.

Maranatha! <><
John McKown

mun...@ca.ibm.com

Aug 25, 2016, 11:24:01 AM
to golang-dev, billo...@gmail.com

On a scale from Windows to Solaris, how POSIX-y is z/OS?

Modern language runtimes are typically built on top of the z/OS Language Environment (LE for short). LE provides POSIX functions. So towards the right-hand side of your scale, I think.

What's the long-term binary compatibility story like?  Is there a stable kernel and/or userland ABI we can/should rely on?

Very long term :) LE provides a way to get at POSIX functions via a userland ABI, so we can use that for "system calls" and don't necessarily have to use cgo.


Does GCC run on z/OS?  Is there anything that will make supporting cgo extra exciting?

No GCC... The C compiler on z/OS is IBM's XL C. One exciting (?) thing is that XPLINK (the newest linkage convention on z/OS) stores the stack pointer in R4, rather than R15 like ELF does. It's not an enormous problem, but does mean that the Go ABI (at least as it currently stands) will be a bit less compatible with the system linkage convention than it is on Linux.  But at least the stack grows downwards in XPLINK...

Are you thinking GOOS=zos or something else?

GOOS=zos is my preference. I'm also happy with GOOS=z, but I can imagine problems resulting from files suffixed with "_z" suddenly only compiling on z/OS... The libuv port sometimes refers to z/OS as os390, so that would be a sensible fallback if there is a painful naming conflict. Unfortunately GOOS=os390 isn't quite as good a name for a band :(

Matthew Dempsky

Aug 25, 2016, 11:31:30 AM
to mun...@ca.ibm.com, golang-dev, Bill O'Farrell
On Thu, Aug 25, 2016 at 8:23 AM, <mun...@ca.ibm.com> wrote:
Very long term :) LE provides a way to get at POSIX functions via a userland ABI, so we can use that for "system calls" and don't have to use cgo necessarily.

Sorry, to be clear: is it the kernel ABI or the userland ABI that has long-term support, or both?

Though I suppose if you're planning on only using the userland ABI like we do on Windows and Solaris (which is fine IMO), only userland ABI stability matters.

No GCC... The C compiler on z/OS is IBM's XL C.

Does XL C support the GCC extensions used by cgo?  Or will we need to extend cgo to have an XL C output mode?

Matthew Dempsky

Aug 25, 2016, 11:35:13 AM
to John McKown, golang-dev
On Thu, Aug 25, 2016 at 5:40 AM, John McKown <john.arch...@gmail.com> wrote:
The single biggest PITA is going to be the character set. The native IBM z/OS character set is EBCDIC. [...]

The Go spec says source files are UTF-8.  How do we reconcile this on z/OS if the rest of the OS expects text files to be EBCDIC?

On the upside, Brad already has a CL to implement trigraph support.

David Chase

Aug 25, 2016, 11:46:11 AM
to Matthew Dempsky, mun...@ca.ibm.com, golang-dev, Bill O'Farrell
1979, PL/C, SVS.
Cards and lineprinters.
Can I still get an abend 0C4?

Did most of my SVS, MVS, and VM/CMS hacking in PL/1 and BCPL.
Seems like EBCDIC could be a problem.
Could we just translate everything on the way in and out?



Brad Fitzpatrick

Aug 25, 2016, 11:50:45 AM
to Matthew Dempsky, John McKown, golang-dev
Let me know if I should dust off https://golang.org/cl/21400


andrey mirtchovski

Aug 25, 2016, 11:53:52 AM
to Brad Fitzpatrick, Matthew Dempsky, John McKown, golang-dev
> Let me know if I should dust off https://golang.org/cl/21400

date checks out.

John McKown

Aug 25, 2016, 11:55:51 AM
to golan...@googlegroups.com
The main editor in z/OS (which runs under TSO, not in a UNIX shell) has an option (selectable on the display panel) to edit ASCII files. There is also a rather complicated auto-conversion facility (called "Enhanced ASCII") which can be implemented on z/OS. That facility allows a file to be "tagged" as being encoded in a specific character set, such as IBM-1047, ISO8859-1, or many others. If you're curious: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.bpxb200/uenha.htm and https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.bpxa400/bpxug293.htm , or a PDF called the "Porting Guide" at http://www-03.ibm.com/systems/resources/servers_eserver_zseries_zos_unix_pdf_docs_portbk_v1r9.pdf


 


mun...@ca.ibm.com

Aug 25, 2016, 12:37:43 PM
to golang-dev, john.arch...@gmail.com
The Go spec says source files are UTF-8.  How do we reconcile this on z/OS if the rest of the OS expects text files to be EBCDIC?

LE supports an ASCII mode (i.e. it provides ASCII versions of all the POSIX functions that have string arguments), which means that things like file names, host names and so on can be specified in ASCII. Obviously this does impose constraints on the range of characters that can be used for these types of things. Terminal I/O can also be automagically converted. Raw data written to files/sockets will need to be in whatever character encoding the receiver expects. The only place EBCDIC really becomes a necessity in the standard library is when reading/writing system-specific files (think /etc/services) and parsing the output of external programs, so there might need to be some shims/helpers for that kind of thing (which seems to be fairly rare). Oh, and linker symbols need to be converted, but linking will be very z/OS specific anyway. I could be wrong, but I think practically speaking this will end up similar to Windows, which I believe uses UTF-16 for system files.

As for the Go source files themselves, they are defined to be in UTF-8, so editors on z/OS will just need to switch character sets for those files. It's not too bad: filesystem tags can be used, or the files can be edited on a shared file system, which is what I tend to do.

Bill O'Farrell

Aug 25, 2016, 3:32:40 PM
to golang-dev, mun...@ca.ibm.com, billo...@gmail.com
At this point we're investigating XL C in the context of cgo. It needs investigation, and there may be other options which are more compatible (but no, not GCC).

Rob Pike

Aug 25, 2016, 4:46:46 PM
to Bill O'Farrell, golang-dev, Michael Munday
Does zOS support 1401 emulation mode?

-rob



Bill O'Farrell

Aug 25, 2016, 5:26:43 PM
to golang-dev, billo...@gmail.com, mun...@ca.ibm.com
Nope. As far as I know that went out with the System/370.

Rob Pike

Aug 25, 2016, 5:27:33 PM
to Bill O'Farrell, golang-dev, Michael Munday
Awwww. Sniff.

-rob



Michael Jones

Aug 25, 2016, 7:31:34 PM
to Rob Pike, Bill O'Farrell, golang-dev, Michael Munday

I’ve had the privilege of mentorship from both Bo Evans and Fred Brooks. I feel great admiration for IBM’s hardware team and many aspects of the mainframe software groups—channel controllers, RACF, VM/370 and beyond, John Cocke, Hursley House, Mike Cowlishaw, RTP, Henry Rich, etc. All magical.

One comment on EBCDIC’s glyph arrangement: it never made sense to me until I got the fold-out pocket reference card that showed the character layout on a 16x16 grid. In that format you can see that all of the logical groupings of symbols are tightly packed rectangles separated by whitespace. Only on this “virtual {hex}x{hex} cross-product typewriter” did it have some sense…

…but even this structure is made invisible by typical columnar layouts. You can also understand here the great woes of EBCDIC:

The difficult-to-comprehend gap in rectangle structure before the ‘s’ and ‘S’.

The debris of punctuation strewn down the left column and into the gap before the esses.

The 118 characters that are arranged to occupy more than the seven low-order bits of the byte, which Fred Brooks of the S/360 & OS/360 team chose to be 8 bits long.

The typical-at-the-time but now quaint notion that every special control specification needed its own symbol (“in-band signaling”) rather than an escape mechanism followed by normal characters to describe the control. This made logic decoding simpler in devices of yore, but wastes symbols on uses that are abandoned before the assignment is standardized. The choice was not wrong for the times, but the lesson is one for the ages.

EBCDIC has the interesting property that you can go from lower case to upper case with a constant offset, but cannot get from one letter to the next with x+1 or test for alphabetical with ‘a’ <= x <= ‘z’ || ‘A’ <= x <= ‘Z’.
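Both halves of that property can be checked against the standard IBM-1047 code-point assignments. A small Go illustration:

```go
package main

import "fmt"

func main() {
	// IBM-1047 EBCDIC letter ranges (standard assignments):
	// a..i = 0x81..0x89, j..r = 0x91..0x99, s..z = 0xA2..0xA9
	// A..I = 0xC1..0xC9, J..R = 0xD1..0xD9, S..Z = 0xE2..0xE9

	// Upper case really is a constant +0x40 from lower case,
	// for the start of each of the three runs:
	fmt.Println(0xC1-0x81, 0xD1-0x91, 0xE2-0xA2) // 64 64 64

	// But a byte-range test is unsound: bytes 0x8A..0x90 sit
	// between 'i' (0x89) and 'j' (0x91), yet are not letters.
	x := byte(0x8A)
	naiveIsLower := x >= 0x81 && x <= 0xA9
	fmt.Println(naiveIsLower) // true, even though 0x8A is not a letter
}
```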

 

Michael

Lucio De Re

Aug 25, 2016, 11:46:08 PM
to Michael Jones, Rob Pike, Bill O'Farrell, golang-dev, Michael Munday
Given that upper case ought to have been a font change all along (I think Russian/Cyrillic does that), rather than a distinct set of symbols, none of EBCDIC, ASCII or Unicode got it right. Maybe Seymour Cray's Fieldata did :-). Or was the DoD actually responsible for that?

Nowhere is there the necessary elbow room to fix that, is there?

Lucio.


Lucio De Re
2 Piet Retief St
Kestell (Eastern Free State)
9860 South Africa

Ph.: +27 58 653 1433
Cell: +27 83 251 5824
FAX: +27 58 653 1435

Rob Pike

Sep 9, 2016, 8:39:23 PM
to Bill O'Farrell, golang-dev, Michael Munday
I would like to understand how you plan to approach the EBCDIC question. Go is defined to have source code in UTF-8-encoded Unicode. That can potentially be subset for zOS to just ASCII. But I don't believe it's reasonable, for example, to create a compiler variant that accepts some EBCDIC variant (braces anyone?) for its source code.

So that's easy, Go code on zOS is ASCII or maybe, one hopes, full UTF-8 and Unicode using some editor mode.

But as for programs themselves, they must work well with EBCDIC, and the plan for that is much harder to see. There are many libraries and even language primitives in Go that assume UTF-8, as the spec says so. You can't even define an EBCDIC rune in the source code except as an integer that puns to a small-valued Unicode code point.

What do you plan to do? This question needs early discussion.

-rob

mikioh...@gmail.com

Sep 9, 2016, 8:52:39 PM
to golang-dev
i have no experience with z/os or mvs, but it looks interesting, especially the high availability and logical partitioning features for packet networking. if you have a plan to provide a package that controls hipersockets/interlinks/direct link/phy io, i look forward to seeing (uh, reviewing, if there's spare time) it.

mun...@ca.ibm.com

Sep 9, 2016, 10:08:15 PM
to golang-dev, billo...@gmail.com, mun...@ca.ibm.com
So that's easy, Go code on zOS is ASCII or maybe, one hopes, full UTF-8 and Unicode using some editor mode.

Yes, the idea is to use UTF-8 with the limitation that strings passed to system calls (filenames, hostnames etc.) will be restricted to ASCII in order to take advantage of the automatic conversion facilities.


But as for programs themselves, they must work on EBCDIC

We'll need to provide some sort of conversion library. It's not clear yet (to me anyway) whether it should live in the standard library or somewhere else (x/text maybe?).  Either way inside Go there will need to be a conversion facility available for tests and the linker but personally I'm happy for that to live in an internal package.

I'm hoping (maybe I'm being naive) that if a developer wants to interact with an EBCDIC program or file they will just need to use a shim implementing the io.Reader and io.Writer interfaces. In exactly the same way they would if they were interacting with some other file/stream encoding (e.g. gzip). Third-party libraries might not always work "out of the box". However this is true of Linux and Windows too. In my opinion the UTF-8-centric nature of Go might well be very useful for providing web interfaces and other external communications.

mun...@ca.ibm.com

Sep 9, 2016, 10:20:17 PM
to golang-dev, mun...@ca.ibm.com, billo...@gmail.com
Sorry, to be clear: is it the kernel ABI or userland ABI that have long-term support, or both?

Binaries linked against LE (the userland ABI) should be compatible with future versions of LE:

http://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.ceea600/olmc.htm

I don't know if there will be much need to interact with the kernel directly but I believe the compatibility guarantees are similar.


john...@gmail.com

Sep 11, 2016, 1:55:44 AM
to golang-dev, mun...@ca.ibm.com, billo...@gmail.com
The operating system ABI (aka "kernel ABI", although the term kernel isn't used) has ultra-long-term support. At least some programs that were written in the late 60s using it still run without recompilation. The LE interface sort of has long-term support, but that's modified by its being the COBOL, C and C++ language libraries. IBM does enhance it from release to release, which they can do since they also own the compilers that use it. I'd worry that a call you depend on suddenly does weird things in a new release because none of the latest compilers use it any more.

John Roth

john...@gmail.com

Sep 11, 2016, 1:55:49 AM
to golang-dev
I happened to see this thread, so here’s a few notes. I’m in my 70s, and I’ve worked with the OS/360 line since PCP, MFT and MVT, hacking on the beast back when it was open source and you could do that. (I also know what happened to Option 4, aka VMS, but that’s only of hysterical interest.)

A couple of minor points before getting into the meat. One of almost no interest but was mentioned in the thread is that 1401 emulation was never officially supported by any operating system - it was always a standalone hardware option on the 360/30 and 360/40. I think there were a couple of heavily modified versions of DOS that supported it on a 360/40. There are quite competent 1401 interpreters that work today, so if playing with a simulated 1401 rocks your boat, have at it.

There’s a reason that the Language Environment doesn’t use R15 - system calls use R0, R1 and return a condition code in R15, which means that using R15 for the stack would have to save and restore it around each and every system call.

On EBCDIC (Extended Binary Coded Decimal Interchange Code). It’s an 8-bit version of BCDIC (Binary Coded Decimal Interchange Code) that was used on earlier IBM systems; that in turn was an extension of the coding used on punched card systems. (And yes, I worked on some of those earlier systems.) On the 6-bit encoding, the vacant character in front of the letter S contained the slash. The reason for the weird layout is that the designers decided to use the high-order bit to distinguish alphanumeric characters from special characters. That made some common character-manipulation tasks relatively easy - in Assembler. Once you realize that, EBCDIC is simply BCDIC with some twiddling in the two high order bits.

Stability isn’t the reason that companies have stuck with it. Modern server operating systems are just as stable. The reason people have stuck with it, despite the problems and the fact that today it’s a real outlier among modern operating systems, is COBOL: specifically, the difficulty of converting legacy programs that are central to business operations, together with the file formats they require.

The last major gig I had before retiring was on a mixed Unix and mainframe team, and I can personally testify that the Unix people simply didn’t comprehend the mainframe side. It’s too different. That problem, by itself, will almost certainly relegate a zOS port of Go to toy status: the people doing the port most likely have no idea what the business side has to deal with. Operations and system administration, somewhat. Business, no.

Let’s start out with data types. Anything that deals with files formatted to work with COBOL has to deal with fixed-length 8-bit EBCDIC character fields, as well as fixed-point decimal arithmetic. If you expect Go to be anything other than a toy on that system, it has to handle those two data types. Now, if you look at the Wikipedia article on COBOL, it seems like there are alternatives. In actual practice (that is, real programs doing real work) there aren’t. Those new-fangled data types simply haven’t been adopted to any great extent.

Fixed-length 8-bit character fields are the reason that UTF-8 encoding is a non-starter. If a field is supposed to be 30 characters, it’s 30 bytes and that’s it. No more, no less. There’s no room for multi-byte characters, and there’s no silliness about the string being variable length ending with a null character. It’s got 30 characters, padded on the right with blanks.
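A field like that maps naturally onto a fixed slice of a record's bytes. A minimal sketch (the 30-byte name field and its offset below are a hypothetical layout, not from any real copybook):

```go
package main

import (
	"fmt"
	"strings"
)

// nameField extracts a COBOL-style fixed-width field: exactly
// 30 bytes at a fixed offset, padded on the right with blanks,
// with no delimiters and no terminating NUL.
func nameField(record []byte) string {
	const start, width = 0, 30 // hypothetical layout for illustration
	return strings.TrimRight(string(record[start:start+width]), " ")
}

func main() {
	// 30-byte name field followed by a 2-byte field.
	rec := []byte("ACME TOOL AND DIE CO          42")
	fmt.Printf("%q\n", nameField(rec)) // "ACME TOOL AND DIE CO"
}
```

Note that this only works because the width is fixed in bytes; a multi-byte UTF-8 character inside the field would break the byte-count assumption, which is exactly the point being made above.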

Fixed decimal means the field is composed of decimal digits with an assumed decimal point that’s part of the data type. The field can be packed two digits per byte. This has hardware support, meaning that if you expect any speed out of it, a Go compiler that supports it will have to generate the appropriate instructions. Otherwise you’re stuck with format conversions to and from the disk record formats, and you can expect disk formats will have packed decimal fields. Packed fields can be as long as 31 decimal digits, and since a decimal digit is somewhat over 3 bits of information, a field can be too long to put in a 63-bit integer. That’s not actually a practical issue, fortunately, and at least some versions of z Systems have decimal floating point in hardware. Of course, that means the compiler is going to have to generate the correct instructions.
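Decoding such a field in pure Go (without the hardware decimal instructions) is mechanical. A minimal sketch of unpacking, assuming the common convention that the final low nibble holds the sign (0xD negative; 0xC and 0xF treated as positive here):

```go
package main

import "fmt"

// unpackDecimal decodes a packed-decimal (COBOL COMP-3 style) field:
// two digits per byte, with the last byte's low nibble holding the
// sign. The assumed decimal point (scale) is part of the field's
// declaration, not the data, so the caller tracks it separately.
func unpackDecimal(b []byte) (digits int64, negative bool, err error) {
	if len(b) == 0 {
		return 0, false, fmt.Errorf("empty packed field")
	}
	for i, by := range b {
		hi, lo := int64(by>>4), int64(by&0x0F)
		if hi > 9 {
			return 0, false, fmt.Errorf("bad digit nibble %x", hi)
		}
		digits = digits*10 + hi
		if i == len(b)-1 {
			negative = lo == 0x0D // sign nibble
		} else {
			if lo > 9 {
				return 0, false, fmt.Errorf("bad digit nibble %x", lo)
			}
			digits = digits*10 + lo
		}
	}
	return digits, negative, nil
}

func main() {
	// 0x12 0x34 0x5C encodes +12345; with an assumed scale of 2
	// the business value is 123.45.
	v, neg, _ := unpackDecimal([]byte{0x12, 0x34, 0x5C})
	fmt.Println(v, neg) // 12345 false
}
```

This is the slow-path conversion the post alludes to; generating the hardware pack/unpack or decimal-FP instructions would be the compiler's job.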

Now for file types. Unix is based on the stream of bytes concept. zOS is based on a record and block concept. If I want to read a record from a sequential file, I ask for the next record. How many bytes do I ask for? I don’t. Doing application programming, I simply don’t care how big the record is. As long as it maps into the correct struct (FD in COBOL, and COBOL will take care of the record length for me), I’m good with it. If the records are variable length, I don’t even know how big it’s going to be - that length is kept in the first two bytes in binary. Records are not delimited by any form of record delimiter.

Now for the tool-chain. On that last gig, major business systems were written in macro Assembler. I hope nobody dislocated their jaw on that one! They may have converted it by now (I certainly hope so!), but the key here is that zOS programmers who are familiar with Assembler will not be happy with the assembler that comes with the Go toolchain. They’ll expect that the existing High Level Assembler integrates properly with Go programs. That is not going to be trivial.

With that as background, it should be fairly obvious why “essential” tools like the GNU utilities haven’t been ported officially, and why UTF-8 hasn’t taken over from the older code page character sets like it’s doing on UNIXy systems. It’s completely pointless. The scripting language that comes with TSO (Time Sharing Option) (which is not a UNIX shell) has no resemblance to UNIX shell scripting languages. Batch jobs are run using something called JCL (Job Control Language - of which the less said the better). If you’re going to go to the trouble of converting that mess, you might as well go all the way to a real UNIXy system and frankly, a large proportion of companies running zOS would like to, but the cost of conversion is way too high.

Back on Language Environment. Since it’s a user-land facility, you have to deal with a unix-style stack, which means that calls to and from LE are most likely going to have to do the same kind of stack shifting you see with CGO. It’s also not compatible with the POSIX environment.

I’ve never dealt with the POSIX environment; it may have solutions for some of the problems. If it doesn’t, I’d suggest going straight to the SVC call interface to the operating system.

Turning GO into a reasonable tool for existing zOS shops would be an interesting experience, for some value of interesting.

John Roth

Lucio De Re

Sep 11, 2016, 4:01:18 AM
to john...@gmail.com, golang-dev
Nice summary, John.

I'd like to add a teeny item of information that I have never seen
highlighted. Back in 387 days, BCD calculations were part of the FPU
instruction set. In fact, one version of Turbo Pascal allowed one to
pick which format floating-point computations would take place in.

Of course, that is still there. Whether there's any scope outside of
COBOL to use the BCD FPU capabilities of x86 chips is debatable. I
presume that neither MIPS nor ARM provides for it, in any event. PPC,
maybe?

Lucio.

Bill O'Farrell

unread,
Sep 12, 2016, 4:00:52 PM9/12/16
to golang-dev, john...@gmail.com
A couple of clarifications. I don't think anybody thinks that Go will be supplanting COBOL on z/OS. Rather, it will be interacting with existing "systems of record." There are important applications written in Go (e.g. HyperLedger and Docker) that would require a robust, complete Go implementation on z/OS. In that sense Go will certainly not be a toy. With CGO it could interface with other major applications (DB2, CICS) and things written in HLASM.

Note that UTF-8 isn't a problem on z/OS. Files can be tagged as UTF-8 and can be converted as necessary.

john...@gmail.com

unread,
Sep 12, 2016, 6:40:10 PM9/12/16
to golang-dev, john...@gmail.com


On Monday, September 12, 2016 at 2:00:52 PM UTC-6, Bill O'Farrell wrote:
A couple of clarifications. I don't think anybody thinks that Go will be supplanting COBOL on z/OS. Rather, it will be interacting with existing "systems of record." There are important applications written in Go (e.g. HyperLedger and Docker) that would require a robust, complete Go implementation on z/OS. In that sense Go will certainly not be a toy. With CGO it could interface with other major applications (DB2, CICS) and things written in HLASM.

Note that UTF-8 isn't a problem on z/OS. Files can be tagged as UTF-8 and can be converted as necessary.


I'm pretty sure IBM has a set of data conversion utilities - at least, they certainly used to. Feed them with a data description and they'll convert just about anything to just about anything else. Whether using them is less painful than doing it yourself is a significantly different question, to which there isn't a clear, one-size-fits-all answer. I'm definitely in the do-it-yourself camp; there are cogent arguments on the other side, of course.

This begins to make sense. What you seem to be saying is that there's this neat system with oodles of industry bigwigs signed on that runs under Linux, and you think that porting it to z/OS would be a good idea. Since it's written in Go, that's the first thing to port. Docker, of course, is an operating system utility, not an application framework. HyperLedger is an interesting concept if the financial industry and the financial regulators actually buy into it.

Being old enough that I've earned the right to be a bit cynical, I can say that I've seen these industry consortia pursuing the next big thing come and go without actually changing anything. All that the list of Big Names signed up to HyperLedger means to me is that a lot of companies are willing to devote some resources to it to make sure that they don't get left out in the cold if it takes off. Of course, sometimes one of these ideas does manage to catch fire and change some small part of the world, and I'm not going to indulge in prophecy.

I'd think that CGO is a requirement; you're not going to interface with Language Environment without being able to shift into an environment that supports C and C++, not to mention COBOL and PL/1. You can, of course, build the run-time library using the OS interfaces without needing to do that, and you're going to have to do a lot of that anyway: Language Environment does not provide I/O services to the languages it supports. Those are in the specific languages' run-time libraries.

John Roth

mike.gro...@gmail.com

unread,
Sep 15, 2017, 9:15:27 AM9/15/17
to golang-dev
Hi there,

Is there any news regarding porting Go to z/OS? I can't wait to use my favorite language on Z. :)

simone....@gmail.com

unread,
Feb 8, 2018, 10:22:13 AM2/8/18
to golang-dev
Still no news on the z/OS port of Go. Did IBM abandon the port?