Mainframes Are Having a Moment (ieee.org)
57 points by jbkavungal on April 18, 2020 | 64 comments


I used to work at an environmental law firm in downtown Manhattan where accounting and other core financial systems ran on a VAX OpenVMS mainframe, while document creation and storage lived on PCs over a Token Ring, Banyan VINES network. The DOD was using VINES at the time. I have memories of playing DOOM in 1994, downloaded from an FTP server somewhere, while working the night shift making backups on huge tape cartridges.

The VAX system was pretty reliable and easy to deal with, IIRC. A good thing from a hardware/help desk angle was that you didn't really need to troubleshoot terminals: they broke (keys or monitor) and got replaced. The mainframe sat in the climate-controlled room, all in one place for any hardware servicing. I think Token Ring was primarily replaced because cheap twisted-pair wiring made Ethernet less expensive; Token Ring had these bulky twist-on connectors.

In a way we almost went back there with Chromebooks, which are basically pared-down PCs served by a virtual mainframe, the cloud (the "give me the cloud" scene in the movie "Spenser Confidential" is hilarious).


VAXes were minicomputers, not mainframes.


https://en.wikipedia.org/wiki/VAX_9000 "The VAX 9000, code named Aridus or Aquarius, was a family of supercomputer and mainframe computers developed and manufactured by Digital Equipment Corporation (DEC) using processors implementing the VAX instruction set architecture (ISA)."


I'd never heard of that model. I suspect it wasn't very successful, since DEC started moving to the Alpha architecture not many years later.

I worked with a lot of VAXen back in the day but I never heard anyone refer to them as "mainframes".


Aye, not common, and it will come as a surprise to many: mention VAX and it's the DEC VAX workstations and, as you say, midrange setups that spring to mind. I did some C work on them in the late '80s and was very impressed, and some profiling tool was a whole new experience of joy.

Will add: you could win a lot of proposition bets that DEC made a VAX supercomputer.


The VAX 9000 was quickly superseded by the NVAX processor. It was Ken Olsen's final failure at DEC: expending tremendous resources on an ECL mainframe only to have a single-chip equivalent by the time it finally shipped.


You're correct, although they did frequently have some of the hallmarks of "mainframes" like raised floors/cooling.

It's amusing to look up "mainframe" on Wikipedia and see what appear to be compact server racks. Shouldn't a mainframe take up a room, or even a whole floor?


I lived through that era and in all that time still managed to never encounter Banyan VINES; it was always that network driver people knew only as "what the heck is that crazy thing?" To which somebody would say, "Oh, some banks use it."

Though encountering Token Ring at a company in early 2000, that was... interesting. And I dare say that somewhere out there a Token Ring network still exists and is running even as I type. Legacy sure does have some strange corners in many walks of life.


Datapoint's ARCnet (token bus) similarly had to make way for Ethernet [0], despite initially being less expensive. It's an interesting networking technology still used in industrial fieldbus applications (wind turbines and robots, for example), its deterministic nature making it well suited to realtime requirements. Flexible wiring topologies and good signal resilience, too.

It was used as the basis for Datapoint's minicomputer clustering technology (also called ARCNET, programmed using DATABUS), and later for VAXcluster [1], as I just found out.

[0] https://en.wikipedia.org/wiki/ARCNET

[1] https://en.wikipedia.org/wiki/History_of_computer_clusters


VMS's command line is... verbose. It has nice completion and is well documented, though. And you can see where the '/' style of parameters came from.

e.g.

    DIR/SINCE="1-FEB-1994 17:00"



Mainframes have been having a moment. Or, perhaps more accurately, certain enterprises are finally waking up to the fact that firing all those mainframers wasn't the best idea for the long term. The current problem isn't a lack of interest or of available training; both would be there if it weren't for the fact that enterprises are more interested in having their code written by consulting companies in India than by domestic employees.


It's not only layoffs, it's also retirements, and the pressure is mounting with the baby boomers soon gone. I was once in a discussion about taking over the maintenance of a system, and the picture was grim. The crew running the system was so thin it was hard to imagine them having the time to hand it over. Besides, they barely understood all the details themselves, as the system had been built by people 30 years ago who were all long gone. A big bowl of spaghetti in a core part of their enterprise. We decided not to take it on; too much risk.


I was at a talk about mainframe security recently where the state of play at one company was: there were three users in the admin group for the mainframe, two had retired, and the third was past retirement age...

When you combine retirements with decades of under-documented code doing critical functions, it's not a recipe for good long term success...


I dunno; rewriting it in some shit tier EC2 FunNewLang framework that changes every year doesn't sound like a recipe for long term success either.

When I look at how mainframes work, and how "clouds" work, I wonder at how imbecilic IBM management (or one of their competitors) must have been to not capture that value in the first place.


There is a middle ground between those two extremes of course.

I'm not surprised at the mainframe's lack of success at all, the barriers to entry are extreme, meaning there's an inevitable lack of activity.

I've worked in environments which had mainframes deployed and even as an employee it was inordinately difficult to get any access to them.

Combine that with IBM's "enterprise" sales process and it's not a mystery that they lost the fight to attract newer systems...



IMO the hard part isn't learning COBOL -- that's just another programming language. It's learning the whole operating environment that comes with it -- mainframes. Stuff like JCL, EBCDIC, PDS. I did this for a year or so back in 1999. It felt very ancient back then. If you invest your time in gaining expertise with these technologies, you are not doing your career any favors.
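To give a flavor of how foreign the environment is, even the character encoding differs. A minimal Java sketch (Java and all names here are my own choice, purely for illustration) of what EBCDIC data looks like from the non-mainframe side:

    import java.nio.charset.Charset;

    public class EbcdicDemo {
        public static void main(String[] args) {
            // IBM mainframes encode text as EBCDIC, not ASCII; the JDK ships
            // the common code page under the name "Cp1047".
            Charset ebcdic = Charset.forName("Cp1047");

            byte[] onTheMainframe = "HELLO".getBytes(ebcdic);
            // In EBCDIC, 'H' is 0xC8 -- nowhere near ASCII's 0x48.
            System.out.printf("first byte: 0x%02X%n", onTheMainframe[0] & 0xFF);

            // Decoding with the right charset round-trips cleanly...
            System.out.println(new String(onTheMainframe, ebcdic));
            // ...while decoding with the platform default produces mojibake.
            System.out.println(new String(onTheMainframe));
        }
    }

Every file transfer, sort order, and comparison has to account for this, which is exactly the kind of thing JCL jobs and mainframe utilities deal with constantly.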


Depends on your point of reference. If you buy into the ecology you could build a sane career where experience actually meant something instead of being level set every two years by the new shiny tech stack that's going to save the industry.


I think this is just an IBM PR piece. It ticks all the boxes: quotes from IBM customers, quotes from IBM staff, mentions the "Master the Mainframe" contest. You can tell because when it says "mainframe" it doesn't mean "any large computer", it means "Something descended from the IBM 360".


Yes, because that's what the word "mainframe" means today, simply because all other machine architectures that were considered mainframes are now extinct.

I'm not saying it's not IBM PR, though.


Unisys's ClearPath series are considered to be mainframes (or so says Wikipedia). The most recent ClearPath Dorado model was introduced in 2015.

As for the article: This may be what Paul Graham called a "submarine".


Unisys represents two different mainframe families - those descending from Univac/Sperry 1100 and those descending from the Burroughs mainframes. Both are purely emulated on x64 these days. Personally I think the latter are especially interesting, much more so than IBM!

IBM is really going with whatever positive publicity it can muster, trying to obscure the profoundly awful age discrimination it's guilty of. It's brutally ironic that the people preaching "young people need to do mainframe" are the same ones aggressively ditching their own senior, experienced, expensive mainframe people.


There are more surviving mainframe families than just IBM and Unisys:

- Groupe Bull still supports GCOS mainframes (descended from General Electric GECOS)

- NEC still supports ACOS mainframes (also descended from General Electric GECOS, but a different fork from Bull's) – only sold/used in Japan now, although in past decades NEC did attempt to sell them outside of Japan (not sure if anyone bought them though)

- Fujitsu still supports ICL VME mainframes (still used by some UK government agencies) and BS2000/OSD (still used in Germany mainly)

- I think some Fujitsu MSP and Hitachi VOS3 systems survive in Japan. (These are less than 100% compatible forks of IBM MVS.) They also used to exist in some other countries–e.g. here in Australia, although I doubt any are left here.


I notice that you repeatedly say "still supports". Do any of them still manufacture and sell those lines?


Fujitsu announced new GS21 mainframe models in 2018 [1]. (GS21 is partially IBM compatible at hardware and software level, mainly survives in Japan.)

And new BS2000 mainframe models in October 2019 [2]. (BS2000 is partially IBM compatible at instruction set level, but the OS is incompatible with any IBM OS; low end models are actually x86 servers running Linux with software emulation of the 390 instructions, while high end models have S/390-compatible CPUs.)

Atos (who own Groupe Bull) uploaded a brochure to their website in October last year [3] advertising their GCOS7/GCOS8 mainframes. The mainframes are actually Intel Xeon Windows servers running the legacy mainframe architecture under a software emulator. However, while in principle the software emulator could run on any Windows server, they only license it to run on their own hardware.

[1] https://www.fujitsu.com/global/about/resources/news/press-re...

[2] https://www.fujitsu.com/emeia/about/resources/news/press-rel...

[3] https://atos.net/wp-content/uploads/2019/10/BullSequana_M_BR...


It doesn't sound like any of that amounts to a still living architecture like the Z series with its own silicon still under active development. I wouldn't consider mere software emulation or an IBM compatible offshoot to represent an extant independent mainframe architecture.


The surge in demand is being driven almost entirely by state unemployment systems. Most of these systems haven't seen a surge in unemployment claims since the 2009 recession and haven't had to change the underlying rules governing eligibility in even longer. Once the systems are patched and/or the surge in new claims subsides, 90% of this sudden demand for "Cobalt programmers" will evaporate. It's one thing for retired mainframe programmers to come out of retirement and work as consultants for a few months, it's quite another to start changing CS curricula nationwide along with launching mainframe bootcamps for career changers. I'd urge caution.


And at such low pay, too:

> As recently as this week, jobs boards such as Indeed and Dice.com listed hundreds or in some cases thousands of openings for mainframe positions at all levels. Advertised pay ranges from $30 to $35 an hour for a junior mainframe developer to well over $150,000 a year for a mainframe database administration manager.

If you are talented and patient enough to do software archeology which will affect millions of people, I think you can make (a lot) more elsewhere.


Whenever you see "urgent need" in headlines it's too late to profit from this crisis.

If anything, learn COBOL in about 10 years, in time for the next crisis.


There were pre-crisis efforts to address this. NJ, for example, awarded a contract to Electronic Data Systems (now HP), which fell by the wayside [1]. Given the complexities of total rewrites compared to patchwork, I'd bet there will still be demand in the future.

[1] https://www.nj.com/politics/2014/11/nj_ends_118_million_cont...


Maybe the 2038 problem?


To be fair, Linux kernel devs have been working hard on the problem for years, with progress landing in every new kernel version. The work required upstream in the kernel is almost complete [0]. Now the ball moves to libc, and progress there looks promising.

Indeed, what happens to already-deployed legacy systems and programs is another story.

[0] https://lwn.net/Articles/776435/
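The rollover itself is easy to demonstrate from any language with 64-bit time handling; here is a minimal Java sketch (illustrative only) of what a signed 32-bit time_t runs into:

    import java.time.Instant;

    public class Y2038Demo {
        public static void main(String[] args) {
            // A signed 32-bit time_t tops out at 2^31 - 1 seconds after the epoch.
            int lastSecond = Integer.MAX_VALUE;
            System.out.println(Instant.ofEpochSecond(lastSecond));
            // Prints 2038-01-19T03:14:07Z. Adding one in 32-bit int math wraps
            // negative, just like an overflowing time_t, landing back in 1901:
            System.out.println(Instant.ofEpochSecond(lastSecond + 1));
        }
    }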


> The surge in demand is being driven almost entirely by state unemployment systems. Most of these systems haven't seen a surge in unemployment claims since the 2009 recession and haven't had to change the underlying rules governing eligibility in even longer. Once the systems are patched and/or the surge in new claims subsides, 90% of this sudden demand for "Cobalt programmers" will evaporate.

I generally agree but I also believe that this whole exercise might be an impetus for some states to migrate these services to more modern languages to prevent this type of crisis in the future.


If you had to choose an environment or programming language today that you expect to survive for 30+ years, in terms of talent pool and tech availability, what would you choose?


Java, with a lot of Apache Commons libraries and Postgres databases. They've been around 25 and 34 years respectively, and will probably survive about as long into the future, as long as Java stays sensible and slows the rate at which it spawns new features, instead of ending up like C++.


Both C++ and Java are pretty obvious choices, aren't they? Both probably have a billion or more lines of code invested in them and are the backbone of a huge number of very successful companies.


> C++

Really? :(


1. Formally defined semantics

2. Simple (LR1) grammar

3. Small, as a language

4. Static types (static need not be inflexible. Scala is static but very flexible. Perhaps overly so but whatever)

5. No undefined behaviour

6. All dumb holes such as buffer overflow always checked for

7. Good supporting tools (vague I agree but necessary)

8. Suitable, well-designed library

9. Not, repeat not, designed by the government, which would make an utter mess of it all.

10. And pigs will soar


C + Linux. That seems to be the most promising tech with regards to longevity that exists today.


> C + Linux. That seems to be the most promising tech with regards to longevity that exists today.

It really depends on the application. C & Linux will literally be around forever, but C isn't the best language for a line-of-business application like an unemployment system. Java would be a better fit for something like that.


Just curious, what's your reasoning for thinking C wouldn't be a good fit? I'd think either would be fine choices given their ubiquity. (Not a C or Java programmer, just like to learn!)


C, compared to Java, is a poor fit for a multitude of reasons, for example (see the sketch after this list):

- undefined behaviour

- non-portable binaries

- lack of rich and powerful standard library

- manual memory management requires developers of higher skill and discipline
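A minimal Java sketch of the first bullet (the example and names are mine): where C leaves out-of-bounds access undefined, the JVM pins down exactly what happens.

    public class BoundsDemo {
        public static void main(String[] args) {
            int[] balances = new int[4];
            try {
                // In C, reading past the end of an array is undefined
                // behaviour: anything may happen, often silently and only on
                // some inputs. Java specifies the outcome -- an exception,
                // every time, on every platform.
                System.out.println(balances[4]);
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("caught: " + e);
            }
        }
    }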


Is Cobalt an obscure dialect of Cobol?


I suspect that's tongue in cheek humor.


If you're out of the loop: a state governor (I believe New Jersey's) called COBOL "Cobalt".


COBOL is living proof of the power of incumbency.

Once technology is entrenched, it becomes self-sufficient and very, very difficult to replace. I remember when mainframe/COBOL apps were being re-written for the client/server era, often using tools like Tuxedo/C++ for the backend and Visual Basic for the front-end. Where are those re-writes now? (Probably themselves replaced a couple of times since, while the COBOL holdouts are still in place.)

I think the days of higher-paying COBOL gigs are still ahead of us. When the greybeards actually go away, there will be a strong need to fill.


COBOL might just be software properly amortized.


Mainframes - the original cloud computers.

I started on mini-mainframes but then worked mostly with networks of PCs. The old mainframe guys used to say, "Just you wait, the pendulum will swing back to centralized computing." In the age of cloud computing they were right, and those mainframes are still there too.



Unemployment applications are fairly small structured text records requiring rather simple logic to process. I believe a single server built on the most powerful x86 CPU available today, running an MS SQL Server instance (or perhaps Postgres, though I don't have experience with it), is more than enough to process all the unemployment applications on the whole freakin' planet. Why do they insist on using an ancient mainframe? That sounds fun, but bizarrely irrational.
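Rough arithmetic to back that intuition (using the widely reported figure of roughly 6-7 million US initial claims in the worst weeks of April 2020): even 7 million records a week averages out to about a dozen inserts per second, and only a few hundred per second at a generous burst factor, which is a light load for a single modern database server. The hard part is the eligibility rules encoded in the legacy code, not the throughput.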


Is the inertia of rewriting these systems in a modern stack so great that it's cheaper to keep finding COBOL specialists forever? Why not just sit down and rewrite these systems?


In many cases (e.g. banks) these systems have been running for decades, they're often badly documented and have had a lot of fixes applied for issues that arise.

Re-writing that is going to be a nightmare, as you've got no spec to work from, and mistakes may not show up immediately (think monthly or annual payment processes).

It's a very high-risk endeavour with little immediate reward for the team doing it. If it goes perfectly, no one notices; if it goes badly, it could seriously impact your whole company.


The same logic applies to climate change issues.

There is little immediate reward so let this dumpster fire explode on our kids...

It doesn't necessarily mean that it's a good logic. We desperately need more long term thinking.


Counterpoint: a system that runs for 20, 30, 40 years without any serious downtime is the definition of long term thinking.


Except, of course, where the system is ossified and you can no longer make changes for fear of affecting that lack of downtime.

In the organizations I've seen with mainframes, what happened was that a panoply of supporting systems sprang up wherever new requirements appeared, because everyone was afraid of making significant changes to the mainframe environment.


It has been done many times, sometimes successfully. Other times it ends up costing hundreds of millions over budget before the effort is abandoned and there is nothing to show for it.


As late as 2014, at least, the US federal government was still processing its retirement claims on paper, in a decently large and rather nice-looking cave in Pennsylvania: https://www.dailymail.co.uk/news/article-2587305/Crazy-Cave-...


COBOL is unstructured. It doesn't even have functions. When you've got some million+ line tangled mass that has all kinds of undocumented behavior, it's really better just to add more duct tape and punt it off to the poor bastard who actually has to fix it in 20 years.


COBOL for IBM mainframe running z/OS does have subroutines. (I don't know if you're making a distinction between procedures and functions.)


COBOL has supported structured programming since 1985, and object orientation since 2002, with the latest ISO update done in 2014.

Besides compiling to native code, there are compilers that target JVM and CLR as well.


> there are compilers that target JVM and CLR

So could you wrap it and slowly add new features and replace pieces with Java code over time, using JVM for interop?


Yes, it is possible, for example check Visual Cobol from Micro Focus.

https://www.microfocus.com/en-us/products/visual-cobol/overv...
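A rough sketch of what that incremental path can look like on the JVM, assuming the COBOL-to-JVM compiler exposes each legacy program as an ordinary class (all names below are hypothetical; the "legacy" class is hand-written Java standing in for compiler output):

    // The seam: new code depends on an interface, not on COBOL directly.
    interface ClaimValidator {
        boolean isEligible(String claimId);
    }

    // Stand-in for the class a COBOL-to-JVM compiler would emit from the
    // legacy program.
    class LegacyCobolValidator implements ClaimValidator {
        public boolean isEligible(String claimId) {
            return claimId != null && claimId.startsWith("NJ"); // placeholder rules
        }
    }

    // One piece rewritten in plain Java behind the same interface.
    class RewrittenValidator implements ClaimValidator {
        public boolean isEligible(String claimId) {
            return claimId != null && claimId.startsWith("NJ");
        }
    }

    public class StranglerDemo {
        public static void main(String[] args) {
            ClaimValidator v = new LegacyCobolValidator(); // swap per piece, over time
            System.out.println(v.isEligible("NJ-2020-0001"));
        }
    }

The value is the seam: each COBOL piece can be retired independently once its Java replacement demonstrably behaves the same.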


True, but no one is using OO Cobol, since it's all ancient legacy stuff.



