Mainframes were … different. Today we have devices that have orders of magnitude more processing power than the old mainframes. The old mainframes were big machines that performed business tasks for large organisations and that cost millions of dollars. The phone in your pocket shows you your email, lets you Facebook or tweet your friends wherever you might be and whatever you might be doing. You can even use it to phone people!
Arguably we fritter away most of the enormous amount of processing power and storage that our handheld devices provide. We watch videos on them, yes, even porn, and play endless silly games on them. And cats. Cats have taken over the Internet and thereby our connected devices.
The fundamental concept of the mainframe is one single powerful(!) computer with all devices, such as card readers, terminals (screen and keyboard, no mouse), printers, tape units and other devices, more or less directly and permanently connected to the computer.
Interestingly, when you typed something into a terminal using the keyboard, it wasn’t sent immediately to the mainframe but was recorded in a buffer in the terminal. When one of a number of keys was hit the whole buffer was sent to the mainframe. These special keys were “Return”, which is similar to the “Enter” key, a “PF Key”, which is similar to a function key, and a few others.
This meant that you could type and edit stuff at your terminal and it would only be sent when you were finished. That is different from the model used by PCs and other modern devices, where every single key press that occurs is sent to the target computer, including typos and corrections to typos.
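The block-mode behaviour described above can be sketched in a few lines. This is a hypothetical toy, not real terminal firmware: keystrokes accumulate in a local buffer, and nothing reaches the "mainframe" until a send key such as Return or a PF key is pressed. The key names here are illustrative.

```python
# Toy sketch of block-mode terminal input: keystrokes stay in a local
# buffer until a send key (Return, PF key) flushes the whole buffer.

SEND_KEYS = {"RETURN", "PF1", "PF2"}  # keys that transmit the buffer

def block_mode_terminal(keystrokes):
    """Collect keystrokes locally; emit a transmission only on a send key."""
    buffer = []
    transmissions = []
    for key in keystrokes:
        if key in SEND_KEYS:
            transmissions.append("".join(buffer))  # whole buffer sent at once
            buffer = []
        elif key == "BACKSPACE":
            if buffer:
                buffer.pop()  # edits happen locally; the host never sees them
        else:
            buffer.append(key)
    return transmissions

# The host sees only the corrected text, not the typo or its correction:
print(block_mode_terminal(list("LOGOM") + ["BACKSPACE", "N", "RETURN"]))
# → ['LOGON']
```

Note that the typo and its correction never leave the terminal, which is exactly the contrast with the PC model where every keystroke is sent.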
Of course, when you press a key on your PC keyboard, the computer that is the target is the one that the keyboard is connected to, and what you type in goes into a buffer, but the principle still applies. The effect is more obvious if a lot of people are connected to a multi-user computer and are using it heavily, when the response to hitting a key takes a second or so to echo back to the screen.
Multi-user computers are not common these days as they are not trivial to set up, and computers and networks have become so fast that it is generally easier to change the model and access applications over the network rather than use the direct connect model that was used in the early days.
A lot of the features of modern computing devices originated in mainframes. Mainframes originally ran one job or task at a time, but soon they became powerful enough to run many jobs at the same time. Mainframe operating systems were soon written to take advantage of this ability, but to achieve the ability to run multiple jobs, the operating systems had to be able to “park” a running job while another job got a slice of the processor.
To do that the operating system had to save the state of the process, especially the memory usage. This was cleverly achieved by virtualising memory usage – the job or task would think that it was accessing this bit of memory but the memory manager would make it use that bit of memory instead. The job or task didn’t know.
For instance the job or task might try to read memory location #ff00b0d0 (don’t worry about what this means) and the memory manager would serve up #ffccb0d0 instead. Then a moment or so later another job or task might try to read that memory location. It would expect to find its own data there, not the first task’s data, and the memory manager would this time serve up, say, #ffbbb0d0.
The key point is that the two tasks or jobs access the same address, but the address is not real, it is what is known as a virtual address, and the memory manager directs the request to different real addresses. This allows all sorts of cunning wheezes – a job or task can address more memory than the machine has installed and memory locations that have not been used in a while can be copied out to disk storage allowing the tasks in the machine to collectively use more memory than physically exists!
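The remapping described above can be sketched with a toy page table. This is a deliberately simplified, hypothetical model (real memory managers translate in hardware, handle page faults, and so on): each task gets its own table mapping virtual pages to physical pages, so the same virtual address resolves to different real memory for each task. The page numbers below are chosen to match the example addresses in the text.

```python
# Minimal sketch of per-task virtual-to-physical address translation.
# Each task has its own page table, so identical virtual addresses
# land in different real memory.

PAGE_SIZE = 0x1000  # 4 KiB pages, a common choice

# One page table per task: virtual page number -> physical page number.
page_tables = {
    "task_a": {0xFF00B: 0xFFCCB},  # task A's virtual page maps here
    "task_b": {0xFF00B: 0xFFBBB},  # task B's identical page maps elsewhere
}

def translate(task, virtual_addr):
    """Split the address into page number and offset, then remap the page."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    physical_page = page_tables[task][page]
    return physical_page * PAGE_SIZE + offset

# Both tasks ask for virtual address 0xFF00B0D0 and get different real memory:
print(hex(translate("task_a", 0xFF00B0D0)))  # → 0xffccb0d0
print(hex(translate("task_b", 0xFF00B0D0)))  # → 0xffbbb0d0
```

Swapping a page out to disk is then just a matter of marking its page-table entry absent and reloading it on demand, which is how tasks can collectively use more memory than physically exists.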
Exactly the same thing happens in your phone, your tablet, or your PC. Many tasks are running at the same time, using memory and processors as if these resources were dedicated to all the tasks. (Actually not all tasks are running at the same time – only as many tasks as there are processors in the processor chip can be running at the same time, but the processors are switched between tasks so fast it appears as if they are. The same is also true of mainframes.)
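The "parking" of a task so another can get a slice of the processor can be sketched as a toy round-robin scheduler. This is illustrative only (real operating systems use priorities, preemption timers, and far more state): here each task holds the single processor for one slice, then rejoins the back of the queue until its work is done.

```python
# Toy round-robin scheduler: more tasks than processors (here, one),
# each task getting short slices of CPU in turn, so all appear to run
# concurrently.

from collections import deque

def round_robin(tasks, slice_units=1):
    """tasks: dict of name -> units of work remaining. Returns run order."""
    queue = deque(tasks.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        timeline.append(name)                # this task gets the processor
        remaining -= slice_units             # ...for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # parked, to be resumed later
    return timeline

print(round_robin({"payroll": 2, "email": 1, "report": 2}))
# → ['payroll', 'email', 'report', 'payroll', 'report']
```

Switch between slices fast enough and, to the users, all three tasks appear to run simultaneously on a single processor.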
Of course, there’s a downside to the mainframe model, and that is that if the mainframe goes down, everyone is affected. In the early days of the PC era, every PC was independent, and if it went down (which they often did) only the one or two people who actually used that computer were directly affected. So if the Payroll computer crashed it didn’t affect Human Resources.
Soon, though, it became possible to connect all the computers over a network, and computing once again became centralised. Things have changed, but the corporate server or servers now fulfil the role that once belonged to the mainframe.
All the advantages of centralisation have been realised again. Technical facets of the operation of computers have been removed from those whose job was not primarily computing, much to the relief of most of them, I’d suspect. Backups and technical updates are performed by those whose expertise is in those fields, rather than by reluctant amateurs.
However, the downside is that a centralised computing facility is never as flexible as the end users would like it to be and, somewhat ironically, that just as an outage of the old mainframe used to affect many people, so will an outage of a server or servers in the current milieu.