In the dawn of computing, computers were essentially rooms full of racks and racks of circuits connected by mazes of cables. The circuits were built from electronic valves, relays, solenoids and other electronic and magnetic components, with not a single transistor to be seen, as the transistor had not yet been invented.
To reprogram such computers one often needed a soldering iron and an intimate knowledge of every part of the computer and how those parts interacted. By all accounts such machines were fickle, sometimes working, sometimes not.
Since they were not housed in sterile environments or encased in a metal or plastic shell, foreign bodies could and did find their way into them and cause them to fail. Hence the concept of the computer bug. Computer pioneer Grace Hopper reported a real bug (actually a moth) found in a computer, and it made a great joke, but the context of the report shows that the term was already in use.
As we know, computer technology rapidly improved: computers shrank, became more reliable, and bugs mostly retreated into the software. I don't know what the architecture of the early room-fillers was, but the architecture of most computers these days, even tablets and phones, is based on a single design, the von Neumann architecture.
This architecture is built around buses, and there is often only one. A bus is like a data highway: data is placed on this highway and read off it by the various other circuits in the computer, such as the CPU (of which more later). To ensure that data is placed on the bus only when it is safe to do so, every circuit in the computer references a single system clock.
The bus acts much like the pass in a restaurant: orders go through it one way and meals come back the other. Unlike the restaurant's pass, however, there is no clear distinction between orders and data, and the bus doesn't have two sides corresponding to the kitchen and the front of house.
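The single-bus idea can be sketched in a few lines of Python. This is purely my own toy illustration, not a model of any real hardware: only one value sits on the bus at a time, and transfers happen on clock ticks so that only one device drives the bus per cycle.

```python
class Bus:
    """One shared pathway: only one value can be on the bus at a time."""
    def __init__(self):
        self.value = None

    def write(self, value):
        self.value = value       # a device drives the bus

    def read(self):
        return self.value        # another device samples it

def clocked_transfer(bus, source, sink, cycles):
    """Each clock tick, one device writes and one reads. Instructions and
    data travel the same wires -- there is no distinction on the bus."""
    log = []
    for tick in range(cycles):
        bus.write(source[tick])  # only one writer per tick keeps it safe
        log.append(sink(bus.read()))
    return log

bus = Bus()
traffic = ["LOAD", 7, "ADD", 3]  # "orders" and data interleave freely
received = clocked_transfer(bus, traffic, lambda v: v, len(traffic))
```

Note how the list of traffic mixes instruction-like tokens and plain numbers: on the wire they are indistinguishable, just as in the restaurant analogy.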
Attached to the bus are the other computer components. As a minimum, there is a CPU and there is memory. The CPU is the part that performs the calculations, the data moves, and so on. It is important to realise that the CPU itself has no memory of what has been done or of what must be done in the future. Nor does it know what data is to be worked on.
All of that, both data and program, is held in the memory. Memory is mostly changeable, and can contain data and program alike; there is no distinction in memory between the two.
The CPU fetches, over the bus, whatever is to be done next. Suppose the instruction is to load data from the bus into a register, a register being a small temporary storage area inside the CPU. The CPU does this, then fetches the next instruction, which might be to load more data into another register; after that it might be told to add the two registers and place the result in a third register. Finally it is told to place the result from the third register back onto the bus.
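That load/load/add/store sequence can be sketched as a toy interpreter. The instruction names and encoding here are my own invention, not any real instruction set; the point is that program and data sit side by side in one memory, just as described above.

```python
# A minimal sketch of the fetch-execute cycle. LOAD/ADD/STORE/HALT are
# invented mnemonics for illustration only.

def run(memory):
    registers = [0, 0, 0, 0]
    pc = 0                                   # program counter: where to fetch next
    while True:
        op = memory[pc]                      # fetch the next instruction
        if op == "LOAD":                     # LOAD r, addr: memory -> register
            r, addr = memory[pc + 1], memory[pc + 2]
            registers[r] = memory[addr]
            pc += 3
        elif op == "ADD":                    # ADD rd, ra, rb: rd = ra + rb
            rd, ra, rb = memory[pc + 1], memory[pc + 2], memory[pc + 3]
            registers[rd] = registers[ra] + registers[rb]
            pc += 4
        elif op == "STORE":                  # STORE r, addr: register -> memory
            r, addr = memory[pc + 1], memory[pc + 2]
            memory[addr] = registers[r]
            pc += 3
        elif op == "HALT":
            return memory

# Program and data share one memory: cells 0-13 are code, 14-16 are data.
memory = ["LOAD", 0, 14, "LOAD", 1, 15, "ADD", 2, 0, 1,
          "STORE", 2, 16, "HALT", 5, 7, 0]
result = run(memory)                         # cell 16 ends up holding 5 + 7
```

Because there is no distinction between program and data, nothing stops an instruction from overwriting the code itself; real machines inherit exactly this property from the architecture.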
I was not entirely correct when I said that there was only one bus in a computer. Some chips have an interface on the main bus but interfaces on other buses too. An example is the video chip, which has to interface with both the main bus and the display unit. Another example is the keyboard interface. A computer is not much use without input and output!
The architecture that I’ve described is incorporated in almost all devices that have some “intelligence”. Your washing machine almost certainly has it, and as I said above so do your tablets and phones. Your intelligent TV probably does, and even your stove/range may do. These days we are surrounded by this technology.
The above is pretty much accurate, though I may have glossed over and elided some details. Although the technology has advanced tremendously over the years, the underlying architecture is still based around the bus concept, with a single clock synchronising operations.
Within the computer chips themselves, the clock is of prime importance, as it ensures that data is in the right place at the right time. Internally a computer chip is a bit like a train set: strings of digits flow through the chip, passing through gates which merge and split the bits of the train to perform the calculations. Every possible path through the chip has to be traversable within a single clock cycle.
Clockless chips may some day address the on-chip restrictions, though the article I cite was from 2001. I'm more interested in the off-chip restrictions, the ones that spring from the necessity to synchronise the use of the bus. This pretty much defines how computers work and limits their speed.
One possibility is to ditch the bus concept and replace it with a network concept: little bits of computing power could be distributed throughout the computer, and these could either be signalled with the data and the instructions to process it, or the same computation could be farmed out to many computational units and the results assessed, with the majority taken as the "right" answer. The instructions could be dispensed with if each computational unit only does one task.
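The majority-vote idea can be sketched as follows. Everything here, the unreliable unit, the failure model and the vote, is my own toy construction, purely to illustrate the principle:

```python
from collections import Counter
import random

def unreliable_unit(x, failure_rate, rng):
    """One tiny computational unit: squares its input, but sometimes glitches."""
    if rng.random() < failure_rate:
        return x * x + rng.choice([-1, 1])  # a corrupted answer
    return x * x

def vote(x, units=25, failure_rate=0.2, seed=0):
    """Farm the same computation out to many units and take the majority."""
    rng = random.Random(seed)               # seeded so the sketch is repeatable
    answers = [unreliable_unit(x, failure_rate, rng) for _ in range(units)]
    return Counter(answers).most_common(1)[0][0]
```

Even with one unit in five misfiring, the correct answer dominates the vote; the scheme trades raw speed for redundancy, which is roughly the bargain being proposed above.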
The computational units themselves could be ephemeral too, being formed and unformed as required. This would lead to the "program" and "computation" being distributed across the device, as well as the data. Data would be ephemeral too, fading away over time and being reinforced as necessary by reading and writing, much like early computer memory was refreshed on each cycle of the clock.
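The fading, refresh-on-read memory might be sketched like this, again entirely my own toy model: each cell's trace decays on every tick, and a successful read reinforces it back to full strength.

```python
class FadingCell:
    """A memory cell whose stored value fades unless it is re-read."""
    def __init__(self, value, strength=1.0):
        self.value, self.strength = value, strength

    def tick(self, decay=0.5):
        self.strength *= decay               # the echo fades over time

    def read(self, threshold=0.1):
        if self.strength < threshold:
            return None                      # faded beyond recovery
        self.strength = 1.0                  # reading reinforces the trace
        return self.value

cell = FadingCell(42)
cell.tick(); cell.tick()                     # two ticks: strength down to 0.25
refreshed = cell.read()                      # still readable; now reinforced
for _ in range(4):                           # four ticks with no refresh: 0.0625
    cell.tick()
faded = cell.read()                          # below threshold, so the value is lost
```

The read-to-refresh behaviour is the interesting design choice: frequently used data persists for free, while data nobody touches quietly evaporates.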
What would such a computer look like? Well, I'd imagine that it would look something like the mass of grey matter between your ears. Data would exist in the device as an echo, much like our memories do, and processing would be distributed through the device, much like our brains seem to work. Like the brain, such a computing device would likely be grown, and some structures would likely be mostly dedicated to certain tasks, as in the brain.
One big advantage that I see for such “devices” is that it should be very easy to interface them to the brain, as they would work on similar principles. It does mean though that we would be unlikely to be able to download one of these devices to a conventional computer, just as the contents of a brain could never be downloaded to a conventional computer.
On the other hand, the contents of a brain could conceivably be downloaded to a device like the one I have tried to describe.