Democracy – maybe we should try it sometime

We in the rest of the world are watching the run-up to the Presidential election in the USA in November. It has now been decided who the two main contenders will be: Hillary Clinton for the Democrats and Donald Trump for the Republicans. In the USA there are no other significant parties, so it is highly likely that the next President of the United States will be one of these two people.

An extraordinary fact is that many US citizens dislike both candidates, with one Republican commentator saying that people might be choosing the lesser of two evils. Trump is seen as brash and unversed in politics, and Clinton is seen as untrustworthy.

US President Bill Clinton, First Lady Hillary Rodham Clinton and their daughter Chelsea on parade down Pennsylvania Avenue on Inauguration Day, January 20, 1997. (Photo credit: Wikipedia)

So, how did the US voter get left with a choice between two unpopular candidates? The US has a candidate selection process which is complex and unwieldy. A special subset of voters votes on the candidates who present themselves, and the sequential nature of the selection process turns it into a horse race, with candidates vying to “collect” the delegates in each state and reach a threshold beyond which they cannot be beaten by any other candidate.

Each state selects the candidates using a different method, and different states have different numbers of “delegates”. Of course, a mining state may, and often does, prefer a different candidate from the one preferred by a farming state. Commentators try to outguess each other in predicting the results, state by state.
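The delegate arithmetic described above can be sketched in a few lines of Python. The state names, delegate awards and candidates here are all invented for illustration; the only real figure is the threshold, 2,383, which was the 2016 Democratic majority:

```python
# Toy sketch of the delegate "horse race": each state awards delegates,
# and a candidate clinches once their total passes the majority threshold.
# State names and delegate numbers below are invented.

DELEGATES_NEEDED = 2383  # the 2016 Democratic majority threshold

state_results = {
    "Iowa":          {"A": 23, "B": 21},
    "New Hampshire": {"A": 9,  "B": 15},
    "Nevada":        {"A": 20, "B": 15},
}

# Accumulate each candidate's running total, state by state.
totals = {}
for awards in state_results.values():
    for candidate, delegates in awards.items():
        totals[candidate] = totals.get(candidate, 0) + delegates

for candidate, won in sorted(totals.items()):
    status = "clinched" if won >= DELEGATES_NEEDED else "still racing"
    print(candidate, won, status)
```

With only three small states counted, neither invented candidate is anywhere near the threshold, which is exactly why the real race runs for months.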

Map of electoral votes by state after redistricting from the 2000 census. (Photo credit: Wikipedia)

The wonder of the system is that, in spite of the complexity of the process, it usually throws up a candidate who has a reasonable amount of public support. This time, however, the selection process has thrown up two candidates, neither of whom appears to appeal to the electorate. The voters do indeed have to select “the lesser of two evils”.

In most democracies around the world things tend to be simpler. A candidate puts him or herself up for election, and he or she gets voted for or not as people choose. Of course, a candidate who aligns with a party needs to get the party’s approval, and cannot stand under the party banner without it.

National Awami Party (Muzaffar) banner at opposition rally, Dhaka. (Photo credit: Wikipedia)

However, the above describes the process for electing a local representative. A presidential election raises extra problems. For instance, in the US the president is always a member of a political party. In countries where the president is expected to be independent and outside party politics, issues arise in his or her selection.

If the president is elected directly by the populace, potential candidates will need to campaign countrywide when seeking election, and this will be expensive. A candidate therefore has to be very rich, be sponsored by some organisation, or be aligned with a party. The last two options work against the requirement for the president to be independent, and the first restricts the field to those who have a large amount of money, which may be unacceptable, or simply not achievable, in a poor country.

Seal of the Executive Office of the President of the United States (Photo credit: Wikipedia)

Many countries, including India and the US, have got around this by using an electoral college system, though few systems could be as convoluted as the US method of selecting a candidate to stand for election as president. Such a system uses the fact that the population has already elected individuals to government, and uses those elected individuals either to decide on the candidates for president or to select the president directly.

While this means that the most powerful political parties select and maybe elect the president, the representatives are doing what they are elected for, which is to make decisions on behalf of their voters. The elected representatives often select someone who may not be the most preferred by the grass roots electorate, but generally the selected person is not too disliked.

Sheep pasture: the sheep have eaten the grass down to the roots, and must appreciate the fodder put out for them in these wheeled feeders. (Photo credit: Wikipedia)

In the case of the latest selection process for US president, the Democrats have selected Hillary Clinton as the Democrat candidate most likely to win, and while this may be true, polls show that there is a lack of trust in her at the grass-roots level. It is unlikely that this factor will weigh too heavily with the voter come the election, though.

The Republicans have selected Donald Trump, in spite of the belief early in the process that he stood no chance. The Republicans believe that he has the best chance of prevailing over Hillary Clinton, but many people dislike his brashness. On the other hand, many people like his approach to some of the issues that are hot topics in the US, such as immigration and the threat of terrorism. The real issue is whether or not his solutions to such problems are reasonable or will be effective.

While the people get to vote for the person that they want to be president, the process seems to me not to be overly democratic. The sheer number of people in the US and in most other countries means that direct participation by every voter in selecting candidates is never going to be possible. There is always going to be a distance between the President and the populace, and this dilutes democracy. As it is, voters can only choose between a few candidates in the election; they have little say in who gets selected to stand.

How much does this dilute democracy? Hmm, good question, Cliff! It depends. Given that “representative democracy”, as in many countries, puts distance between the electorate and the elected, if people do not like the candidates very much, this could reduce voter turnout as people decide not to vote for either of them. If, however, enough people on one side hate the opposition candidate strongly enough, this may encourage them to turn out and vote. The president is likely to be elected by the votes of only a fraction of those allowed to vote.

Dilution and concentration (Photo credit: Wikipedia)

It is likely, in my opinion, that the apathy effect will override the hate effect, and voter numbers are likely to drop. If only a small number turn out to vote, does either candidate have a real mandate? Not really, I suggest.

The US form of diluted democracy means that only a favoured few get to stand for president. Up until Obama, all previous presidents, back to the early days, were rich white men. Standing in an election for president of the US costs hundreds of millions, if not billions, of dollars, and few people can afford the price.

Total public debt outstanding, United States, 1993-2011 (billions of U.S. dollars) (Photo credit: Wikipedia)

So we have a rich businessman vying to become the most powerful man in the world, making ridiculous promises such as a wall between the US and Mexico, and the wife of a previous president vying to become the first female president. While Hillary Clinton is not enormously rich, she is much richer than most of us, and she and Bill Clinton have powerful friends.

US-Mexico border barrier near Monument Road, San Diego, California, USA, looking into Tijuana, Baja California, Mexico. (Photo credit: Wikipedia)

Posted in General, Miscellaneous, Politics

Updates to software

It’s obviously a good thing for bugs to be fixed. Software should function correctly and without exposing the user to security issues, and updates that deliver fixes for these reasons are essential.

Unfortunately this sometimes, one might say often, has a negative impact on the user. The user may know of the bug and have a workaround for it, and fixing the bug may cause issues with the workaround.

Not to mention the fact that fixing one bug may result in the appearance of another or bring its existence to the notice of the user. No software can ever be considered to be completely bug free, in spite of the advanced tools which are available to test software.

When I was learning to program, back in the time of the dinosaurs, we were told to give our program and the specs to one of our fellow students to test. We called it “Idiot Testing”. The results were mind-blowing. A tester would make assumptions, putting in data that they considered valid but which you, as the program writer, had not considered; or maybe you had considered it “obvious” that certain types or values of data would not work.

Almost every time the tester would break the program somehow, which was the whole point, really. So we’d fix up our programs and give them up to be tested again. And the testers would break them again.

We were taught, and quickly learned, the advantage of sanitising the inputs to our programs. This meant taking the data as input by the tester and incorporating routines into our programs to try to ensure that the data didn’t break them.

So we’d write our routines to validate the data, and we’d return an error message to the tester. We’d think that we were providing clear reasons in the messages to the tester, but the messages could still confuse the testers.

For example, if the message said “The input must be between -1 and 1”, the tester might try putting in “A” or “1/4”. This usually happened when the purpose of the program was not clearly defined and described, not because of any denseness or sheer caprice on the part of the tester.
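A sanitising routine of the sort described above might look like this in modern Python. This is a hypothetical sketch, not anyone's actual program; only the range and the awkward inputs come from the example in the text:

```python
# Minimal input sanitising: reject anything that is not a plain decimal
# number in [-1, 1], with a message that tries to anticipate the
# tester's assumptions.

def read_value(raw: str) -> float:
    """Parse one input value, raising ValueError with a clear message."""
    try:
        value = float(raw)  # rejects "A", "1/4", "" and the like
    except ValueError:
        raise ValueError(
            f"{raw!r} is not a plain decimal number such as 0.25"
        )
    if not -1.0 <= value <= 1.0:
        raise ValueError(f"{value} is outside the range -1 to 1")
    return value

# "Idiot testing" by hand, including the two inputs from the text:
for raw in ["0.5", "A", "1/4", "2"]:
    try:
        print(raw, "->", read_value(raw))
    except ValueError as err:
        print(raw, "-> rejected:", err)
```

The point of the explicit messages is exactly the lesson from the testers: the routine has to explain not just *that* the input was rejected, but what a valid input looks like.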

Then we’d update the programs again, taking into account what we had learned from the tester’s responses, and hopefully there would be more success with the updated program.

This seems to be more of an issue with mobile software, I believe, as many programs out there are written by a single person working alone, and I know that by the time I finish a program I’m heartily sick of it, and I write programs for myself as the intended user. A person may upload a mobile app with plenty of obvious bugs and never update it. It becomes abandonware, which may lead to a general disillusionment with mobile software as buggy and never fixed.

When a developer does start to work on his or her program, and starts to fix the bugs, this takes time and effort. Meanwhile, users may keep reporting issues with the published version. The developer has a dilemma: drop the work on a particular bug to identify and fix the possible new bug, or finish working on the current bug and eventually release a new version which still contains the old one?

Once the programmer starts on a new release, adding new features and improvements over the original version, bug notification and fixing acquires a new layer of complexity, one which a single developer may find impossible to handle, so he or she might abandon the software rather than take on the complexities of bug management.

At other times teams form, or businesses take up the software, and bug management and fixing become formalised, but updates still need to be supplied to the users. From the user’s perspective, updates become more regular, and fixes may be included in them if the users are lucky.

Updates have had a bad reputation in the past. In the early days of computing, operating systems (such as Windows) could become unbootable after an upgrade if the user was unlucky. This could generally be tracked down to issues in the driver software that controlled the various attached or built-in devices on the computer.

Things are now a lot better. Drivers are written to be more robust with respect to operating system upgrades, and operating systems have become better at handling issues with hardware drivers. It is rare these days for an upgrade to render a system completely unbootable, though an upgrade can still cause issues occasionally.

Users have become used to performing upgrades to systems and software, and in some cases they do not, by default, have a choice whether or not to upgrade. In most cases they do not know exactly what upgrades have gone onto their computers, nor what fixes are included in them.

Software updates are often seen by users as a necessary evil. There are good reasons for updates, though: they may close security loopholes in software, or they may enhance its functionality. Just don’t expect an early fix for that annoying bug, as the developers will almost certainly have different priorities from yours. If it isn’t in this update, maybe it wasn’t serious enough to make it. Hopefully it will be in the next update, which will be along soon!

Posted in Computing, Internet, Miscellaneous



Coding is a strange process. Sometimes you start with a blank space, fill it with symbols and numbers, and eventually a program appears at the end. Other times you take your or someone else’s work and modify it, changing it, correcting it, or extending it.

The medium that you do this in can be varied. It could be as simple as a command line, a special “Integrated Development Environment” or “IDE” or it could be a fancy drag and drop within a special graphical programming application such as “Scratch“. It could even be within another application such as a spreadsheet or database program. I’ve tried all of these.


BasictoPHP - Integrated Development Environment

The thing that is common to all these programming environments is that they run inside another program. The command line version, obviously enough, requires that the command line program be running: it receives the key presses necessary to build the new program and interprets them. And the command line program itself runs inside another program.

Which itself runs in yet another program, and so on. So, is it programs all the way down? Well, no. One is tempted to say “of course not”, but it is not immediately apparent what happens “down there”.

Hawaiian Green Sea Turtle

What happens down there is that the software merges into the hardware. At the lowest software level the programs do things like read or write data values in specific bits of hardware, and move or copy data values from one place to another. One effect of a write, move or copy might be to cause the hardware to add two numbers together.

Also, an instruction may cause the hardware to select the next instruction to be executed depending on the data being processed. It may default to the next sequential instruction, or it may execute an instruction found elsewhere.

MCS650x Instruction Set

An instruction is just a number, an entity with a specific pattern within the computer. It has a location in the hardware, and is executed by being moved to another location in the hardware. The pattern is usually “binary code” or a string of ones and zeroes.

In the hardware component called a CPU, there are several locations which are used for specific purposes. Data may be found there or it may be copied there. At certain times the data will be executed or processed. Whatever the purpose of the data, it will travel as a train of zeroes and ones through the hardware, splitting, merging and being transformed by the hardware. It may also set signals and block or release other data in the CPU.
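The fetch-and-execute cycle sketched above can be mimicked with a toy machine in Python. This invented three-opcode design is vastly simpler than any real CPU such as the 6502, but it shows instructions as plain numbers sitting at locations, moved into play one at a time:

```python
# A toy fetch-decode-execute loop: an instruction is just a number at a
# location, and executing it changes data or picks the next instruction.
# Opcodes (invented): 0 = HALT, 1 = LOAD n, 2 = ADD n.

memory = [
    1, 5,    # LOAD 5 into the accumulator
    2, 3,    # ADD 3 to the accumulator
    0,       # HALT
]

acc = 0      # accumulator register
pc = 0       # program counter: location of the next instruction

while True:
    opcode = memory[pc]          # fetch the instruction (a number)
    if opcode == 0:              # HALT
        break
    operand = memory[pc + 1]
    if opcode == 1:              # LOAD n
        acc = operand
    elif opcode == 2:            # ADD n
        acc += operand
    pc += 2                      # default: next sequential instruction

print(acc)  # the accumulator now holds 8
```

Note that the program and its data live in the same memory: the machine only "knows" that 1 means LOAD because of where the program counter happens to point.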

Acorn 2MHz6502CPUA

The designers of the CPU hardware have to design this “train system” so that the correct result is achieved when an instruction is processed. Their tools are simple logic circuits which do things like merge two incoming trains of zeroes or ones or split one train into two or maybe replace all the zeroes by ones and vice versa. I think that it is fairly accurate to say that the CPU designers write a program using physical switches and wires in the hardware itself.
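Those simple logic circuits can themselves be sketched in software. Here small Python functions stand in for physical gates, and wiring them together produces a half adder, the smallest circuit that makes the hardware “add two numbers together” (a sketch for illustration, not how gates are really fabricated):

```python
# Logic gates as functions on single bits (0 or 1): merging two trains
# (AND, OR), inverting one (NOT), and composing gates into XOR.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# The full truth table:
for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
```

Chaining half adders (plus the carries) gives a circuit that adds whole binary numbers, which is exactly the kind of “train system” the designers build out of switches and wires.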

So we have reached the bottom and it is not programs, but logic gates, and there are many layers of programming above that to enable us to write “Hello World” on our monitor devices. It’s an elegant if complex system.

Of course we can’t program in logic gates to achieve the “Hello World” objective. We have many layers of programs to help us. But how do the various layers of programs work?

Hello World App

The designers of the CPU hardware program the device to perform certain actions when a special code is dropped into a special location. There are only 100 to 200 special codes that a CPU recognises, and they are patterns of zeroes and ones as described above.

Obviously it would be tedious and error-prone to code those special codes (and the associated data locations, known as addresses) directly into the computer. So small programs were written in the special codes to recognise mnemonics for the codes, and these were then used to write more complex programs which automatically create the strings of codes and addresses that make up the lowest-level code.
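That bootstrap step, a small program recognising mnemonics, can be sketched as a toy assembler. The mnemonics and code numbers below are invented for illustration and belong to no real CPU:

```python
# A toy assembler: translate mnemonics into the numeric codes the
# hardware recognises. Opcode numbers here are invented.

OPCODES = {"HALT": 0, "LOAD": 1, "ADD": 2}

def assemble(source: str) -> list:
    """Turn lines like 'LOAD 5' into a flat list of code numbers."""
    program = []
    for line in source.strip().splitlines():
        parts = line.split()
        program.append(OPCODES[parts[0]])          # the mnemonic's code
        program.extend(int(p) for p in parts[1:])  # any numeric operands
    return program

print(assemble("LOAD 5\nADD 3\nHALT"))  # [1, 5, 2, 3, 0]
```

Real assemblers add labels, symbolic addresses and error checking, but the essence is the same: a table lookup from human-readable names to machine numbers.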

This process is known as boot-strapping, as ever more complex programs are built, culminating in what are known as high level languages, where little or no knowledge of the hardware is required. When a new type of machine comes along, using a different type of hardware, it is even possible to write the programs at a high level on different hardware so that the software can be “ported” to the new system.

Lighthouse at Port Adelaide

The highest level of programs are the ones that actually do the work. These programs may be something like a browser which fetches data and displays it for the user, but a browser is created by a programmer using another program called a compiler. A compiler’s function is to create other programs for the end user.

However to write or modify a compiler you need another program, or maybe a suite of programs. Code is usually written in a human readable form called “source code”. An editor program is needed to read, modify and write the source code. A compiler is needed to change the human readable code to machine executable code and a linker is usually required to add all the bits of code together and make it executable.
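Python itself offers a glimpse of the compile step in that pipeline: the built-in compile() turns human-readable source into a code object whose bytecode is just a string of numbers. (Python has no separate linker, so this only illustrates the translation stage, not the whole edit-compile-link cycle.)

```python
# Source code -> numeric bytecode -> execution, all within Python.

import dis

source = "x = 2 + 3"
code = compile(source, "<example>", "exec")

print(list(code.co_code)[:8])   # the raw bytecode: just numbers
dis.dis(code)                   # the mnemonic view of the same numbers

namespace = {}
exec(code, namespace)
print(namespace["x"])           # 5
```

The dis output is the modern descendant of those early mnemonic tools: the same numbers, shown with names so that humans can read them.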


All these programs have their own source code, and their own compilers and linkers, and it may seem as if we have an issue, with every program requiring its own source code and so on. It seems that we have an infinite regress again. But once we have an editor, a compiler and a linker, we can compile any program we like, and we don’t need to know the details of the hardware.

And what is more, those programs (editor, compiler and linker) can be created using an existing compiler, editor and linker on a different machine and simply transferred to the new one. In some ways, every compiler, editor and linker program can trace its ancestry back to a prototype written at the dawn of the computer age.

IMGP1181 Colossus

Posted in Computing, General, Maths, Miscellaneous

Religious matters

Christadelphian Meeting Room, Napton: this Christadelphian chapel stands on the corner of Howcombe Lane in Napton. (Photo credit: Wikipedia)

Seen on the signboard of a Christadelphian Church : “Seminar: Brexit and Bible Prophecy”. What?? Anyway, that started me thinking about religion again.

In the days that religion was developing as a means of understanding the world, when natural occurrences like storms and earthquakes were hypothesised to be caused by supernatural agencies, such as spirits and gods, the details didn’t matter too much to people.

Cains Folly Landslide: very active landslide, Greensand sitting on Lias. (Photo credit: Wikipedia)

If your neighbour believed an evil spirit caused a landslide, it didn’t matter too much if he thought that the spirit was male, while you categorised it as female, and your other neighbour didn’t assign the spirit a gender at all.

Eventually problems arose with this approach. When Johnny arrived home with a bloody nose because he had insisted that the spirit was female and Nigel next door had been told that it was male, issues arose. Nigel always was a bit of a bully, as was his dad.

Bloody nose (Photo credit: Wikipedia)

The tribe as a whole would, over time, discuss the matter and come up with a consensus. The landslide djinn had to be female as it didn’t actually try to kill anyone, but made work for the men, who had to clear the slide from the track.

As time passed, the original idea of the evil spirit would become embedded in a mythos, or body of myths, as the spirit’s role and actions were extended: firstly by grandparents telling kids scary stories that kept them awake at night, then embedded into the structure of the society as the adults, more or less jokingly at first, tried to appease the wrathful spirits.

Dance of the Lord of Death, Paro (Photo credit: Wikipedia)

Eventually people started taking the stories seriously. A whole structure of myths and stories got inflated into a cosmology and a rationale for the way things were. Johnny’s and Nigel’s descendants took all the stories and hypotheses and treated them as if that was the way things were, and to some extent they were correct.

Except that the daemon that started the rock slide was called gravity, and it was not an active being with human characteristics but a force of nature, impassive and impartial.


Lightning. (Photo credit: Wikipedia)

Having experienced the scientific revolution, most societies on Earth these days recognise that earthquakes and landslides are caused not by malevolent supernatural beings but by the forces of nature, though this has to be taught to kids.

As they grow up they believe in fairies and Father Christmas, but they soon learn to distinguish fantasy from fact. They may well profess belief in these beings for some time, for the benefit of adults and the possibility of presents and money, but their belief is ambivalent. Eventually the belief is feigned, and everyone knows it. It becomes a game.

Oberon, Titania and Puck with Fairies Dancing (Photo credit: Wikipedia)

Without any knowledge of science, our ancestors did the best that they could, making their best guesses as to the causes of phenomena using the tools that they had at the time: myths and stories, based around beings of unlimited power and dominion.

With the advent of writing, these myths and stories could be written down. The writings did not change, so the views of people were now tied to these fixed stories. A class of people arose who existed for the single purpose of understanding the writings and even interceding with the supernatural beings.

Illustration from a collection of myths. (Photo credit: Wikipedia)

Some of the sages, magicians and priests would have been wise individuals who, fundamentally, did not believe the myths and stories in the writings but could see an opportunity; the vast majority of the religious officials, though, would have really believed the religious corpus.

When two cultures came into contact there would have been a mismatch in religious beliefs. Since the supernatural beings were, in general, born from disasters such as floods and landslides, it would not do to offend them.

Brisbane City Floods (Photo credit: Wikipedia)

But the guy from the city over there believed that the seas came from the salt tears of the goddess, while you knew that the seas arose when the god split the rocks and the seas sprang from the depths of the earth.

What to do about this? Well, in most cases the traders or travellers would have no problem with this, most people being practical in nature, but when the priests heard, well all hell would break loose.

Priest with cross at Lalibela (Photo credit: Wikipedia)

At the very least, some people would travel to other lands to try to persuade the inhabitants of their errors, and they would either succeed or fail. If they failed, they could be cast out or, possibly, put to death in various horrible ways.

If the missionaries were put to death, why then, that would escalate things, and war could be the end result. After all, yours was the one true religion, and we can’t have heathens lopping off the heads of true believers, can we?

A group of believers (Photo credit: Wikipedia)

So we get religious wars, crusades and jihads. Remember, although we cannot really conceive it these days, religion was the only explanation people had of the world. Science would be along in a few centuries. In this rational and largely atheistic world that we live in, we can’t really understand the fundamental belief in religion that used to prevail.

We teach religion as a subject in schools, like maths or geography. It’s largely been dissociated from feelings and even belief. This is why in the Western nominally Christian world we are uneasy when people believe deeply in religion. It seems to us like a sort of throwback to more ignorant times.

Religion is still strong in the rest of the world, though it does appear to be waning in influence. From our less religious point of view, the rabid followers of Islam seem insane and wrong, and it is hard for us to understand them at all. More moderate Muslims probably think that the so-called “radicals” are wrong, and are horrified by their actions, just as Westerners who are nominally Christian are horrified by the actions of the Ku Klux Klan or other extreme Christian cults.

Religions can and do exist side by side in many societies, but it is an odd situation. So long as people keep their views to themselves and practice their religion discreetly people get along. But if someone believes that their religion is the only true religion and that others are going to burn in hell or whatever, then that person would consider themselves to be justified in trying to save the others from themselves, by force if necessary. Or maybe that person believes that their deity requires them to force others to believe, and the same applies.

Maybe this is not the end of the story. Science is an explanation of the world, based on observation. It is possible, though unlikely in my view, that this world view is as misguided as religion. Maybe our descendants will look on science as we look on religion: as a necessary, but ultimately wrong-headed, view of life.

Science and Religion are portrayed to be in harmony in the Tiffany window Education (1890). (Photo credit: Wikipedia)




Posted in General, Miscellaneous, Philosophy, Politics, Religion, Science, Society

Britain’s exit from the European Union

In recent days we have seen Great Britain vote to withdraw from the European Union. While it is a significant event in itself, it perhaps points to a global trend of fragmentation, with large countries or unions splitting into smaller countries. These smaller countries are often ethnically different from other component countries that made up the original country.

The European Union (EU) started in 1951 as the European Coal and Steel Community, which gradually extended its remit to cover almost every aspect of life in Europe. The UK was not one of the original member states but joined in 1973. In 1975 there was a referendum on whether or not the UK should leave the EEC, or (as it was then known) the Common Market. The vote was to remain part of the EEC.


By Eec2016 (Own work) [CC BY-SA 4.0], via Wikimedia Commons

It’s fair to say that the 1975 referendum was a non-event. People of course did not know what the future would bring, and the aims and purposes of the EEC were, I believe, not understood. I saw no particular benefit, and I was proved correct by events. (I’ve just realised the pun hidden in that: in fact the vote was not ‘non’ but ‘oui’.)

Would trade between member countries have suffered if the UK had not voted in 1975 to continue to be part of the EEC? It’s impossible to say. Looking through the list of the EU’s claimed achievements, there is nothing that really strongly calls out to me, and most of the items could have been achieved regardless of whether or not the UK remained.

EU Referendum Results 2016

By Brythones (Own work) [CC BY-SA 4.0], via Wikimedia Commons

From the perspective of countries outside of the EU, the EU is a disadvantage. The EU has a big hand in all trade agreements, and countries like Australia and New Zealand can’t target their traditional markets in the UK.

One of the big advantages of the EU is supposed to be freer travel between member countries. This sounds great on paper, but passports are still mostly needed when people travel between countries, even though visas are not. While there is closer cooperation between member states on matters like drug trafficking, this will be offset to some extent by the freer travel between states.


Illegal drugs

Some people claim that the freedom of travel between member countries means that immigrants find it easier to travel between member countries and from the UK’s point of view this is all bad. An immigrant could obtain a passport in one country and immediately be able to travel to the UK for example.

It’s difficult to quantify some of the so-called advantages. For instance, being part of the EU supposedly provides greater influence in world affairs. However, the leaders of countries outside the EU do not in practice seem to meet with the leaders of the EU, instead meeting with representatives of the individual countries; and to outside countries, the EU typically appears to be a barrier to trade because of the huge amount of bureaucracy that surrounds anything to do with it.



When the UK removes itself from the EU, it will be able to deal directly with non-EU countries once more. Since the UK is one of the largest economies in the world, ranking sixth in GDP, it should have no difficulty forging favourable trade links with other countries. Even trade with EU countries should not be affected too much – as someone said, Mercedes Benz will still want to sell their cars into the UK.

If the split of the UK from the EU goes ahead, as seems likely, other countries may decide to exit too. This is not surprising of course, but this referendum may ultimately result in the dissolution of the EU back into its member states.

Ballot box

This follows a trend which seems to be gathering pace. In 1991 the former Soviet Union dissolved into its constituent states. In 1993 Czechoslovakia split into two states. In 2014 Scotland narrowly voted against independence from the United Kingdom. Potentially the USA could split into separate countries, with Texas perhaps the state most likely to secede from the union. China is a huge country and is another candidate for potential division.

The EU is a huge bureaucracy and even the Pope has warned that the rules and regulations are onerous. While there are many euro-myths, it can’t be denied that the EU rules and regulations tend to be wordy and overbearing, and it seems that they do not replace local rules and regulations but add to them.

No Dogs in Inn

Rules and regulations

For instance, I was looking at Directive 2000/13/EC, which relates to the labelling of foodstuffs. It runs to 36 pages, and there are 9 amendments and one correction to the document. It is full of references, cross-references, exceptions and special cases. One of the paragraphs reads, in full, “Ingredients shall be listed in accordance with this Article and Annexes I, II, III and IIIa”.

Much of this verbiage is designed to protect the end consumer of course, and this is good, but I can’t imagine that the local butcher, or even a supermarket butcher, has read all the regulations relating to the way he labels his merchandise. Yet a provider can be in trouble if he/she doesn’t comply with these regulations as enforced and possibly modified by member governments.

Food labelling


So, I think that Britain has done the right thing to start its withdrawal from the EU. It will cost a lot, billions over a number of years, but the price will be worth it. Scotland may decamp, but there were signs that that alliance was under strain anyway.

It’s a miracle though, that they decided to leave, as many people seem to be having second thoughts, even calling for a new referendum on the subject, with more than 2.5 million people signing a petition to hold one.  I can foresee a time when the 14th referendum on the subject is held and the question will be “Come on people! Make up your minds! Do we really, really want to exit the EU, or not? Please let’s make this the last time, OK?”


By Cafe cafes (Own work) [CC BY-SA 4.0], via Wikimedia Commons

There is a distinct note of concern in the comments of the man in the street about the result of the referendum. One guy admits to having voted “Leave”, but says that he didn’t think his vote would matter, and that he is now very worried. I think that this is mere nerves and the burden of having made a scary decision, but I believe that they got it right. Others are happy with their decision.



Posted in General, Miscellaneous, Politics, Society

What’s the probability?



Transparent die

We can do a lot with probability and statistics. If we consider the case of a tossed die, we know that it will result in a six about one time in six if the die is not biased in any way. A die that turns up six one time in six, and each of the other numbers also one time in six, we call a “fair” die.

We know that at any particular throw the chance of a six coming up is one in six, but what if the last six throws have all been sixes? We might become suspicious that the die is not after all a fair one.



The probability of six sixes in a row is one in six to the power of six, or one in 46,656. That’s improbable, but not outrageously so, if the die is fair. The probability of a six on the next throw, if the die is a fair one, is still one in six, and the stream of sixes does not mean that a non-six is any more probable in the near future.
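The arithmetic can be checked in a couple of lines of Python, using exact fractions:

```python
from fractions import Fraction

# Probability that six consecutive throws of a fair die are all sixes.
p_one_six = Fraction(1, 6)
p_six_sixes = p_one_six ** 6

print(p_six_sixes)         # 1/46656
print(float(p_six_sixes))  # ≈ 0.0000214
```

Note that this is the probability of the whole run of six throws; the probability attached to any single throw, including the next one, remains one in six.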

The “expected value” of the throw of a fair die is 3.5. This means that if you throw the die a large number of times, add up the shown values and divide by the number of throws, the average will be close to three and a half. The larger the number of throws, the closer the measured average is likely to be to 3.5.
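A short simulation illustrates this convergence (the seed is arbitrary, chosen only to make the run repeatable):

```python
import random

random.seed(1)

# Exact expected value of one throw of a fair die:
expected = sum(range(1, 7)) / 6   # (1+2+3+4+5+6)/6 = 3.5

# Empirical averages drift toward 3.5 as the number of throws grows.
for n in (100, 10_000, 1_000_000):
    throws = [random.randint(1, 6) for _ in range(n)]
    print(n, sum(throws) / n)
```

The 100-throw average typically wanders a fair way from 3.5; the million-throw average is usually within a few thousandths of it.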


Craps table

This leads to a paradoxical situation. Suppose that by chance the first 100 throws of a fair die average 3.3. That is, the die has shown more than the expected number of low numbers. Many gamblers erroneously think that the die is more likely to favour the higher numbers in the future, so that the average will get closer to 3.5 over a much larger number of throws. In other words, the future average will favour the higher numbers to offset the lower numbers in the past.

In fact, the “expected value” for the next 999,900 throws is still 3.5, and there is no favouring of the higher numbers at all. (The “expected value” of the next single throw, and of the next 100 throws, is also 3.5.)


Pile of cash

If, as is likely, the average for the 999,900 throws is pretty close to 3.5, the average for the 1,000,000 throws is going to be almost indistinguishable from the average for 999,900. The 999,900 throws don’t compensate for the variation in the first 100 throws – they overwhelm them. A fair die, and the Universe, have no memory of the previous throws.
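To make the arithmetic of that dilution concrete, here is a sketch in a few lines of Python, assuming the later throws come in exactly at their expected value:

```python
# Suppose the first 100 throws of a fair die happen to average 3.3,
# and the next 999,900 throws average their expected 3.5.
first = 100 * 3.3
rest = 999_900 * 3.5
overall = (first + rest) / 1_000_000
print(overall)  # approximately 3.49998
```

The early run of low numbers is not compensated for; it is simply swamped by the later throws.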

But hang on a minute. The Universe appears to be deterministic. I believe that it is deterministic, but I’ve argued that elsewhere. How does that square with all the stuff about chance and probability?



Given the shape of the die, its trajectory from the hand to the table, and all the extra little factors like local draughts, variations in temperature, gravity, the viscosity of the air and so on, it is theoretically possible that, if we knew all the affecting factors and had enough computing power, we could calculate what the die would show on each throw.

It’s much easier of course to toss the die and read the value from the top of the cube, but that doesn’t change anything. If we knew all the details we could theoretically calculate the die value without actually throwing it.



The difficulty is that we cannot know all the minute details of each throw. Maybe the thrower’s hand is slightly wetter than the time before because he/she has wagered more than he/she ought to on the fall of the die.

There are a myriad of small factors which go into a throw and only six possible outcomes. With a fair die and a fair throw, the small factors average out over a large number of throws. We can’t even be sure what factors affect the outcome – for instance, if the die is held with the six on top on each throw, is this likely to affect the result? Probably not.

Einstein's equation

E = mc²

So while we can argue that deterministic laws determine the number that comes up when the die is thrown, we still rely on probability and statistics to inform us about the results of throwing the die multiple times.

In spite of the seemingly random string of numbers from one to six that throwing the die produces, there appears to be no randomness in the cause of the string of results.



The apparent randomness appears to be the result of variations in the starting conditions, such as how the die is held for throwing and how it hits the table and even the elastic properties of the die and the table.

Of course there may be some effects from the quantum level of the Universe. In the macro world the die shows only one number at a time. In the quantum world a quantum die might show one with 99% probability, two with 0.8%, three with 0.11%, and so on, all adding up to 100%. We look at the die in the macro world and see a one, or a two, or a three… but the result is not predictable from the initial conditions.



Over a large number of trials, however, it is very likely that these quantum effects cancel out at the macro level. In maybe one in a very large number of trials the outcome is not the most likely outcome, and this or similar probabilities apply to all the numbers on the die. The effect is that the quantum effects are averaged out. (Caveat: I’m no quantum expert, and the above argument may be invalid.)

In other cases, however, where the quantum effects do not cancel out, the results will be unpredictable. One possible example is weather prediction, which is a notoriously difficult problem; weather forecasters are often castigated if they get it wrong.



So is weather prediction inherently impossible because of such quantum level unpredictability? It’s actually hard to gauge. Certainly weather prediction has improved over the years, so that if you are told by the weather man to pack a raincoat, then it is advisable to do so.

However, now and then, forecasters get it dramatically wrong. But I suspect that that is more to do with limited understanding of the weather systems than any quantum unpredictability.






Posted in Computing, General, Maths, Miscellaneous, Philosophy

Computer to Brain, Brain to Computer

At the dawn of computing, computers were essentially rooms full of racks and racks of circuits connected by mazes of cables. The circuits were formed out of electronic valves, relays, solenoids and other electronic and magnetic components, with not a single transistor to be seen, as semiconductors had not then been invented.

To reprogram such computers one often needed a soldering iron and a detailed knowledge of every part of the computer and how the parts interacted. From all accounts such machines were fickle, sometimes working, sometimes not.


English: “U.S. Army Photo”, from M. Weik, “The ENIAC Story” A technician changes a tube. Caption reads “Replacing a bad tube meant checking among ENIAC’s 19,000 possibilities.” Center: Possibly John Holberton (Photo credit: Wikipedia)

Since they were not housed in sterile environments or encased in a metal or plastic shell, foreign bodies could and did find their way into them and cause them to fail. Hence the concept of the computer bug. Computer pioneer Grace Hopper reported a real bug (actually a moth) in a computer and it made a great joke, but from the context of the report the term already existed.

As we know computer technology rapidly improved, and computers rapidly shrank, became more reliable, and bugs mostly retreated to the software. I don’t know what the architecture of the early room fillers was, but the architecture of most computers these days, even tablets and phones, is based on a single architecture.

This architecture is based on buses, and there is often only one. A bus is like a data highway: data is placed on this highway and read off it by various other computer circuits, such as the CPU (of which more later). To ensure that data is placed on the bus only when it is safe to do so, every circuit in the computer references a single system clock.


English: A Chennai MTC Volvo bus in front of the Royapettah clock tower, Chennai, India. (Photo credit: Wikipedia)

The bus acts much like the pass in a restaurant. Orders are placed on it, and data is also placed on it, much like orders are placed through the pass and meals come the other way in a restaurant. Unlike the restaurant’s pass however, there is no clear distinction between orders and data and the bus doesn’t have two sides corresponding to the kitchen and the front of house in a restaurant.

Attached to the bus are the other computer components. As a minimum, there is a CPU, and there is memory. The CPU is the bit that performs the calculations, or the data moves, or whatever. It is important to realise that the CPU itself has no memory of what has been done, and what must be done in the future. It doesn’t know what data is to be worked on either.


The ZX81 PCB. The circuits are (from left to right) ULA, Z80 CPU, 8 Kb ROM and two memory circuits making up 1 Kb RAM. (Photo credit: Wikipedia)

All that stuff is held in the memory, data and program. Memory is mostly changeable, and can contain data and program. There is no distinction in memory between the two.

The CPU looks on the bus for what is to be done next. Suppose the instruction is to load data from the bus to a register. A register is a temporary storage area in the CPU. The CPU does this and then looks for the next instruction which might be to load more data from the bus to another register, and then it might get an instruction to add the two registers and place the result in a third register. Finally it gets told to place the results from the third register onto the bus.
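The fetch-and-execute loop described above can be sketched as a toy simulation. The instruction names, register names, and memory layout below are illustrative inventions, not any real instruction set:

```python
# A toy sketch of the fetch-decode-execute cycle. Note that program and
# data live in the same memory, just as described in the text.
memory = {
    0: ("LOAD", "r0", 100),         # load the value at address 100 into r0
    1: ("LOAD", "r1", 101),         # load the value at address 101 into r1
    2: ("ADD", "r2", "r0", "r1"),   # r2 = r0 + r1
    3: ("STORE", "r2", 102),        # place r2 back "on the bus" to address 102
    4: ("HALT",),
    100: 7,                         # data, indistinguishable from program
    101: 35,
}

registers = {"r0": 0, "r1": 0, "r2": 0}
pc = 0  # program counter: which instruction to fetch next

while True:
    instr = memory[pc]  # fetch the next instruction
    op = instr[0]
    if op == "LOAD":
        registers[instr[1]] = memory[instr[2]]
    elif op == "ADD":
        registers[instr[1]] = registers[instr[2]] + registers[instr[3]]
    elif op == "STORE":
        memory[instr[2]] = registers[instr[1]]
    elif op == "HALT":
        break
    pc += 1

print(memory[102])  # 42
```

The CPU itself carries no lasting state beyond its registers and program counter; everything it needs is fetched from memory, one clock-synchronised step at a time.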


English: Simplified diagram of a computer system implemented with a single system bus. This modular organization was popular in the 1970s and 1980s. (Photo credit: Wikipedia)

I was not entirely correct when I said that there was only one bus in a computer. Other chips have interfaces on the main bus, but have interfaces on other buses too. An example would be the video chip, which has to interface to both the main bus and the display unit. Another example is the keyboard. A computer is not much use without input and output!

The architecture that I’ve described is incorporated in almost all devices that have some “intelligence”. Your washing machine almost certainly has it, and as I said above so do your tablets and phones. Your intelligent TV probably does, and even your stove/range may do. These days we are surrounded by this technology.


The microcontroller on the right of this USB flash drive is controlled with embedded firmware. (Photo credit: Wikipedia)

The above is pretty much accurate, though I may have glossed and elided some facts. Although the technology has advanced tremendously over the years, the underlying architecture is still based around the bus concept, with a single clock synchronizing operations.

Within the computer chips themselves, the clock is of prime importance as it ensures that data is in the right place at the right time. Internally a computer chip is a bit like a train set, in that strings of digits flow through the chip, passing through gates which merge and split the bits of the train to perform the calculations. All possible tracks within the chip have to be traversable within a clock cycle.


English: Chips & Technologies Super 386 (Photo credit: Wikipedia)

Clockless chips may some day address the on-chip restrictions, though the article I cite was from 2001. I’m more interested in the off-chip restrictions, the ones that spring from the necessity to synchronise the use of the bus. This pretty much defines how computers work and limits their speed.

One possibility is to ditch the bus concept and replace it with a network concept: little bits of computing power could be distributed throughout the computer and could either be signalled with the data and the instructions to process it, or the computation could be distributed to many computational units, with the results then assessed and the majority taken as the “right” answer. The instructions could be dispensed with if a computational unit only does one task.
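As a rough illustration of the majority-vote idea (entirely my own sketch, not a real architecture), imagine handing the same computation to many small, occasionally unreliable units and taking the most common answer:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is repeatable

def unreliable_unit(x, error_rate=0.1):
    """A tiny computational unit: doubles x, but occasionally fails
    and returns garbage."""
    if random.random() < error_rate:
        return random.randint(0, 100)
    return x * 2

# Distribute the same task to 99 units and take the majority answer.
results = [unreliable_unit(21) for _ in range(99)]
answer, votes = Counter(results).most_common(1)[0]
print(answer)  # 42, despite some units failing
```

The appeal of the scheme is that no single unit needs to be reliable; correctness emerges statistically from the ensemble, which is arguably closer to how biological systems operate.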


Network Computing Devices NCD-88k X terminal, back ports. (Photo credit: Wikipedia)

The computational units themselves could be ephemeral too, being formed and unformed as required. This would lead to the “program” and “computation” being distributed across the device as well as the data. Data would be ephemeral too, fading away over time, being reinforced if necessary by reading and writing, much like early computer memory was refreshed on each cycle of the clock.

What would such a computer look like? Well, I’d imagine that it would look something like the mass of grey matter between your ears. Data would exist in the device as an echo, much like our memories do, and processing would be distributed through the device much like our brains seem to work. Like the brain it is likely that such a computing device would be grown, and likely some structures would be mostly dedicated to certain tasks, as in the brain.

One big advantage that I see for such “devices” is that it should be very easy to interface them to the brain, as they would work on similar principles. It does mean though that we would be unlikely to be able to download one of these devices to a conventional computer, just as the contents of a brain could never be downloaded to a conventional computer.

On the other hand, the contents of a brain could conceivably be downloaded to a device like the one I have tried to describe.

Posted in Computing, Internet, Miscellaneous, Philosophy, Science