## Sinistralism

Photo of the sinistral shell of Achatina fulica. Locality: Mauritius. (Photo credit: Wikipedia)

At one time banks and post offices tied down the pens they provided for people to sign things, such as cheques, with a piece of string or chain, to prevent customers from stealing them. These days you are more likely to have a bunch of pens (with the bank’s logo) pressed on you.

Anyway, I was always discomforted by these tied-down pens, as I am left-handed and the string or chain was always fixed to the right of the counter and was rarely long enough for me to sign my cheques easily with my left hand. As a result I was cramped up and twisted round as I manoeuvred the cheque book closer to the right-hand side.

Double Helix (Photo credit: Wikipedia)

This is only one of the many times that my sinistralism has left me disadvantaged. Corkscrews and scissors, even power tools, wood screws and nuts and bolts are all designed for the majority who are right-handed. (I believe that the correct term would be dextralist). Can openers are the things that I find most tricky to use.

However, sinistralists living in a dextral world soon learn to use right handed tools, to at least some level of competence. There are tools made specifically for left-handed people and it is quite funny to give one to a right-hander. They don’t have a clue! This is because they have not had to learn to use tools with a sinistral configuration, whereas a sinistral has at least a level of competence with right-handed tools since they are everywhere.

A modern pair of open-fingered MMA gloves (Photo credit: Wikipedia)

The Universe obviously has a basic chirality or handedness, which is a bit odd to say the least. It’s interesting to wonder why chirality exists. Is it just so that we can easily remove corks from bottles of wine? “Thanks, Deity, that’s just what we needed. Will you take a glass?”

Chirality makes it hard for some creatures. Imagine that you are a mollusc with a left-coiled shell and all the other molluscs are right-coiled. You’d find partners rare. Of course, it can’t make reproduction too difficult or the left-coiled molluscs would become extinct within a generation or so. (That’s probably too simplistic, but let’s not quibble).

Bromochlorofluoromethane as an example of a chiral molecule. (Photo credit: Wikipedia)

Chirality allows us to have streams of traffic travelling both ways on a single road, but I’m sure that Deity would have considered that and arranged for some way that we could duplex our highways. Allowing a way for solid matter to inter-penetrate would be one, but perhaps “both directions” implies chirality anyway and in a non-chiral universe there would only ever be one way.

In some universes (and maybe in this one for all I know) there may be even more complex situations. One reflection of a chiral object changes it into another object that can’t be superimposed on the original. A second reflection does allow it to be superimposed on the original. Imagine a universe where three or more reflections are needed to allow the superimposition! In that universe the motorways and autobahns would need at least three carriageways.
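In our familiar geometry, the two-reflection rule can be checked with a little matrix arithmetic. This is an illustrative sketch of my own (the matrices are the standard reflections in the x- and y-axes): a single reflection reverses orientation, while two reflections compose to an ordinary rotation.

```python
# Reflections in the plane, written as 2x2 matrices [[a, b], [c, d]].
def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

reflect_x = [[1, 0], [0, -1]]   # reflection in the x-axis
reflect_y = [[-1, 0], [0, 1]]   # reflection in the y-axis

# One reflection has determinant -1: it reverses orientation, turning a
# chiral shape into its non-superimposable mirror image.
assert det(reflect_x) == -1

# Two reflections compose to determinant +1: an ordinary rotation, so the
# doubly reflected shape can be superimposed on the original again.
assert det(matmul(reflect_y, reflect_x)) == 1
```

The determinant is doing the work here: every reflection carries determinant −1, so an even number of them multiplies back to +1.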

English: A40 at Whitchurch looking eastwards View of both carriageways of the A40 looking eastwards from a vantage point on the road bridge in Whitchurch. (Photo credit: Wikipedia)

Some sub-atomic particles have an interesting spin characteristic. The linked Wikipedia article says in one place:

However, if s is a half-integer, the values of m are also all half-integers, giving (−1)2m = −1 for all m, and hence upon rotation by 2π the state picks up a minus sign.

Rotation through 2π is a rotation through 360 degrees. So the state of a particle with a spin of ½ picks up a minus sign after one full rotation. At least I think that’s what that means. Two full rotations and it is back to its original state (or at least, its original spin value).
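That minus sign can be seen directly from the standard rotation operator for a spin-½ state. A minimal sketch (rotation about the z-axis; the two returned numbers are the phases acquired by the two spinor components):

```python
import cmath
import math

def rotate_spin_half(theta):
    """Phases picked up by the two components of a spin-1/2 state
    when it is rotated by angle theta about the z-axis."""
    return (cmath.exp(-1j * theta / 2), cmath.exp(1j * theta / 2))

# One full turn (2*pi): both components pick up a factor of -1.
up, down = rotate_spin_half(2 * math.pi)
print(round(up.real), round(down.real))   # -1 -1

# Two full turns (4*pi): back to where it started.
up, down = rotate_spin_half(4 * math.pi)
print(round(up.real), round(down.real))   # 1 1
```

The half-angle in the exponent is exactly why a spin-½ state needs 720 degrees, not 360, to come back to itself.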

Spin-Flip (Photo credit: Wikipedia)

In the macro world two reflections of an object create an image that can be superimposed on the original. That means that if you look at the reflection of your reflection you see yourself as others see you rather than your mirror image. You are so much more used to seeing your reflection that it can be difficult to comb your hair while looking at a doubly reflected image of yourself.

Chiral objects are not symmetrical. Reflections of symmetrical objects can be superimposed on the original object however (after being moved and turned). Symmetry and asymmetry are part of the fabric of the universe that we live in and I think that having both allows for much more complexity in our universe than would be possible in a universe without the symmetry/asymmetry dichotomy.

Volvo fire engine of the “Bedfordshire and Luton Fire & Rescue Service”, with “FIRE” in mirror writing (“ERIF”) (Photo credit: Wikipedia)

One of the great questions of philosophy is “Why does anything exist?” or “Why does something exist rather than nothing?” The real answer to this question is that no one knows and no one is likely to know. I suppose the Deity might, if the Deity Itself exists. Given that the universe exists, why does it embody the concept of left and right? Again, no one knows or is likely to know, or so it appears at this time.

One reason may be that without asymmetry one could probably not travel from A to B. If one set out from A towards B in a universe without the concepts of symmetry and asymmetry, and someone else was travelling in the opposite direction, then to pass one another the two travellers would have to go by on one side or the other of each other.

Line art drawing demonstrating the difference between symmetric and asymmetric. (Photo credit: Wikipedia)

It seems to me that the concept of “side” implies the concepts of “left” and “right”. Asymmetry comes in because each traveller passes to the left or the right of the oncoming traveller. Symmetry comes in because, to pass successfully, both travellers have to keep to the same rule: each to their own left, or each to their own right.

Of course, that is only true for our universe. It is conceivable that a universe could exist where symmetry and asymmetry do not exist, but it is a universe which we would, most likely, be unable to conceive of, except in the broadest terms.


## Speed


(Posted late again! Whoops!)

Every time I write my 1,000 words it is a challenge but sometimes it is more of a challenge. As I’ve said before, sometimes I know roughly what I want to include, while other times I pick a topic and go for it. This post is one of the latter.

Speed. Anyone who has been on the Internet since the early days knows about speed. When I hear people complaining about the speed of their connection I quietly laugh as I consider the days of dial-up, and of 2400 baud modems. A one-megabyte download could take half an hour to an hour, if the connection held up that long.

A Telia SurfinBird 56k modem, made by Telia. They often came with an Internet package from the company in the late 1990s. (Photo credit: Wikipedia)

As an aside, the “skreee, Kaboinga, boinga, boinga, skeeee…” of a dial up modem connecting induces nostalgia in me, though I’d not go back to those days! Today’s Internet is sometimes called the Information Superhighway. The old dial up Internet was much like a dirt road. With potholes.

When one takes a journey, say from one end of the country to the other, one sets out on local roads, which may or may not be congested, then one travels over the Motorways, or the Interstates, or the Autobahns. Then one travels on the local roads at the far end. Any of these may be congested, but the local roads are most likely to be slower to traverse.

An automobile on the sweeping curves of the Autobahn, with a view of the countryside. (Photo credit: Wikipedia)

The same is true of the Internet. Your local ISP is the equivalent of the local roads at your end of the trip, and the target website, or whatever you are connecting to, sits on its own ISP’s local network: the roads at the far end.

If we delve a little further, it is evident that the copper or fibre that makes up the Internet is not really the limiting factor in its speed. Signals in fibre travel at the speed of light in glass, which is somewhat less than the speed of light in empty space, but close to it. Signals in copper travel slower still, but still at a significant fraction of the speed of light. (I’d put a link in here, but the subject is complex and I found no clear explanation. YMMV).
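To put rough numbers on that, light in glass fibre travels at about two-thirds of its vacuum speed. The velocity factor below is an assumption for illustration; real cables vary.

```python
C = 299_792.458          # speed of light in a vacuum, km/s
VELOCITY_FACTOR = 0.67   # assumed figure for light in glass fibre

def propagation_ms(distance_km):
    """One-way propagation delay over fibre, in milliseconds."""
    return distance_km / (C * VELOCITY_FACTOR) * 1000

# Even a 2,000 km fibre run costs only about 10 ms in raw propagation,
# so the glass itself is not where most of the delay comes from.
print(round(propagation_ms(2000), 1))   # ~10 ms
```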

Fibre optic cables sign at Exe Water Bridge. Over the last few years fibre optic cables for TV and phones have been laid along many rural roads. Usually the indication of their presence is the presence of manhole covers at intervals along the road. Here it is evident that the cable had to go beside the road at the bridge – and probably under the river – so the sign is a warning to other utilities who may dig up the roadside. (Photo credit: Wikipedia)

The real reasons that fibre is preferred over copper are the huge bandwidth and the much smaller attenuation of fibre cables. Bandwidth is often described in terms of how many lanes a highway has. Obviously the more lanes, the more traffic a highway can handle. Attenuation is interesting. It’s as if your car starts out in New York in pristine condition, but deteriorates en route to Los Angeles, until it arrives at its destination looking like a bucket of bolts, if it makes it that far.
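Attenuation is usually quoted in decibels per kilometre, which makes the car analogy easy to quantify. A small sketch with assumed, illustrative loss figures (modern fibre is of the order of a fifth of a decibel per kilometre; copper at high frequencies is far lossier):

```python
def surviving_fraction(loss_db_per_km, distance_km):
    """Fraction of signal power left at the end of a cable run."""
    total_loss_db = loss_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

# Assumed, illustrative loss figures.
print(surviving_fraction(0.2, 100))   # fibre: about 1% of the power survives 100 km
print(surviving_fraction(3.0, 100))   # copper-ish: essentially nothing is left
```

One percent sounds grim, but it is easily enough for a receiver to work with; the copper figure is why long copper runs need repeaters every few kilometres.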

Whether or not the packets are transmitted by fibre or copper, the signal must somehow be loaded onto the cable, and this takes significant time. The packet of data is placed into a register on the network connector by the computer and a special chip translates that to a stream of bits on the wire, or pulses of light in the fibre.


These pulses then whizz off onto the network. However, they don’t travel all the way in one hop. Your computer connects to a modem device that sends the signal to your ISP, where the signals hit a router. This device reads in a whole packet of data, then sends it off again towards its destination. The packet doesn’t reach its destination in one hop; there may be a dozen or more hops before it gets there.

At the beginning and the end of each and every hop there is a device that grabs the packet of data off the wire, decides where to send it and puts it on another wire. These devices are called ‘routers’, a term which many people will have heard. As you can imagine, each hop adds a delay (or latency) to the packet of data. These delays are quite small but they add up.
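Those per-hop delays are easy to model. The numbers below are invented for illustration, but the point, that a dozen small delays add up to something noticeable, holds:

```python
# Invented per-hop processing delays, in milliseconds, for a
# twelve-hop path (illustrative numbers, not measurements).
hop_delays_ms = [0.5, 1.2, 0.8, 2.0, 1.5, 0.9, 3.1, 1.1, 0.7, 2.4, 1.3, 0.6]

total = sum(hop_delays_ms)
print(f"{len(hop_delays_ms)} hops add about {total:.1f} ms on top of raw propagation")
```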

Avaya ERS 8600 (Photo credit: Wikipedia)

So the Information Superhighway doesn’t look that flash after all. Sometimes I wonder how data actually gets through at all. It’s as if there were a multi-lane highway across the country (the world even), but studded with interchanges which take significant time to traverse, increasing the time taken for the trip: light would take a few milliseconds to get from here to Sydney if we had line of sight, but over the Internet it takes tens of milliseconds.

Satellites are even worse. To reach a geostationary communications satellite and get a reply takes of the order of half a second, an age in computing terms. Of course most communication satellites orbit much lower than that, so the delay is not as much as half a second, but it is still significant.
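The half-second figure follows directly from the geometry of a geostationary orbit: the message goes up and down once to reach the far station, and the reply does the same, so four satellite legs in all.

```python
C = 299_792.458         # speed of light, km/s
GEO_ALTITUDE = 35_786   # height of a geostationary orbit above the equator, km

# A question-and-answer exchange crosses the up-down path twice:
# four legs of roughly 36,000 km each.
round_trip_s = 4 * GEO_ALTITUDE / C
print(round(round_trip_s, 2))   # ~0.48 s: the "half a second" of the text
```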


The advent of streaming services has exacerbated the problem. The basic issue is twofold. To use the motorway analogy, there are many more cars on the road and as a result the interchanges are becoming crowded resulting in congestion. In general the motorways themselves are fast and free-flowing, but the interchanges have not caught up.

An ingenious partial solution is to strategically place machines around the world which effectively distribute the stuff that people want to receive, so that the same content is available locally. It’s like taking all the copies of the pictures in a gallery in Los Angeles and storing the copies in New York, Miami, Washington and so on. Rather than having to go all the way to Los Angeles, a New York viewer sees the picture locally.
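A toy sketch of that idea: given latency measurements from one viewer to each city holding a copy of the content, serve the request from whichever copy answers fastest. The cities and numbers are made up for illustration.

```python
# Made-up latencies from one viewer to each city holding a copy
# of the content, in milliseconds.
edge_latency_ms = {
    "New York": 12,
    "Washington": 20,
    "Miami": 48,
    "Los Angeles": 70,   # the origin, far from this particular viewer
}

# Serve the request from whichever copy answers fastest.
nearest = min(edge_latency_ms, key=edge_latency_ms.get)
print(nearest)   # New York
```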

In the diagram shown, we see an “Akamaized” website; this simply means that certain content within the website (usually media objects such as audio, graphics, animation, video) will not point to servers owned by the original website, in this case ACME, but to servers owned by Akamai. It is important to note that even though the domain name is the same, namely http://www.acme.com and image.acme.com, the IP address (server) that image.acme.com points to is actually owned by Akamai and not ACME. (Photo credit: Wikipedia)

So the sources of delay when you stream the latest TV shows are many. The first possibility is your own setup: maybe your network and modem are not up to the task. Secondly there is the telecom network. Tricky stuff happens between you and your ISP which is the province of the telcos. I don’t know the ins and outs of it, but in some cases switching a couple of connections in the roadside cabinet or in the exchange helps.

Then there is your ISP. ISPs will be keeping a close eye on the traffic through their part of the network, but the rapid rise of the streaming services has caught them a little unawares, and some are scrambling to keep up. Then there is the Internet backbone. It is unlikely that there are issues here. Finally there are the target ISP’s network, the target site’s network and the site itself. Any of these could cause issues, but they are way beyond the control of the end user and his/her ISP.


Speeds on the Internet are phenomenal when compared to the early days. Things are much more complex these days. It is amazing what can be achieved, and those of us who have experienced the early days are less likely to whinge about speed issues as we remember what it was like!


## Documentation

Script Installer documentation page (Photo credit: Wikipedia)

Documentation. The “D word” to programmers. In an ideal world programs would document themselves, but this is not an ideal world, though some programmers have attempted to write programs to automatically document programs for them. I wonder what the documentation is like for such programs?

gcc under KDE (Photo credit: Wikipedia)

To be sure if you write a program for yourself and expect that no one else will ever look at it, then documentation, if any, is up to you. I find myself leaving little notes in my code to remind my future self why I coded something in a particular way.

Such informal documentation can be amusing, and maybe frustrating, when you come across someone else’s comments such as “I’m not sure how this works” or “I don’t remember why I did this but it is necessary”. More frequently there will be comments like “Finish this bit later” or the even more cryptic “Fix this bit later”. Why? What is wrong with it? Who knows?

A bug in MathJax (Photo credit: Wikipedia)

The problem with such informal, in-code documentation is that you have to think about what the person reading the code will want to know at that stage. Add to this the fact that, when adding the comments, the programmer is probably focussing on what he/she will be coding next, and the comments are likely to be terse.

Add to this the fact that code may be changed but the comments often are not. The comment says “increment month number” while the code actually decrements it. Duh! A variable called “end_of_month” is inexplicably used as an index into an array or something.
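A contrived example of this “comment rot”, invented for illustration: the comment was true once, then the code changed underneath it.

```python
def previous_month(month):
    # increment month number   <-- a comment nobody updated...
    month -= 1                 # ...while the code now decrements
    if month == 0:             # wrap January back round to December
        month = 12
    return month

print(previous_month(1))   # 12
print(previous_month(7))   # 6
```

The code is perfectly correct; it is only the comment that lies, which is arguably worse than no comment at all.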

The “Hello World” program (Photo credit: Wikipedia)

Anyone who has ever done any programming to a level deeper than the usual beginner’s “Hello World!” program will know that each and every programmer has tricks which they use when coding, and that such tricks get passed from programmer to programmer with the result that a newcomer looking at code may be bamboozled by it. The comments in the code won’t help much.


Of course such programming tricks may be specific to the programming language used. While the same task may be achieved by similar means at a high level, the lower level of code will be significantly different. While that may seem to impose another barrier to understanding, I’ve found that it is usually reasonably easy to work out what is going on in a program, even if you don’t “speak” that particular language, and the comments may even help!

While internal documentation is hit and miss, external documentation is often even more problematic. If the programmer is forced to write documents about his programs, you will probably find that the external documentation is incomplete, inaccurate or so terse it is of little help in understanding the program.

Diagram of the mechanism of using Perl modules. (Photo credit: Wikipedia)

In my experience each programmer will document his/her programs differently. Programmers like to program, so they will spend the least possible amount of time on documentation. He/she will only include what he/she thinks is important, and of course, the programmer is employed to program, so he/she might get dragged away to write some code and conveniently forget to return to the documentation.

If the programmer is at all interested in the documentation, and some are, he/she will no doubt organise it as he/she thinks fit. Using a template or model might help in this respect, but the programmer may add too much detail to the documentation – a flowchart may spread to several pages or more, and such flowcharts can be confusing and the source of much frustrated page-turning.

Lava lamp flowchart (Photo credit: Wikipedia)

Of course there are standards for documentation, but perhaps the best documentation of a program would specify the inputs, specify the outputs, and then describe at a high level how the one becomes the other. As I mentioned above, a programmer will probably give too much detail of how inputs become outputs.

Documentation tends to “decay” over time, as changes are made to the program and the documentation is rarely revisited, so the users of the program may need to fill in the gaps – “Oh yes, the documentation says that we need to provide the data in that format, but in fact that was changed two years ago, and we now need the data in this format”.

Legacy of the Ancients (Photo credit: Wikipedia)

The problem is worse if the programmer has moved on and gone to work elsewhere. Programmers tend to focus on the job in hand, to write the program to do the job required and then move on to the next programming task, so such code comments as there are will have been written while the program itself was being written. Such comments are likely to be notes from the programmer to him/herself about the issues of the moment.


So you get comments like “Create the app object” when the programmer wants a way to collect the relevant information about the data he/she is processing. Very often that is all that one gets from the programmer! No indication about why the object is needed or what it comprises. The programmer knows, but he/she doesn’t feel the need to share the information, because he/she doesn’t think about the next person to pick up the code.

Picture of an ancient pipe documenting the foundation of the student fraternity Guestphalia Bonn (Photo credit: Wikipedia)

I don’t want to give the impression that I think that documentation is a bad thing. I’m just pondering the topic and giving a few ideas on why documentation, as a rule, sucks. As you can imagine, this was sparked by some bad/missing documentation that I was looking for.

Open source software is particularly bad at this, as the programmer has an urge to get his/her program out there but no equal urge to document it. After all, a user can look at the code, can’t he/she? Of course he/she could, but that is tricky for large programs, which will probably be split into dozens of smaller ones, and the user has to be at least a passable programmer him/herself to make sense of it. Few users are.

Screenshot of the open source Java game Ninja Quest X (I am one of the programmers) (Photo credit: Wikipedia)

So I go looking for documentation for version 3.2 of something and find only incomplete documentation for version 2.7 of it. I also know that big changes occurred in the move from the second version of the program to the third, undocumented, version. Ah well, there’s always the forums. Hopefully there will be others who have gone through the pain of migration from the second version to the third and who can fill in the gaps in the documentation too.

Parse tree of Python code with inset tokenization (Photo credit: Wikipedia)

## Philosophy and Science


Philosophy can be described, not altogether accurately, as covering the things that science can’t address. With the modern urge to compartmentalise things, we designate some problems as philosophy and others as science, and conveniently ignore the fuzzy boundary between the two disciplines.

The ancient Greek philosophers didn’t appear to distinguish much between philosophy and science as such, and the term “Natural Philosophy” described the whole field before the advent of science. The Scientific Revolution of Newton, Leibniz and the rest had the effect of splitting natural philosophy into science and philosophy.

Statue of Isaac Newton at the Oxford University Museum of Natural History. Note apple. (Photo credit: Wikipedia)

Science is (theoretically at least) built on observations. You can’t seriously believe a theory that contradicts the facts, although there is a get-out clause. You can believe such a theory if you have an explanation as to why it doesn’t fit the facts, which amounts to having an extended theory that includes a bit that contains the explanation for the discrepancy.

Philosophy, however, is intended to go beyond the facts. Way beyond the facts. Philosophy asks questions, for example, about the scientific method and why it works, and why it works so well. It asks why things are the way they are, and other so-called “deep” questions.


One of the questions that Greek philosopher/scientists considered was what everything is made of. Some of them thought that it was made up of four elements, and some people still do. Democritus had a theory that everything was made up of small indivisible particles, and this atomic theory is a very good explanation of the way things work at a chemical level.

Democritus and his fellow philosopher/scientists had, it is true, some evidence to go on, and to be fair so did those who preferred the four-elements theory, but the idea was more philosophical than scientific in nature, I feel. It was evident that while many substances could be broken down into their components by chemical methods, some could not.

Antoine Lavoisier developed the theory of combustion as a chemical reaction with oxygen (Photo credit: Wikipedia)

So Democritus would have looked at a lump of sulphur, for example, and considered it to be made up of many atoms of sulphur. The competing theory of the four elements however can’t easily explain the irreducible nature of sulphur.

My point here is that while these theories explained some of the properties of matter, the early philosopher/scientists were not too interested in experimentation, so these theories remained philosophical theories. It was not until the Scientific Revolution arrived that these theories were actually tested, albeit indirectly, and the science of chemistry took off.

Model for the Three Superior Planets and Venus from Georg von Peuerbach, Theoricae novae planetarum. Image enhanced for legibility. The abbreviations in the center of the diagram read: C[entrum] æquantis (Center of the equant) C[entrum] deferentis (Center of the deferent) C[entrum] mundi (Center of the world) (Photo credit: Wikipedia)

Before that, chemical knowledge was largely a matter of recipes and instructions. Once scientists realised the implications of atomic theory, they could predict chemical reactions and even weigh atoms, or at least assign masses to atoms, and atomic theory moved from philosophy to science.

That’s not such a big change as you might think. Philosophy says “I’ve got some vague ideas about atoms”. Science says “Based on observations, your theory seems good and I can express your vague ideas more concretely in these equations. Things behave as if atoms really exist and behave that way”. Science cannot say that things really are that way, or that atoms really exist as such.

Adenine chemical structure with atom numbers (Photo credit: Wikipedia)

Indeed, when scientists took a closer look at these atom things they found some issues. For instance the (relative) masses of the atoms are mostly pretty close to integers. Hydrogen’s mass is about 1, Helium’s is about 4, and Lithium’s is about 7. So far so tidy. But Chlorine’s mass is measured as not being far from 35.5.

This can be resolved if atoms contain constituent particles which cannot be added or removed by chemical reactions. A Chlorine atom behaves as if it were made up of 17 positive particles and either 18 or 20 uncharged particles of more or less the same mass. If you measure the average mass of a bunch of Chlorine atoms, it will come out at 35.5 (ish). Problem solved.
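The arithmetic is simple: the measured value is a weighted average over the isotopes. With rough, assumed abundances of about three quarters chlorine-35 and one quarter chlorine-37:

```python
# Assumed round figures: roughly three quarters of natural chlorine is
# chlorine-35 (17 + 18 particles) and one quarter is chlorine-37 (17 + 20).
isotopes = [(35, 0.76), (37, 0.24)]

average_mass = sum(mass * abundance for mass, abundance in isotopes)
print(round(average_mass, 1))   # 35.5 (ish), matching the measured value
```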

Chlorine gas (Photo credit: Wikipedia)

Except that it has not been solved. Democritus’s atoms (the word means “indivisible”) turn out to be made up of something else. The philosophical problem is still there. If atoms are not indivisible, what are their component particles made of? The current answer seems to be that they are made of little twists of energy and probability. I wouldn’t put money on that being the absolute last word on it, though. Some people think that they are made up of vibrating strings.

All through history philosophy has been raising issues without any regard for whether or not the issues can be solved, or even put to the test. Science has been taking issues at the edges of philosophy and bringing some light to them. Philosophy has been taking issues at the edge of science and conjecturing on them. Often such conjectures are taken back by science and moulded into theory again. Very often the philosophers who conjecture are the scientists who theorise, as in famous scientists like Einstein, Schroedinger and Hawking.

The Black Hole, Los Alamos (Photo credit: Wikipedia)

The end result is that the realm of philosophy is reduced somewhat in some places and the realm of science is expanded to cover those areas. But the expansion of science suggests new areas for philosophy. To explain some of the features of quantum mechanics some people suggest that there are many “worlds” or universes rather than just the one familiar to us.

This is really in the realm of philosophy as it is, as yet, unsupported by any evidence (that I know of, anyway). There are philosophers/scientists on both sides of the argument so the issue is nowhere near settled and the “many worlds interpretation” of quantum mechanics is only one of many interpretations. The problem is that quantum mechanics is not intuitively understandable.

Diagram of one interpretation of the Nine Worlds of Norse Mythology. (Photo credit: Wikipedia)

The “many worlds interpretation”, at least so far as the Wikipedia article goes, views reality as a many-branched tree. This seems unlikely, as probabilities are rarely as binary as a branched tree. Probability is a continuum, like space or time, and it may be that any event is represented on a dimension of space, time, and probability.

I don’t know if such a possibility makes sense in terms of the equations, so that means that I am practising philosophy and not science! Nevertheless, I like the idea.

Displacement of a continuum body, from a reference configuration to the current configuration. Continuum mechanics. (Photo credit: Wikipedia)

## Television


Television as a medium is less than one hundred years old, yet in the sense of a broadcast over radio waves, it seems doomed as the rise of “streaming” sites takes over the role of providing the entertainment traditionally provided by broadcast television.

My first recollection of television was watching the televising of the coronation of Queen Elizabeth the Second. I can’t say that I was particularly interested at the time, but I do remember that what seemed a large number of people (probably 20 or so, kids and adults) crowded on one side of the room while the television across the room showed its flickering images on its nine inch screen.

Coronation of Queen Elizabeth II (Photo credit: Wikipedia)

I remember when at a later date my father brought home our first television. It was a large brown cabinet with a tiny noticeably curved screen. When it was set up properly and working, it displayed a black and white image on a screen which was smaller than the screen of an iPad.

The scan lines on the screen were easily visible, and the circuits that generated the scan were unstable, so the picture would flicker, roll from top to bottom, and tear from left to right. Then someone would have to jump up and twiddle some knobs on the rear of the set to adjust it back into stability, or near stability.


To start with, many people did not have aerials on their roofs. For one thing, television was new, and for another the aerials were huge: large constructions, either in an X shape or an H shape, several feet in length. Most people started with an internal aerial, the so-called “rabbit ears”.

These were small, low down and generally didn’t work too well, as they were nowhere near comparable in size to the wavelength of the transmitted signal. Nevertheless they enabled people, in most cases, to get some sort of a picture on their new televisions.


The trouble was that with a weak signal and unstable circuits, the person leaning over the television to tune it more often than not affected the circuits and signal. With the rest of the family yelling instructions and a clear(-ish) picture finally on the screen, it only took the person tuning the set to step away for the picture to be lost again.

Of course, soon everyone had an aerial on the roof, and the aerials shrank in size as television moved to higher frequencies and as the technology improved. The classic television receiver aerial is a bristly device, sometimes with a smallish mesh reflector: a single dipole plus several reflectors and directors, which pretty obviously points towards a television broadcast station.
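The shrinking aerials follow from the relationship between frequency and wavelength: a dipole element needs to be comparable to half a wavelength, and higher frequencies mean shorter wavelengths. A rough sketch (the two frequencies are illustrative, standing in for early VHF and later UHF television):

```python
C = 299_792_458   # speed of light, m/s

def half_wave_dipole_m(freq_mhz):
    """Approximate length of a half-wave dipole element, in metres."""
    wavelength_m = C / (freq_mhz * 1e6)
    return wavelength_m / 2

# Illustrative frequencies: early VHF television versus later UHF.
print(round(half_wave_dipole_m(50), 2))    # ~3 m: hence the huge early aerials
print(round(half_wave_dipole_m(600), 2))   # ~0.25 m at UHF
```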

A self-made sketch of a Yagi antenna (Photo credit: Wikipedia)

Many towers sprang up on the tops of convenient hills to provide the necessary coverage, and it is a rare place these days where the terrain or other problems prevent the reception of a television signal. Even then, coverage can probably be obtained by using satellite technology.

However, after several decades of dominance the end of the broadcast network looks like it is in sight. The beginning of the end was probably signalled by the Video Cassette Recorder, which enabled people to record programs for viewing later. People were no longer tied to the schedule of a broadcaster, and if they wanted to watch something that was not on the schedule, they went to a store and hired it.

English: Toshiba stereo video cassette recorder. Japanese: Toshiba VHS video deck. (Photo credit: Wikipedia)

The video cassette stores look likely to have an even shorter lifetime than broadcast television itself. Of course most of them have switched to DVD as the medium, but that doesn’t make a significant difference.

What does make a difference is the Internet. Most people are now connected to the Internet in one way or another, and that is where they get a major part of their entertainment: music, news, films and games. Increasingly it is also where they get their TV-style entertainment, what would otherwise be called “TV series”.

English: Intertitle from The CW television program Nikita (Photo credit: Wikipedia)

TV companies produce these popular series; an example would be “The Big Bang Theory”. This show has run for years and is still very popular on television, but it is also available for download (legitimately) from one or more companies set up expressly to provide these series online, on the Internet.

In countries at the end of the world, like here, it can take months or even years for the latest episodes to be broadcast, if they ever are. So more and more people are downloading the episodes directly from the US, either legitimately or illegitimately.

This obviously hits the revenues of the companies that make these costly shows, so, equally obviously, they are trying to prevent this drain on their revenues. The trouble is that there is no simple way of ensuring that those who download these programs are paying for the service. If they are paying and the supplier is legitimate, then presumably the supplier is paying the show’s producers.

Once an episode is downloaded, then it is out of the control of the show’s producers. The recipient’s ethics determine if he will share it around to his friends or keep it to himself. If thousands of people (legitimately) download it, then presumably some of the less ethical will then share it on, and it soon becomes available everywhere for free.

icon for Japanese File-sharing program perfect dark. (Photo credit: Wikipedia)

It will at some stage reach a point where broadcasting a television program is no longer economic. The producers will have to distribute their programs primarily via the Internet and somehow limit or discourage sharing. That would mean the end of TV broadcasting as we know it.

We are not anywhere near that situation yet, and the program production companies will have to come up with a new economic model that allows them to make a profit on the shows without broadcasting them over radio waves. The more able companies will survive, although they may be considerably smaller. TV actors will only be able to demand much smaller salaries, and budgets will be tighter.

English: Captioned with “Professor A.W.H (Bill) Phillips with Phillip’s Machine.” Phillips was an LSE economist known for the Phillips curve and he developed MONIAC, the analog computer, shown here, that modeled economic theory with water flows. (Photo credit: Wikipedia)

Another factor that the program production companies will have to take into account would be loss of advertising revenue. Losing advertisers can scuttle a television show, so this is not a minor factor.

Whatever happens in the long term, as I said above, a new economic model is necessary. I’ve no idea what this will look like, but I foresee the big shows moving to the Internet in a big way.

SeeSaw (Internet television) (Photo credit: Wikipedia)

Broadcast TV will continue for some time, I think, as there are people who would resist moving away from it, but it is likely to be much reduced, with less new content and more reruns. Broadcast TV may be reduced to a shop window, with viewers seeing previews and buying a series with the push of a button on their smart TVs.


## Cooking

cooked in this case. I’d like to try the raw version even though this was good as is. (Photo credit: Wikipedia)

Most people have a hand in food preparation at some time in the day. Even those who subsist on “instant meals” will at least zap them in the microwave for the necessary amount of time. Some people, however, cook intricate dishes, for their own amusement or for friends and family.

Most people eat cooked food although there is somewhat of a fad for raw food at the present time. All sorts of diets are also touted as having some sort of benefit for the food conscious, all of which seem bizarre when one considers that many, many people around the world are starving.


Cooking can be described as applied chemistry, as the aim of cooking is to change the food being cooked by treating it with heat in one way or another. All the methods of treatment are given names, like “boiling”, “baking” or “roasting”. In the distant past such treatments were no doubt hit and miss, but these days, with temperature-controlled ovens and ingredients which are pretty much consistent, a reasonable result can be achieved by most people.

Chemistry Is What We Are (Photo credit: Wikipedia)

I’d guess that the first method of cooking was to hold a piece of meat over a fire until the outside was charred and much of the inside was cooked. However, human ingenuity soon led to spit roasting and other cooking methods. A humorous account of the accidental discovery of roasting a pig was penned by Charles Lamb. In the account the discovery came as the result of a pig sty accidentally catching fire; as the idea of roast pork spread, this led to a rash of pig sty fires, until some sage discovered that houses and sties did not need to be burnt down, and that it was sufficient to hang the pig over a fire.

English: Slow-roasting pig on a rotisserie (Photo credit: Wikipedia)

I’d suspect that while roasting may have been invented quite early by humans, cooking in water would have come along a lot later, as more technology is needed to boil anything. A container is required, and while coconut shells and mollusc shells can hold a little water, and folded leaves would do at a pinch, it was when humans invented pottery that the art and science of cooking advanced immensely.

Although the foods that we eat can pretty much all be eaten raw, most people would find cooked food much more attractive. Cooked food smells nice. The texture of cooked food is different from the texture of raw food. I expect cookery experts are taught the chemical reactions that happen in cooking, but I suspect that cooking breaks down the carbohydrates, the fats, and the proteins in the food to simpler components and that we find it easier to digest these simpler chemicals.


Maybe. That doesn’t explain why cooked food smells so much nicer than raw food. If food is left to break down by itself it smells awful, rotten, and with a few exceptions we don’t eat food that has started to decay.

Maybe the organisms that rot food produce different simpler components, or maybe they produce by-products that humans dislike. Other carnivores don’t seem to mind eating carrion, and maybe a rotting carcass smells good to them.

Carrion Crow (Corvus corone) (Photo credit: Wikipedia)

The rules of cooking, the recipes, have no doubt been developed by trial and error. It is likely that the knowledge was initially passed from cook to cook as an oral tradition. After all, cooking is likely to have started long before reading and writing were invented. Since accurate measurements were unlikely to be obtainable, much of the lore of cooking would have been vague, and a new cook would have had to learn by cooking.

However, once the printing press was invented, after all the bibles and clerical documents had been printed, I would not be surprised to learn that the next book to be printed would have been a cook book. I’ve no evidence for this at all though!

English: Fanny Farmer Cookbook, 1996 edition. French: Fanny Farmer cookbook, 1996. (Photo credit: Wikipedia)

Cooking changes the texture of meat and vegetables, making them softer and easier to eat. Connective tissues in particular are broken down, making a steak, for example, a lot more edible. Something similar happens to root vegetables: swedes, turnips, carrots and parsnips. These can be mashed or creamed once they are cooked, something that cannot be done to the rather solid uncooked vegetables.

Cooking is optional for some foods, berries and fruits for example. Apples can be enjoyed raw, when they have a pleasant crunch, or cooked in a pie, when they are sweet and smooth. Babies in particular love the sweet smoothness of cooked apple, and for many of them puréed fruits or vegetables are their first “solid” foods.


Chicken eggs are cooked and eaten in many different ways. The white of an egg is made largely of proteins such as albumin, and when these are cooked the white changes from translucent, almost transparent, to an opaque white. Almost everyone will have seen this happen when an egg is cracked into a frying pan and cooked until the clear “white” of the egg turns to the opaque white of the cooked egg.

Many other foods change colour to some extent when cooked, but the change in the white of the egg is the most apparent. Pair that with bread slightly carbonised on the outside and covered in the coagulated fat of cow’s milk (butter), and you have a common breakfast dish: fried eggs on toast.

English: Two slices of electrically toasted white bread on a white plate (Photo credit: Wikipedia)

There’s a whole other type of cooking, baking, that relies at least partly on a chemical reaction between an alkali (baking soda, or sodium bicarbonate) and an acid (often “cream of tartar”, which is weakly acidic). When the two are mixed in the presence of water, carbon dioxide gas is given off, leading to gas bubbles in the dough. When the dough is cooked the bubbles are trapped inside the stiffening dough, giving the baked cake its typical spongy texture.
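For the chemically minded, the overall reaction can be sketched like this (assuming the acid is cream of tartar, that is, potassium bitartrate; other acids such as lemon juice or buttermilk would give a different salt but the same carbon dioxide):

```
NaHCO3 + KHC4H4O6  →  KNaC4H4O6 + H2O + CO2 (gas)
sodium     potassium    potassium   water  carbon
bicarbonate bitartrate  sodium             dioxide
                        tartrate           (the bubbles)
```

Commercial baking powder is essentially this pair pre-mixed in dry form, which is why it only starts fizzing once it gets wet.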

Some cooking utilises biological reactions. When yeast, a fungus, is placed into a liquid containing sugar, it metabolises the sugar, releasing carbon dioxide and creating alcohol. In bread making this alcohol is baked off, though it may add to the attractive smell of newly baked bread. In brewing the alcohol is the main point of the exercise, so it is retained. It may even be concentrated by distillation.
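The fermentation the yeast performs can be summarised, in simplified form and assuming the sugar is glucose, as:

```
C6H12O6  →  2 C2H5OH + 2 CO2
glucose     ethanol    carbon dioxide
```

In bread the carbon dioxide raises the dough and the ethanol is driven off in the oven; in brewing it is the other way round, with the gas mostly escaping and the ethanol kept.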


I’ve just touched on a few highlights as regards the mechanisms of cooking (and brewing!), but I’ve come to realise as I have been writing this that there are many, many other points of interest in this subject. The subject itself has a name and that name is “Molecular Gastronomy”. A grand name for a grand subject.


## Crime and Punishment

The staircase at the National Museum of Crime & Punishment (Photo credit: Wikipedia)

I’m convinced that most people go through life making few real choices. Oh, we can all look back and say “Oh, I decided to do so-and-so”, but I’m convinced that we didn’t make a choice in the sense of sitting down, making lists, and considering alternatives, options and consequences. I guess the nearest we come to doing that is when we are budgeting, or deciding where to go on holiday.

No, our “choices” are driven by needs (“We need to go to the mall to buy….”) or desires (“Let’s eat at the Peppermill today. I had a great omelette there last week!”). Someone comes up with a need or desire and we go along with it or we don’t.


My point is that there is a thing, which I believe doesn’t exist, called “Free Will”, which supposedly allows a free choice between alternatives. A philosophical war has been raging between the believers in “Free Will” and the believers in “Predestination” for millennia.

It seems to me that the closer you look at the Free Will/Free Choice thing, the more you discover the reasons that people make the choices that they do. The more reasons there are, obviously, the less “free” the choice will be; and the more you dig, the more reasons you find and the less free the choice becomes. I contend that eventually the room for freedom of choice shrinks to nothing.


An interesting test would be to put people into a box with a screen and two buttons, and not give them any instructions except “Go into the box and sit down”. Maybe play them some elevator music to set the tone. When you pull them out after 10 minutes or so, they will have pushed zero, one or two buttons. If you then say, in a neutral tone, “You pushed zero/one/two buttons” (as the case may be), they will immediately begin to tell you their justifications for their action or inaction.

Justifications are not reasons. People often say something like “Well, you left me in there with no instructions. Buttons are for pushing, so I thought that I would push one and see what happens” or “Nobody told me to push the buttons, so I didn’t”.

Traffic light aid for the blind, Herzliya, Israel (Photo credit: Wikipedia)

These statements say little about the reasons for the person’s actions or inactions. The reasons that they press or don’t press the buttons relate more to the person’s character and state of mind at the time than to the justifications given. For instance, the person may be a rule follower, and without rules, would do nothing. Another person may be a rule breaker and, without rules, feels free to do whatever they wish. We are all a mix of both types, of course.

People don’t think “I’m a rule-breaker, I’ll push a button”, so they can’t really claim this as a reason for their choice, and they can’t be said to have made a free choice if constrained by this innate or learned facet of their behaviour.


Some people believe that, in spite of the postulated fact that there is only one possible outcome when a choice is made, a choice has in fact been made, since if the circumstances had been different a different choice would have been made.

This seems to me to be dodging the question. (It’s not “begging the question” in the strict usage of the phrase.) I look at it like this: if we were to roll back time to before the moment that a choice was supposedly made, such as the point when the door of the box closed, and let time roll forward again, could anything different happen? It is my contention that, since all other factors remain the same, the same thing would definitely happen.

Hebrew: General diagram of a Panopticon, a heterotopic structure (Foucault) (Photo credit: Wikipedia)

Which brings me to the point of this post, which is: how do we justify meting out punishment for a crime when the criminal was unable to choose not to commit it? Take away the concept of free will and punishment of the criminal seems, at first glance, cruel, unnecessary and unethical. Wikipedia gives four justifications for punishment.

Justifications for punishment include retribution, deterrence, rehabilitation, and incapacitation.

Of those justifications the first, retribution, is problematic in a predestined world. The criminal could not have not committed the crime, so revenge or retribution loses most of its point.

Image of “Dawn: Luther at Erfurt” which depicts Martin Luther discovering the doctrine of Justification by Faith. (Photo credit: Wikipedia)

However, retribution is rolled up into deterrence. If other criminals see what happens to the criminal in question, they will possibly be less likely to commit similar crimes. In other words, the reluctance to suffer the consequences becomes part of their character, so that when the chance to commit a similar crime arises, this factor comes into play and they do not do it.

Similarly the criminal in question will be deterred (one hopes) from committing the crime again. He will hopefully be rehabilitated, and the punishment for his current crime will influence him when the possibility of committing a similar crime turns up. The punishment is in his memory and is part of his personality, and could be a reason for not committing the crime in the future. He may claim, in the future, that he “chose” not to repeat his crime, but in fact he could not choose to do it, because of his personality and his memory of his punishment.

“A Dream of Crime & Punishment”, engraving by J.J. Grandville, as reproduced in “Harper’s Magazine” shortly after Grandville’s death in 1847. “It is the dream of an assassin overcome by remorse.” (Photo credit: Wikipedia)

If the punishment results in a prison sentence then of course he cannot commit the crime or similar crimes. Wikipedia uses the term “incapacitated”, and indeed that is so if he is imprisoned. An execution is a pretty final way of “incapacitating” a criminal, and in many justice systems it is the ultimate punishment for severe crimes.

In the past, in many countries, the criminal was tortured before execution, a process which horrifies us these days but which seemed justified at the time. In at least some of these cases the intent was to “drive out” evil influences.

Evil Twin (Photo credit: Wikipedia)

The past crimes of others and their subsequent treatment, whatever it was, also serve to warn and influence others who might otherwise have committed similar crimes. So even in a predestined world, punishment would have a deterrent effect on others. In fact the only difference between a universe which allows for free will (somehow) and a predestined universe is the idea of “blame”.

The “free will” universe blames the wrongdoer, but the predestined universe doesn’t as the wrongdoer could not do otherwise than he did. There are still reasons for punishment in a deterministic universe in spite of that.

Incompatibilists agree that determinism leaves no room for free will. As a result, they reject one or both. (Photo credit: Wikipedia)