On a planet packed with diverse life forms, human intelligence is the unique attribute which sets us apart. We have no competition. Among the mammals, we are the only creatures who have developed the skills to use the resources around us to change our environment and the way in which we live. Sometimes this has been achieved by individual skill, sometimes by collective effort and combined intellect. Now, for the first time, we appear to be challenged in our primacy by a new competitor of our own devising: AI, or artificial intelligence.

A recent article by Marilyn Sheridan asked whether AI was capable of taking over the world. Before answering, we should take a step back, define the argument and examine the facts, so as to put this easy-sounding acronym in perspective; we are in danger of taking its literal meaning the wrong way. Fashion and ignorance should not be allowed to lay false claims on a technology which appears to matter so much to the future of the human race.

The story should start in the world before electronic computers, when data collection and analysis were done by real people: information was gathered, stored in filing cabinets and indexed on card systems which could then be searched to provide answers in particular fields. Police forces, manufacturers and libraries were among the early users. In the age of paper, analysis toward a result depended on time and labour, and this ant-like activity had its limits.

The story then moves on to Bletchley Park in 1939 and the requirement to break German wartime codes within a timescale in which the information could still be useful. The volume of signals traffic, using multiple and sometimes unrelated coding systems, was massive and, although the traffic could be recorded by human monitors, it required hundreds of people to filter the sources before passing the incomprehensible letter sequences to the code breakers. The German mechanical coding machines and cipher books were designed to encrypt any message without ever enciphering a letter as itself. The possible combinations ran into the billions. The process of decryption was a vast crossword puzzle with no clues.

The main decoding method involved trying to work back to the mechanical settings of the originating machine, a tedious process often relying on luck or intuition. Within this endless dark forest of machine-generated code, a break often depended on a machine defect, an operator error, a repeated word or sequence, or a simple mistake that could be matched against another message. This was pretty much beyond human effort. In the late 1930s Polish mathematicians developed an electromechanical machine called the bomba to analyse the coded messages more rapidly, and when war broke out this first ‘computer’ of sorts was gifted to the Bletchley Park code breakers, who developed it and built their first Bombe in March 1940. It was an instant game changer, allowing the British to read a variety of messages in useful time.

The next step was the creation of an even better and faster machine, semi-programmable and built with electronic valves and punched tape. This machine, code-named Colossus, became operational in February 1944: the world’s first electronic data-analysis computer. Its relevance to the world we know today is that Colossus could relieve humans of the drudgery of sifting through data to find and relate code sequences to achieve a result. The machine did not create the method to crack a code. The ‘cracking’ was achieved by human code breakers and fed into Colossus through a punched-tape ‘programme’ which managed the sequencing and filtering of the information or data. Colossus was less ‘intelligent’ than a modern washing machine. The development of what we now call computers remained at this level for decades: analytical machines which could process and filter vast amounts of data in ever quicker and more complex ways using four basic office processes, namely data (the filing cabinet), a programme (the task), analysis (the card index) and output (the report). When the task finished, the machine stopped. This is what Colossus did and what computers do today, albeit in a much more sophisticated way. Without a programme, data and a task they don’t function – or can they?
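As a toy illustration of that four-step model, here is a minimal sketch in Python. The data and the task are invented for the example; the point is only that the machine filters, reports and then stops.

```python
# Toy illustration of data -> programme -> analysis -> output.
data = ["ALPHA", "BRAVO", "ALPHA", "CHARLIE", "ALPHA"]   # the filing cabinet

def programme(records):                                  # the task
    counts = {}
    for record in records:                               # the card index:
        counts[record] = counts.get(record, 0) + 1       # analysis step
    return counts

report = programme(data)                                 # the report
print(report)   # {'ALPHA': 3, 'BRAVO': 1, 'CHARLIE': 1}
# ...and then the machine stops. Without data and a programme, nothing happens.
```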

Because modern computers have access to immense amounts of data, do not forget, and never make mistakes if programmed and maintained correctly, the tasks they perform within a given timescale can be made ever more complex and useful. But until proven otherwise they are simply data processors, dependent on and tied to the programmers who set the chores that lead to a result, however complex the task may appear.

One of the most publicised tests given to the emerging computers in the 1980s was chess. This centuries-old game is complex and well documented in terms of the many moves and gambits humans have evolved to achieve dominance and success. The possible moves and countermoves are seemingly endless, which leaves a human player vulnerable to tiredness, or to a simple lapse of recall and connection, when playing against an analytical machine which absorbs information, does not forget, does not tire and is not threatened by failure. A modern supercomputer can be programmed to analyse moves and countermoves to enormous depth. It does not have to ‘think’, just process and analyse; but because chess players are regarded as ‘intelligent’, the audience may attach that description to a dumb, relentless analytical machine.
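The kind of deep move-and-countermove analysis described above can be sketched in a few lines. This is a generic, illustrative minimax search, not the code of any real chess engine; the Game interface (legal_moves, play, evaluate, is_over) is invented for the example.

```python
# Illustrative depth-limited minimax: 'analyse moves and countermoves',
# nothing more. The Game interface used here is hypothetical.
def minimax(game, depth, maximising):
    if depth == 0 or game.is_over():
        return game.evaluate()              # static score of the position
    scores = []
    for move in game.legal_moves():         # every legal reply at this level
        child = game.play(move)             # position after the move
        scores.append(minimax(child, depth - 1, not maximising))
    return max(scores) if maximising else min(scores)

def best_move(game, depth=4):
    # Pick the move whose resulting position scores best for our side.
    return max(game.legal_moves(),
               key=lambda m: minimax(game.play(m), depth - 1, False))
```

Nothing in this loop understands chess; it scores positions and compares numbers, which is all the relentless analytical machine needs to do.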

A more interesting gaming experiment was carried out by the AI company DeepMind, which chose the ancient Chinese game of Go to publicise its skills in the new era of ‘artificial intelligence’. In March 2016 the company arranged a head-to-head match with the Go world champion Lee Sedol. The programmers equipped their machine with an analysis of 6,000 Go openings from 230,000 human games, together with their winning ratios, and 10,000,000 simulations from expert players to back up its analytical ability.

What we are looking at here is a huge analytical, relational database which handles information and options to home in on an optimal solution, in this case to an ancient Chinese board game in which the players use black and white stones to define and ‘capture’ territory, a game reputed to be much more difficult than chess. Does the machine have to be clever or simply thorough? In the AlphaGo trial a single human master was up against a dedicated Go computer and a programming team of over 20 people supporting and tweaking a machine equipped with a programme designed to weigh up its move options. After winning the first three games the computer lost the fourth when, at move 78, the machine proved fallible. The reason was simple: the frustrated human made a move so unexpected, so seemingly illogical, that it could not be understood within the high-level logic for which the computer had been programmed. DeepMind commented:

“Before move 78, AlphaGo was leading throughout the game, but Lee's move caused the program's computing powers to be diverted and confused. Huang explained that AlphaGo's policy network of finding the most accurate move order and continuation did not precisely guide AlphaGo to make the correct continuation after move 78, since its value network did not determine Lee's 78th move as being the most likely, and therefore when the move was made AlphaGo could not make the right adjustment to the logical continuation.”

This makes AlphaGo an analytical machine, not a thinking machine. It depends on its programmers and, when they fell short, it was lost. The human did well.
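In schematic terms, the mechanism the quotation describes looks something like the following. This is only a sketch of the idea of a policy network proposing moves and a value network scoring them; the function names are invented stand-ins, not DeepMind's code, and the real system wraps these networks in a far more elaborate tree search.

```python
import math

# Sketch of policy-plus-value move selection. policy_net and value_net are
# hypothetical stand-ins: one suggests likely moves, the other scores positions.
def choose_move(position, policy_net, value_net, top_k=5):
    priors = policy_net(position)                  # {move: probability}
    # Only moves the policy network rates as likely are examined at all,
    # which is why a human move it considers 'unlikely' can escape the search.
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]
    best_move, best_score = None, -math.inf
    for move in candidates:
        score = value_net(position.play(move))     # estimated chance of winning
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

Seen this way, move 78 did not so much ‘beat’ the machine as fall outside the set of moves its networks had been trained to expect.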

So, how would the legendary ‘person in the street’ define AI? Perhaps he or she would describe artificial intelligence in terms of machines that can think like humans. Is this possible, given that computers are limited to working with past and current data? In my agnostic view, inventiveness, unpredictability, impulsiveness, randomness, erratic behaviour, originality, intuition, a sixth sense, wisdom, foresight, guesswork, morality, teamwork, enlightened self-interest, emotion, humour and wit are all human traits and will never be part of machine AI. But I do believe that machine-based expert systems programmed to process information from large databases are useful tools, especially in medicine and law, the latter to ensure even-handed application without the bias of wealth or social position. So, while computers can excel at processing vast amounts of data and performing specific tasks, the realm of human creativity and intuition is unparalleled. The human touch, with all its quirks and qualities, adds a unique flavour to our existence, with the magic that comes from human imagination and the nuances of our experiences.

It's the blend of the human and the expert system that seems to hold the most promise for the future. For this blend to be used and appreciated properly, we need a more realistic description. I think the misleading acronym AI should be replaced with the term Expert Data Analysis (EDA), to alert the human race to the true role of computers as analytical machines in the service of humans.

There are real dangers in believing that AI can exist, because this can lead to humans complacently leaving important decisions to machines that are in fact driven by programmes written by humans, who can be somewhat unreliable if not downright wacky and dangerous. We already live under surveillance from multiple sources. Security and speed cameras watch our every move, while our phones and cards capture information about our location and spending. Social and communication media can be accessed to give in-depth information about our personal lives. All this data can be collected, quickly collated and analysed by the new computer power; and, of course, it’s all for our own safety and security. This dystopian scenario was predicted in George Orwell’s 1949 novel 1984. Terry Gilliam waved the same warning flag in his classic film Brazil. Kubrick’s Dr Strangelove ends with the unstoppable Doomsday machine destroying the world because unreliable humans cannot be trusted to react quickly and logically to the ultimate threat, even when the humans recognise that the threat is not real.

One feature of modern life is the dependency on computers in aircraft flight control systems. They have sometimes proven fallible, requiring human judgement and intervention to save a life-threatening situation, as with the landing in the Hudson River. In 2018 and 2019 two crashes of the Boeing 737 MAX killed 346 people because “As an automated corrective measure, the MCAS was given full authority to bring the aircraft nose down, and could not be overridden by pilot resistance.” The MCAS programme, of which the pilots were unaware, was doing its job with data from a faulty angle-of-attack sensor. The real fault lay with Boeing management and its implementation of a flight programme which cut the pilots out of the loop. Across the board, do you want AI to do that? Here lies the danger of depending on AI or anything like it.
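The failure pattern described here, an automated corrective loop that trusts a single sensor and cannot be overridden, can be sketched in a few lines. This is purely illustrative; nothing below is Boeing's code, and the sensor name, threshold and trim value are invented.

```python
# Purely illustrative: an automated trim loop with full authority, a single
# sensor input and no pilot override. Names and numbers are invented.
STALL_AOA_DEG = 15.0          # assumed angle-of-attack threshold

def automated_trim(aoa_sensor_deg, pilot_pulling_up):
    """Return a nose-down trim command in degrees (0.0 means no action)."""
    if aoa_sensor_deg > STALL_AOA_DEG:
        # Full authority, no cross-check against a second sensor;
        # pilot_pulling_up is deliberately ignored, which is the danger.
        return 2.5
    return 0.0

# A sensor stuck at a false 22 degrees keeps commanding nose-down trim
# no matter what the pilot does:
for _ in range(3):
    print(automated_trim(aoa_sensor_deg=22.0, pilot_pulling_up=True))
```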

In my own field of offshore sailing we used an onboard computer to give us an optimal path through the forecast weather. If the digitised isobars were not closed, the computer failed. If we filled the gaps with our own solution, the data was changed and the prediction was wrong. Rather like sheep in a pen where the gate has been left open, the computer became confused and unpredictable. Move 78. If we allow computers to wander off and make decisions ‘off programme’, what will they do? Where will their logic come from? Is human unpredictability safer than machine unpredictability?
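The safer behaviour, refusing to guess rather than wandering off with invented data, is easy to sketch. The routing code below is hypothetical, not the software we used; an isobar is treated simply as a list of latitude and longitude points.

```python
# Hypothetical sketch: validate the input instead of guessing.
def is_closed(isobar, tolerance=1e-6):
    """A digitised isobar is 'closed' if it ends where it began."""
    (lat0, lon0), (lat1, lon1) = isobar[0], isobar[-1]
    return abs(lat0 - lat1) < tolerance and abs(lon0 - lon1) < tolerance

def plan_route(isobars, route_solver):
    open_ones = [i for i, iso in enumerate(isobars) if not is_closed(iso)]
    if open_ones:
        # Hand the problem back to the humans rather than inventing data.
        raise ValueError(f"isobars {open_ones} are not closed; fix the input")
    return route_solver(isobars)
```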

If we could create an “AI” computer that can generate its own objectives from a worldwide database of all news and knowledge, would it be able to judge and filter fake news and false images, or make sense of the convoluted, illogical processes within political and economic disinformation, and reach judgements in the way that a cynical, informed human can? Captain Kirk needed Mr Spock and vice versa, but the Captain had the power of command over the unimaginative logic of Spock. This is the situation with so-called AI. The lesser, but more realistic, concept of Expert Data Analysis (EDA), of computers as a tool for mankind, has to prevail if we are going to maintain our control, status and freedoms. AI could see us as a virus – what then?

For fun, ask ChatGPT whether it sees AI as a system capable of taking over decision making from mankind.

Chris Freer is a retired engineer living in the Algarve. He has published books on yacht design, philosophy and aerodynamics. He has been involved with computers and programming in all their forms since 1977.

Knowledge is a database, but it’s what you do with knowledge that counts.