While AI may be one of the biggest deals in science, technology, and popular culture today, it’s actually far from a new concept: the earliest ideas of artificial beings and human-like machines stretch back to ancient Greece.

Nevertheless, the application of increasingly intelligent machines to the growing influx of digital information and data is accelerating faster than ever.

With advances in the fields of computing and neuroscience further closing the gap between science fiction and reality, it can be helpful to take a moment to look at how we got to where we are today with AI, and at some of the key moments in the quest to develop true machine intelligence.

A (Very) Brief Timeline of Artificial Intelligence

1843

Modern computing is born
The roots of all modern computing can arguably be traced back to the work of Ada Lovelace, often regarded as the world’s first programmer, who developed an algorithm for calculating Bernoulli numbers on a theoretical computing machine that had been proposed (but never actually built) by the English polymath Charles Babbage. In her notes on the proposed machine, known as the Analytical Engine, Lovelace also became one of the earliest critics of what would later come to be known as artificial intelligence, objecting that:

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”

1948

Beginnings of information theory are explored
Often thought of as the father of the information age, Claude Shannon published a paper called A Mathematical Theory of Communication, in which he laid the groundwork for what we now know as the field of information theory. While the details of information theory are complex, the basic premise is that any information can be encoded and transmitted as a sequence of binary digits (bits). The implications of this for AI were huge: it suggested that artificial intelligence could be approached by trying to simulate the way the human brain processes information and replicating those processes with technology.
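
While Shannon’s paper is about communication rather than AI as such, the core idea of measuring information in bits is easy to illustrate. The minimal Python sketch below (not taken from the paper; the example strings are invented) estimates the Shannon entropy of a message, i.e. the average number of bits per symbol needed to encode it:

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average number of bits per symbol needed to encode the message,
    estimated from the observed symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A highly repetitive message needs fewer bits per symbol
# than one in which every character is different.
print(shannon_entropy("aaaaaaab"))   # low entropy (~0.54 bits/symbol)
print(shannon_entropy("abcdefgh"))   # high entropy (3.0 bits/symbol)
```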

1950

Turing Test is outlined
The brilliant British mathematician Alan Turing published a paper called Computing Machinery and Intelligence in which he set out a way of detecting and differentiating genuine intelligence from imitated intelligence. In the first section of the paper, Turing argued that if a machine acts as intelligently as a human, then it effectively is as intelligent as a human. For example, if you find yourself chatting to an AI representative in an online live chat without realising that the other correspondent isn’t a human being, then the bot (or machine) can be said to have passed the Turing test and, under this definition, can be considered intelligent.

1951

First artificial neural network is created
Marvin Minsky and Dean Edmunds built the Stochastic Neural Analog Reinforcement Calculator (SNARC). Using 3000 vacuum tubes, SNARC simulated a network of 40 neurons, effectively making it the first artificial neural network.

1956

Foundations for artificial intelligence as a discipline are laid
What we think of as artificial intelligence began back in 1956 when Dartmouth professor John McCarthy invited a small group of scientists to get together over the summer and talk about ways to make machines do cool things. “We think that a significant advance can be made,” McCarthy wrote, “if a carefully selected group of scientists work on it together for a summer.” This workshop laid the foundation for an academic field for people who dreamed about intelligent machines. In the beginning, work focused on solving abstract math and logic problems but it quickly became apparent that computers were able to perform more human-style tasks.

1957

Perceptron is developed
Frank Rosenblatt developed the Perceptron, a primitive two-layer artificial neural network capable of simple pattern recognition.
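
To give a flavour of how simple the underlying idea is, here is a minimal, hypothetical Python sketch of the classic perceptron learning rule on a toy problem. It illustrates the general technique rather than Rosenblatt’s original implementation, which was built in hardware:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron learning rule on 2D inputs with labels 0/1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            # Step activation: fire if the weighted sum crosses the threshold.
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = label - prediction
            # Nudge the weights towards the correct answer.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Toy example: learn a logical AND of two inputs.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_data)
print(weights, bias)
```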

1962

A computer is programmed to play checkers
One of the pioneers of AI was Arthur Samuel, who created computer programs that learned to play checkers. He also coined the term ‘machine learning’. In 1962, Samuel’s checkers-playing program scored a win against a master checkers player.

1965

The chatbot Eliza is created
Joseph Weizenbaum created Eliza, an early natural language processing program (aka chatbot). Eliza posed as a psychotherapist by turning the statements typed to it back into questions. Although Weizenbaum initially created it to show the superficiality of communication between man and machine, he was surprised to discover how many people attributed human-like feelings to Eliza.
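
To give a rough sense of how this works, the following is a heavily simplified, hypothetical Python sketch of Eliza-style pattern matching and pronoun ‘reflection’; the rules shown are invented and far cruder than Weizenbaum’s original script:

```python
import re

# Swap first-person words for second-person ones so the reply reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# A few illustrative rules: a regex to match the user's statement and a
# template that turns it back into a question.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]

def reflect(fragment: str) -> str:
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(eliza_reply("I feel anxious about my work"))
# -> "Why do you feel anxious about your work?"
```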

1972

Rise of expert systems
Researchers at Stanford University developed MYCIN, an expert system which was able to help identify the bacteria behind serious infections and recommend suitable antibiotics for treatment.

1975

The first system to automate decision-making and problem-solving processes is created
A program called Dendral replicated the way chemists identified unknown organic molecules by interpreting mass-spectrometry data. Dendral was the first system to automate decision-making and problem-solving processes, and it produced the first computer-generated discoveries to be published in a refereed journal.

1981

Fifth Generation Computer project is launched
The Fifth Generation Computer project was launched by the Ministry of International Trade and Industry in Japan. With a budget of $850 million, the project aimed to develop computers capable of language translation, picture interpretation, and even human-level reasoning.

1986

Introduction of back-propagation
In their paper “Learning representations by back-propagating errors”, David Rumelhart, Geoffrey Hinton, and Ronald Williams laid the foundations for back-propagation as a new learning procedure.
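In very condensed form, back-propagation applies the chain rule to pass the output error backwards through the network so that every weight receives a gradient. The toy Python sketch below (a single input, one hidden unit, and one output unit, with invented numbers; not the authors’ formulation) shows the idea:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A toy network with one input, one hidden unit and one output unit.
# Back-propagation is just the chain rule: the output error is passed
# backwards through each unit to obtain the gradient for its weight.
w1, w2 = 0.5, -0.3   # arbitrary starting weights
lr = 0.5
x, target = 1.0, 0.8

for step in range(2000):
    # Forward pass
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)

    # Backward pass (derivatives of the squared error 0.5 * (y - target)^2)
    delta_out = (y - target) * y * (1 - y)        # error signal at the output unit
    delta_hidden = delta_out * w2 * h * (1 - h)   # error propagated back to the hidden unit

    # Gradient-descent updates
    w2 -= lr * delta_out * h
    w1 -= lr * delta_hidden * x

print(round(sigmoid(w2 * sigmoid(w1 * x)), 3))  # converges towards 0.8
```
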

1987

The first ‘self-driving’ car is debuted in Germany
Scientist Ernst Dickmanns kitted out a Mercedes van with two cameras and a bank of computers, and the vehicle then drove itself along a German highway at 90 kilometers per hour (around 56 mph). While not comparable to the self-driving vehicles under development today, this early experiment was undoubtedly a sign of things to come.

1997

A computer beats world chess champion Garry Kasparov
IBM’s computer Deep Blue defeated world chess champion Garry Kasparov. Kasparov demanded a rematch but IBM declined.

2000

Robot able to recognize human emotions is developed
The robot Kismet is developed by Cynthia Breazeal of MIT. It is capable of recognizing and simulating human emotions.

2004

The Pentagon launches a robot car race challenge
The Pentagon launched its first DARPA Grand Challenge, a race for robot cars in the Mojave Desert that kick-started the current self-driving car craze.

2011

A landmark in visual recognition is reached
The German Traffic Sign Recognition competition was won by a convolutional neural network which scored an accuracy of 99.46%, compared to 99.22% by humans.

2012

Deep learning renaissance
Advances in deep learning make speech and image recognition significantly more accurate, prompting a renaissance in the field fueled by new corporate interest.

2016

Computer beats world champion Go player
AlphaGo defeated Lee Sedol, the world champion player of the board game Go, doing “a very human thing even better than a human.”

2017

Deep learning system beats Ms. Pac-Man on the Atari 2600
Using deep reinforcement learning, AI company Maluuba created a system that learned how to reach Ms. Pac-Man’s maximum point value of 999,990.

Artificial Intelligence – The Story So Far

While the idea of a non-human machine intelligence had existed conceptually in one form or another for centuries, it was only in the mid-20th century that technology advanced to the point where developing the idea in practice became a reality.

Alongside the concurrent rapid growth in computational science and technology, the history of artificial intelligence as a field really begins during the 50 years preceding the turn of the millennium. This period undoubtedly laid the groundwork for the field as we know it today.

Origins of Modern AI

When talking about the history of artificial intelligence, the story looks like a broad arc of developments culminating in John McCarthy’s 1956 Dartmouth summer research project, where the term ‘artificial intelligence’ was coined. This period laid the foundations for what we think of today as modern artificial intelligence and is the starting point for all of the work that has been carried out since.

McCarthy is widely viewed as one of the founding fathers of artificial intelligence, and the groundwork was laid even before the Dartmouth research project took place.

In a proposal drafted a year earlier, McCarthy, alongside Claude Shannon (of information theory fame), Marvin Minsky (a pioneer in computational neural nets), and Nathaniel Rochester (who designed IBM’s first commercial scientific computer), outlined seven aspects of artificial intelligence that would have to be addressed in order to create true AI.

The aspects that required solving were:

  1. Automatic Computers – To simulate the higher functions of the human brain
  2. How Can a Computer be Programmed to Use a Language – To address the issue of what we now call natural language processing
  3. Neuron Nets – To arrange artificial neurons so they can form concepts
  4. Theory of the Size of a Calculation – To measure the complexity of a problem
  5. Self-Improvement – To gain a level of intelligence that allows for activities of recursive self-improvement
  6. Abstractions – To break complex topics down and abstract them using sensory and other forms of data
  7. Randomness and Creativity – To ‘inject’ a degree of randomness (guided by intuition) to encourage ‘creative’ as opposed to simply ‘competent’ thinking.

In the years after the Dartmouth conference, artificial intelligence enjoyed a period of rapid uptake and interest, helped by simultaneously explosive growth in computing which took place in the second half of the 20th century.

Progress in both computer hardware and software, alongside the burgeoning field of computer science, allowed for the development of algorithms based around AI theories which could now be tested on increasingly capable platforms.

As optimism around the field continued to grow, Marvin Minsky famously stated in 1967 that he believed:

“Within a generation… the problem of creating an artificial intelligence will substantially be solved.”

While we now know this was an overly-optimistic prediction, it’s easy to see why this was the prevailing view at the time given early successes in the field.

Cool Down

One of the biggest problems with over-promising is that when the results fail to live up to the hype, people inevitably lose interest. This is exactly what happened in the mid-70s, when progress in AI fell short of the overly ambitious promises that had been made. Disillusionment over the actual results led to a significant decrease in both interest and funding.

There are many reasons for this, but some of the major factors included:

  • The relative infancy of computer science – The field was so young that it simply wasn’t known which AI problems could eventually be solved and which ones couldn’t.
  • The constraints of hardware – The technology and performance of the available equipment restricted what could be done; inadequate computational power and memory significantly reduced the potential to advance further.
  • The absence of data – In the 20th century there were no substantial sources of data (compared to today) for data-hungry AI applications such as computer vision or NLP.
  • The evolving definition of AI – As John McCarthy so aptly put it back in 1980: “As soon as it works, no one calls it AI anymore.” This highlighted one of the perennial problems in the development of artificial intelligence: the metric for success shifted upwards as soon as a milestone was reached, effectively moving the goalposts further away the more progress was made.

Rise of Expert Systems

Following the cool-down in interest around AI, the development of expert systems represented a turning point in both sentiment and the direction of progress. By encoding knowledge from domain experts as rules, expert systems were able to solve narrowly defined, topic-specific problems effectively.

This focus on developing specific areas of intelligence (narrow AI), as opposed to the quest for an all-encompassing general intelligence (strong AI), made it significantly easier to reach intelligence goals within a particular domain and to develop AI capable of addressing a single task or problem.

The success of expert systems soon led to commercial interest, and as developments in this area progressed, funding quickly began flowing back into the field.

Applications of early expert systems ranged from government use through to commercial deployment in fields such as medicine and finance, and in 1988 the chess computer Deep Thought (a project later taken on by IBM) became the first machine to defeat a grandmaster in tournament play when it beat Danish grandmaster Jørgen Bent Larsen.

Second AI Winter

While expert systems initially received a positive reception, the limitations of these systems soon became apparent – as did the fact that there were other commercial alternatives which effectively carried out the same job at a lower cost.

The financial pressure became even more acute by the late 80s, as the economic fallout from the Black Monday stock market crash led to a freeze in new investment and funding.

As with the first cooling period in AI development, it was perhaps unsurprising that another would follow the latest bout of high expectations and the resulting boom in interest and funding. Again, the problem was one of over-promise and under-delivery.

This time around, the shortcomings of expert systems, particularly the difficulty of scaling them to increasingly advanced and complex problems, soon became apparent, and the lack of a solution to these problems highlighted the ceiling that AI had reached.

Limitations in physical computing power imposed a similar cap on AI development as in the previous ‘winter’: despite computational performance advancing in line with Moore’s Law, there simply wasn’t room to push the field forward without more powerful hardware.

Machine Learning Revolution

As with previous cool periods, it wasn’t long before the scene began to heat up once again. This time, however, things were starting to look different. Driven by advances in hardware and by developments in fields such as communication and data science, funding began to increase and AI research saw a resurgence.

With better and more affordable equipment and technologies at hand, alongside watershed moments such as the advent of personal computing, the explosive growth of the internet, and the prolific adoption of smartphones worldwide, the landscape for the next stage of artificial intelligence began to fall into place.

Increasingly ubiquitous computing and the almost unbelievable amounts of information now being produced every second with the arrival of big data created fertile ground for a revolution in the development of AI and allowed for progress in the interrelated field of machine intelligence.

Shortly after the start of the new millennium, the arrival of super-fast chips and an endless stream of data began bringing the previously bottlenecked algorithms of neural net pioneers like Geoffrey Hinton to life. What followed has been the machine-learning-fueled boom we’re currently experiencing.

Suddenly, computers could understand human speech and identify what was in an image. It seemed as though this was merely a taste of what was possible as AI became progressively smarter.

The age of machine learning had arrived.

The Future of AI

So the question is, will the development of AI continue to follow the same pattern as before, with the current growth leading to another inevitable cool down?

Are artificial intelligence breakthroughs doomed to a perpetual cycle of boom and bust? Are we currently just in an upswing?

Well, maybe. However, it’s very possible that this time is different.

For example, the current exponential growth in AI is largely being fueled by an unbelievable abundance of ever-growing data that we now have access to and which simply didn’t exist until very recently.

Additionally, the power of today’s hardware and the speed of communications infrastructure are supporting rapid development in the field, whereas these factors have historically been bottlenecks.

If you want to find out more about the recent history of AI, where it is right now, and where it’s going, then check out this reading list which highlights some of the best books for quickly learning about the topic and provides a perfect jumping off point for further reading.
