
What is artificial intelligence?

This article explores the different perspectives on and definitions of AI, and offers a brief history of where it all began.

When you think of AI, all kinds of things might come up. You might think about voice assistants, self-driving cars, algorithms that produce search results on websites, or humanoid robots. Some of you might think of underlying hardware or related technologies, others of futuristic applications of AI, and the benefits and risks they can bring.

In this article, we will take a closer look at what AI is. That turns out not to be so easy: just as human “intelligence” is not easily defined, there are several different perspectives on and definitions of “artificial” intelligence.

The history of AI

The history of artificial intelligence goes back a long time. As early as the 1950s, scientists, mathematicians, and philosophers were exploring the possibility of building machines that could perform cognitive tasks.

Ever since, the field has been in a process of continuous change and development. Some fields of study, such as statistics and data analytics, have now come to be considered part of AI.

At the same time, some specific applications that we are all too familiar with, such as spam filters and search engines, are not seen as typical examples of AI anymore. AI could even be defined as “the things that machines cannot yet do”.

The Turing test

This links to the Turing test: the famous test, devised by the English mathematician Alan Turing (1912-1954), to determine the ‘intelligence’ of a machine. In this test, an evaluator exchanges written messages with both a human and a machine, without being able to see who is the human and ‘who’ is the machine.

If the evaluator cannot tell which is the human and which is the machine in this interaction, the machine has passed the test. By now, we can conclude that many systems have passed it.

Due to the increased processing power of computers and the increased amount of data (‘big data’), specific AI techniques called machine learning and deep learning (see the glossary for more explanation) have really taken off.

AI can outsmart people

In fact, when it comes to some specific skills, AI systems have actually ‘outsmarted’ people. In 1997, for instance, IBM’s chess-playing computer Deep Blue defeated world champion Garry Kasparov, and in 2017 Google’s AlphaGo beat the world’s top Go player Ke Jie.

In everyday situations, too, computers have developed the ability to behave in human-like ways: think of the Google Assistant calling a hairdresser to make an appointment.

A cluster of technologies

It is important to realise that AI is not just one single technology. Often, various technologies are needed to enable a system or machine to exhibit intelligent behaviour in a specific environment.

Therefore, we should not separate AI from other digital technologies, such as robotics, the Internet of Things (IoT), digital platforms, biometrics, virtual and augmented reality, persuasive technology, and big data. The self-driving car, for instance, is a combination of IoT, robotics, and AI.

Automated perception

Moreover, within the field of AI, there are various areas of development. One rapidly developing area is ‘automated perception’, with subfields such as reverse image search, face recognition, deepfakes, and image-based medical diagnostics.

Natural language processing

Another influential area is ‘natural language processing’, in which AI systems are trained to work with written and spoken language, for instance through speech-to-text rendering, automatic summarisation, and machine translation.

Automated judgement

A third important field concerns ‘automated judgement’, with impactful subfields such as spam detection, content recommendation, hate speech detection, and even automated essay grading.

Some of these areas are highly contested, such as ‘socially predictive AI’, which is used, for instance, to predict recidivism, job success, terrorist risk, or which children are at risk.

The ability to learn

A central feature of new AI systems is their ability to learn. Some AI systems use ‘machine learning’ for this: by means of algorithms, they process large amounts of data and draw conclusions or make predictions about the world on that basis.
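
To make this concrete, here is a minimal sketch of machine learning in this sense: an algorithm is shown many labelled examples and then makes predictions about data it has not seen before. The dataset and model below (scikit-learn’s small handwritten-digits set and a logistic-regression classifier) are illustrative assumptions, not systems discussed in this article.

```python
# Illustrative machine-learning sketch: learn from labelled examples,
# then predict labels for unseen data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # small dataset of 8x8 handwritten-digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a simple learning algorithm
model.fit(X_train, y_train)                # 'learn' from labelled examples

print("accuracy on unseen images:", model.score(X_test, y_test))
```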

Other systems use ‘deep learning’, which stacks several ‘layers’ of machine learning on top of each other, loosely mimicking the functioning of the brain: it organises parallel processes of interpretation in ways similar to the connections between neurons.
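
As a rough illustration of these ‘layers’, the sketch below passes an input through several stacked transformations, each building on the output of the previous one. The weights here are random placeholders; a real deep-learning system would learn them from data, and the analogy with neurons is loose.

```python
# Toy illustration of stacked 'layers' of processing.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_outputs):
    """One fully connected layer with a simple non-linearity (ReLU)."""
    weights = rng.normal(size=(inputs.shape[-1], n_outputs))  # placeholder weights
    return np.maximum(0, inputs @ weights)

x = rng.normal(size=(1, 64))   # e.g. a flattened 8x8 image
hidden1 = layer(x, 32)         # first layer of interpretation
hidden2 = layer(hidden1, 16)   # second layer builds on the first
scores = layer(hidden2, 10)    # final layer: one score per class

print("predicted class:", int(scores.argmax()))
```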

Certain AI systems are easy to fool. Changing just a few pixels, or pasting a visibly distorting pattern over an image, can make a smart system fail to recognise that image or come up with a silly answer to a question.

Automatic image recognition

Due to a bit of “noise”, for instance, automatic image recognition can suddenly classify a panda as a gibbon. Things that are easy for a human being can be very difficult for an AI system: the fact that a six-year-old child will not mistake a panda for a gibbon, and is able to understand depth and shadows in 2D drawings, is the result of a long process of brain development.
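
The sketch below gives a rough, simplified version of this ‘noise’ effect: nudging an image’s pixels slightly in a direction that favours a wrong class can flip the prediction of a simple classifier. The dataset, model, and attack are illustrative assumptions, not the actual panda/gibbon system.

```python
# Rough sketch: small pixel changes can flip a simple classifier's prediction.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
model = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

image = digits.data[0].copy()
true_label = model.predict([image])[0]

# Nudge each pixel slightly in the direction that favours some other class.
wrong_label = (true_label + 1) % 10
direction = model.coef_[wrong_label] - model.coef_[true_label]

for step in range(50):
    image += 0.1 * np.sign(direction)  # a barely visible per-pixel change
    if model.predict([image])[0] != true_label:
        print(f"prediction flipped after {step + 1} small nudges")
        break
```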

© University of Twente
This article is from the free online course Philosophy of Technology and Design: Shaping the Relations Between Humans and Technologies.
