Theme briefing

An introduction to artificial intelligence

Credit: Bert van Dijk/Getty Images.

What is AI?

Artificial intelligence (AI) refers to software-based systems that use data inputs to make decisions on their own. AI has been a dream of humanity for centuries, but it was not until the invention of computers that we came closer to realising it. The Greek poem Argonautica, written by Apollonius Rhodius in the third century BC, describes a giant made of bronze called Talos that very much fits the description of a robot with AI.

The meaning of AI has changed over the years. GlobalData’s definition purposely leaves out any mention of whether software-based systems actually ‘think’, as this has been the subject of heated debate for decades. British mathematician Alan Turing argued back in 1950 that, rather than asking whether machines can think, we should focus on whether they can show intelligent behaviour.

Weak and strong AI

Depending on their ambition or scope, we define two main branches of AI: weak and strong.

Weak AI

Also known as ‘artificial narrow intelligence’ (ANI), weak AI is a less ambitious approach to AI that focuses on performing a specific task, such as answering questions based on user input, recognising faces, or playing chess. Most importantly, it relies on human intervention to define the parameters of its learning algorithms and provide the relevant training data.

Significantly more progress has been achieved in weak AI, with well-known examples including facial recognition algorithms, natural language models like OpenAI’s GPT-n, virtual assistants like Siri or Alexa, Google subsidiary DeepMind’s chess-playing program AlphaZero, and to a certain extent, driverless cars.

The approach to achieving weak AI has typically revolved around artificial neural networks (ANNs), systems inspired by the biological neural networks that make up animal brains and one of the main tools used in machine learning. ANNs ‘learn’ to identify or categorise input data by seeing many examples. An artificial neural network has anywhere from dozens to millions of artificial neurons, called units, arranged in a series of layers.

The input layer receives information from the outside world; this is the data the network aims to process or learn about. From the input layer, the data passes through one or more hidden layers, which transform it into something the output layer can use. There are many neural network models suited to different use cases, with varying computational demands.

The most popular types include simple feed-forward ANNs, convolutional neural networks (CNNs), and recurrent neural networks (RNNs). Feed-forward ANNs are the simplest and easiest to use, although less powerful than CNNs, which are highly suited to image recognition problems, or RNNs, typically used for sequence tasks such as text-to-speech conversion. This technology is also often called ‘deep learning’, a subset of machine learning.
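As a concrete illustration of these ideas (not a reference to any of the systems named above), the following is a minimal sketch of a feed-forward network with a single hidden layer, trained on the toy XOR problem in plain Python with NumPy. The layer sizes, learning rate, and number of training steps are illustrative assumptions.

    import numpy as np

    # Toy training data: the XOR problem (four inputs and their expected outputs).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)

    # One hidden layer with four units; weights start as small random values.
    W1 = rng.normal(scale=0.5, size=(2, 4))
    b1 = np.zeros((1, 4))
    W2 = rng.normal(scale=0.5, size=(4, 1))
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    learning_rate = 1.0
    for step in range(5000):
        # Forward pass: input layer -> hidden layer -> output layer.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Backward pass: measure the error and nudge every weight to reduce it.
        error = output - y
        grad_out = error * output * (1 - output)
        grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 -= learning_rate * hidden.T @ grad_out
        b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
        W1 -= learning_rate * X.T @ grad_hidden
        b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

    # After training, the outputs should approach [0, 1, 1, 0]: the network has
    # 'learned' the pattern purely from examples, as described above.
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

In practice, machine learning frameworks handle these details at far greater scale, but the principle is the same: adjust the weights, example by example, until the network’s outputs match the training data.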

Strong AI

Also known as ‘artificial general intelligence’ (AGI) or ‘general AI’, strong AI is a theoretical form of AI whereby a machine would possess intelligence equal to that of humans. As such, it would be sentient and have a self-aware consciousness able to solve problems, learn, and plan for the future. This is the most ambitious form of AI, its holy grail, but it remains purely theoretical.

The approach to achieving strong AI has typically centred on symbolic AI, whereby a machine forms an internal symbolic representation of the world, both physical and abstract, and can therefore apply rules or reasoning to learn further and make decisions.

While research continues in this field, it has had limited success in solving real-world problems, as the internal symbolic representations of the world quickly become unmanageable at scale.
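To illustrate the symbolic approach in its simplest form, the toy sketch below (invented for illustration, not taken from any specific research system) stores facts about the world as symbols and repeatedly applies rules to derive new knowledge:

    # A toy symbolic AI: explicit facts plus rules, applied repeatedly until no
    # new knowledge can be derived (forward chaining). All names are illustrative.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def apply_rules(known):
        derived = set(known)
        # Rule 1: every parent is an ancestor.
        for rel, x, y in known:
            if rel == "parent":
                derived.add(("ancestor", x, y))
        # Rule 2: an ancestor of someone's ancestor is also an ancestor.
        for rel1, x, y in list(derived):
            for rel2, y2, z in list(derived):
                if rel1 == "ancestor" and rel2 == "ancestor" and y == y2:
                    derived.add(("ancestor", x, z))
        return derived

    # Keep applying the rules until the set of facts stops growing.
    while True:
        new_facts = apply_rules(facts)
        if new_facts == facts:
            break
        facts = new_facts

    print(("ancestor", "alice", "carol") in facts)  # True: derived by reasoning

Even in this tiny example the number of derived facts grows quickly as entities and rules are added, which hints at why purely symbolic representations of the real world become unmanageable at scale.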

Within strong AI, there is a theoretical level above AGI, which researchers call artificial superintelligence (ASI). This is where a machine possesses intelligence far surpassing that of the brightest and most gifted human minds. Some researchers believe that ASI would likely follow shortly after the development of AGI, not least because an AGI would be capable of iteratively creating better AI algorithms until they surpassed human intelligence.

Interestingly, there have been claims from weak AI practitioners that, with enough scale, their machine learning models can achieve AGI, completely doing away with symbolic AI, yet the debate is still open.

In May 2022, Google’s subsidiary, DeepMind, published a paper describing a generalist agent called Gato. It is capable of performing various tasks with the same underlying trained model. The current Gato version can play Atari, caption images, chat, or stack blocks with a real robot arm, among other activities.

Arguably, there is a third branch of AI: neuro-symbolic AI. This combines neural networks and rule-based AI. While promising and conceptually sensible, as it seems closer to how our biological brains operate, it is still in its early stages.
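A rough, hypothetical sketch of the neuro-symbolic idea is shown below: a neural model (a placeholder here) handles perception, while hand-written symbolic rules reason over its output. The classify_image function and the rule table are assumptions made purely for illustration.

    # Hypothetical neuro-symbolic pipeline: neural perception feeding symbolic rules.

    def classify_image(image) -> str:
        # Placeholder for a trained neural network (for example, a CNN) that
        # would return a label such as "stop_sign" or "traffic_light_green".
        raise NotImplementedError("plug in a real perception model here")

    # Symbolic layer: explicit, human-readable rules over the neural output.
    RULES = {
        "stop_sign": "brake",
        "traffic_light_red": "brake",
        "traffic_light_green": "proceed",
    }

    def decide(image) -> str:
        label = classify_image(image)          # neural step: perception
        return RULES.get(label, "slow_down")   # symbolic step: rule-based decision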

The AI roadmap

Despite the recent progress in the use of AI in real-world situations, such as facial recognition, virtual assistants, and (to a certain extent) autonomous vehicles (AVs), we are still in the early stages of the AI roadmap.

To understand the different types of AI, it is worth considering the information a system holds and relies upon to make its decisions. This, in turn, defines its range of capabilities and, ultimately, its scope. There is a significant difference between a system with no knowledge of the past and no understanding of the world, and one that remembers past events and holds an internal representation of the world, and even of other intelligent agents and their goals.

As a result, AI can be classified into four types based on memory and knowledge. 

  • Reactive AI: This type of intelligence acts predictably, based only on the input it receives. It does not rely on any internal concept of the world and can neither form memories nor use past experiences to inform decisions. Examples include email spam filters, video streaming recommendation engines, and IBM’s Deep Blue, the chess-playing supercomputer that beat world champion Garry Kasparov in 1997.

  • Limited memory AI: This type of intelligence can use past experiences, incorporated into its internal representation of the world, to inform future decisions. However, such past experiences are not stored for the long term. Good examples are certain functions of autonomous vehicles, which constantly observe other cars’ speed and direction and add them to their internal, preprogrammed representations of the world. A short code sketch contrasting reactive and limited memory behaviour appears after this list.

    These representations also include lane markings, traffic lights, and other essential elements, like curves in the road. With this combination of static information and recent memories, the autonomous vehicle can decide whether to accelerate or slow down, change lanes, or turn.

  • Theory of mind AI: This type of intelligence, which has not yet been successfully implemented, involves a more advanced internal representation of the world, including other agents or entities that are intelligent themselves and have their own goals that can affect their behaviour. Following the earlier autonomous vehicle example, to anticipate another driver’s intention to change lanes, the vehicle must hold an internal representation of the world that includes that driver’s state of mind.

    As complex as this may sound, experienced human drivers do precisely that: they can anticipate, say, a pedestrian who may be about to step suddenly into the street, or a distracted fellow driver who might abruptly change lanes without signalling.

    In psychology, a system that imputes mental states to other agents by making inferences about them is regarded as a theory, and thus called a theory of mind, because such states are not directly observable yet can be used to predict the behaviour of others. Only an AI system with a theory of mind could handle mental states such as purpose or intention, knowledge, belief, thinking, doubt, guessing, pretending, or liking, to name a few.

  • Self-aware AI: This type of intelligence effectively extends theory of mind: its representation of the world and of other agents also includes itself. Self-aware, conscious systems know about their own internal states and can predict the feelings of others by inferring internal states similar to their own. If it saw a person crying, a self-aware AI system could infer that the person is sad, because that is its own internal state when it cries.

To understand the overall AI landscape, it is useful to place the various types of AI along a scope continuum, as certain AI systems may fall between the theoretical categories defined above. For instance, it is unclear whether truly autonomous vehicles require a theory of mind to anticipate other drivers’ behaviour. Similarly, it may or may not be a requirement for fully autonomous mobile robots operating in a factory alongside humans, which need to ensure they do not hurt the people around them while handling heavy loads or machinery.

We have so far only been able to develop AI systems falling into the reactive machine or limited memory categories, and achieving theory of mind or self-awareness might be decades away.
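A minimal code sketch can make the difference between these first two categories concrete. Everything below is an illustrative assumption (the keywords, thresholds, and class names are invented): the reactive filter looks only at its current input, while the limited memory controller also keeps a short history of recent observations.

    from collections import deque

    # Reactive: the decision depends only on the current input; there is no memory.
    def reactive_spam_filter(message: str) -> bool:
        suspicious = ("free money", "click here", "winner")
        return any(phrase in message.lower() for phrase in suspicious)

    # Limited memory: recent observations inform the decision but are not stored
    # for the long term (here, only the last five speed readings are kept).
    class FollowingDistanceController:
        def __init__(self):
            self.recent_speeds = deque(maxlen=5)  # short-lived memory

        def observe(self, lead_car_speed_kmh: float) -> str:
            self.recent_speeds.append(lead_car_speed_kmh)
            if len(self.recent_speeds) < 2:
                return "hold"
            # If the car ahead has been slowing over the recent readings, slow down.
            if self.recent_speeds[-1] < self.recent_speeds[0]:
                return "slow_down"
            return "hold"

    print(reactive_spam_filter("You are a WINNER, click here!"))  # True

    controller = FollowingDistanceController()
    for speed in (100, 95, 90):
        action = controller.observe(speed)
    print(action)  # "slow_down": a decision informed by the last few observations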

Advanced AI capabilities

A fundamental question when building AI systems is what capabilities or behaviours make a system intelligent. The first step is to expand on our earlier definition and describe AI as any machine-based system that perceives its environment, pursues goals, adapts to feedback or change, provides information or takes action, and, at its most ambitious, even possesses self-awareness and sentience.

Humans and animals in the higher branches of the evolutionary tree can clearly interact with their environment, adapt to its changes, and take action to achieve goals such as individual and species survival. Whether animals have self-awareness or ethics is an open debate, but they certainly have sentience.

This is relevant as even a weak AI approach can build systems that behave intelligently but are far from AGI.

Therefore, we can identify five categories of advanced AI capabilities: 

  • Human-AI interaction: This includes computer vision and conversational capabilities, as well as less developed artificial senses such as smell, taste, and touch. The last of these, also known as haptics, involves creating an experience of touch by applying forces, vibrations, or motions to the user, and it is crucial in robotics. This category also includes interactions between humans and AI that do not typically exist in nature, such as brain-machine interfaces.

  • Decision-making: This includes capabilities that the average person will typically associate with intelligence, such as identifying, recognising, classifying, analysing, synthesising, forecasting, problem-solving, and planning. An example could be a system that recognises a patient’s symptoms and vital signs, identifies the patient’s medical condition, and comes up with a treatment plan. Both symbolic AI and neural networks can be used to perform these tasks, and it is likely that, in the long run, a combination of the two will be required, if not new paradigms altogether. A simplified code sketch of this kind of rule-based diagnosis appears after this list.

  • Motion: This includes the ability to move and interact with the physical world, which, although underappreciated in the early years of AI research, has turned out to be a complex capability that requires significant intelligence. For instance, an autonomous robot operating outdoors must be able to understand its position in three-dimensional space and move toward its destination. As it moves, it needs to analyse the feedback it receives from its sensors and adjust accordingly, as living things do. It has taken many years to build robots capable of walking on uneven terrain that shifts as they step on it, or of grasping fragile objects such as an egg or a human hand, something a five-year-old child can do without much effort. This, too, is intelligence, even if the average person might not recognise it as such.

  • Creation: This includes creation capabilities across multiple areas, including audio, video, and text. It is also known as generative AI, and one of the most renowned examples today is OpenAI’s ChatGPT, which can write original prose and chat with human fluency. The relationship between intelligence and creativity has been the subject of empirical research for decades. An intelligent system should be able to display creation capabilities in multiple areas, for instance writing original text or music, drawing and painting, designing new structures or products, or even designing new AI systems. Allowing AI to make AI may sound like science fiction, yet researchers at OpenAI, Uber AI Labs, and various universities have worked on it for several years. They see it as a step on a road that may one day lead to AGI or even ASI.

  • Sentience: This includes the emergence of self-awareness or consciousness, the ultimate expression of intelligence in an artificial system. It is, at the moment, far from achievable. Many neuroscientists believe that consciousness is generated by the interoperation of various parts of the brain, known as the neural correlates of consciousness (NCC): the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept. There is no consensus on this view, however. Self-awareness, the sense of belonging to a society, and an ethical view of right and wrong or the greater good are just a few examples of matters that would be dealt with at this level of AI capability. While such debates may appear purely philosophical at this stage, they will become more important as intelligent autonomous robots take on more roles in our society, some of which might involve moral decisions.

    AI researchers can become entranced with the development of AI with sentience and even a conscience. However, there is a much better business case for robots that simply have effective interaction and motion capabilities and modest decision-making skills.
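Returning to the decision-making example above, the fragment below is a deliberately simplified, hypothetical sketch of rule-based diagnosis. The symptoms, conditions, and treatment plans are invented for illustration and are not medical guidance; a real system would combine far richer knowledge with learned models.

    # Hypothetical decision-making sketch: map observed symptoms to the most
    # plausible condition and a treatment plan using hand-written rules.
    KNOWLEDGE_BASE = {
        "common_cold": {
            "symptoms": {"runny_nose", "sneezing", "sore_throat"},
            "plan": "rest and fluids",
        },
        "influenza": {
            "symptoms": {"fever", "muscle_aches", "fatigue", "sore_throat"},
            "plan": "rest, fluids, and antiviral review",
        },
    }

    def diagnose(observed):
        # Score each condition by how many of its known symptoms were observed.
        name, entry = max(
            KNOWLEDGE_BASE.items(),
            key=lambda item: len(item[1]["symptoms"] & observed),
        )
        return name, entry["plan"]

    condition, plan = diagnose({"fever", "fatigue", "sore_throat"})
    print(condition, "->", plan)  # influenza -> rest, fluids, and antiviral review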

GlobalData, the leading provider of industry intelligence, provided the underlying data, research, and analysis used to produce this article.  

GlobalData’s Thematic Intelligence uses proprietary data, research, and analysis to provide a forward-looking perspective on the key themes that will shape the future of the world’s largest industries and the organisations within them.