Despite the progress in the development of artificial intelligence, even the most modern AI models cannot yet be called artificial general intelligence (AGI). They can perform individual tasks at the level of a professional human, but they do not possess general intelligence. However, future advances in neuroscience may lead to the advent of general artificial intelligence in our lives, and some researchers are confident that this will happen in the coming years. RBC Trends explains what AGI is, how it differs from conventional artificial intelligence, and when to expect its appearance.
What is AGI
Artificial general intelligence (AGI) is a branch of theoretical research in artificial intelligence that aims to develop AI with human-level cognitive functions, including the ability to self-learn. Researchers assume that such artificial intelligence will be able to autonomously solve many complex problems in various fields of knowledge.
In other words, AGI is also referred to as strong AI. This contrasts it with weak, or narrow, AI, which can only perform specific, specialized tasks within a predefined set of parameters. Examples of such systems include IBM’s Watson and Google DeepMind’s AlphaFold, which outperform humans at certain scientific problems but are narrowly specialized.
Opinions differ on how such AI might be implemented, as it remains a theoretical concept. For example, AI researcher Peter Voss defines general intelligence as having “the ability to learn anything in principle.” According to this criterion, an AGI’s learning ability would need to be “autonomous, goal-directed, and highly adaptive.”
Still, most AI researchers agree that AGI refers to “AI systems that have a reasonable degree of self-understanding and autonomous self-control and are able to solve a variety of complex problems in different contexts and learn to solve new problems that they did not know about when they were created.”
What AGI Should Look Like
A true AGI would need to perform tasks at a human level, which requires abstract thinking, extensive knowledge, common sense, and the ability to identify cause-and-effect relationships.
McKinsey identified eight traits that AGI should have:
- Visual perception. Current AI systems do not yet match human sensory perception. For example, systems trained with deep learning are still poor at distinguishing colors. Some self-driving cars have been fooled by black tape placed on a red stop sign, while in other cases they have confused a real sign with an image on a T-shirt;
- Audio perception. People use sound to judge the spatial characteristics of their environment, filter out background noise, and determine where a speaker is located. AI systems have a more limited ability to capture and process sound and cannot yet interpret it as well as humans;
- Fine motor skills. AI robots have not yet achieved the level of fine motor control that would let them braid hair or perform surgery on their own;
- Natural language processing. To compete with human-level cognition, AGI must consciously analyze sources of information and understand hidden meanings. Recent AI tools have demonstrated improved natural language processing, but they still lack true contextual understanding. For example, researchers have found that large language models such as GPT-4o and Claude fail to correctly count the number of r’s in the word strawberry, answering that the letter appears twice rather than three times;
- The ability to solve problems on its own. An AGI system must be able to recognize that a problem exists, such as noticing that a light bulb has burned out so it can replace it. To do this, general AI will need to learn to estimate the probability that a particular problem will occur. Systems must also be able to learn from experience and adapt to new situations without explicit human intervention;
- Navigation. GPS technology, combined with capabilities like simultaneous localization and mapping (SLAM, used in self-driving cars and robot vacuums), has made significant progress. But building robotic systems that can navigate autonomously, without human intervention, will still take years of work;
- Creativity. For an AI to become AGI, it would need to rewrite its own code, which requires understanding a huge amount of code and finding new ways to improve it;
- Social and emotional engagement. To interact successfully with people, AI must be able to interpret facial expressions and changes in tone that reveal hidden emotions. Since even human experts find this difficult to do reliably, it is too early to speak of real progress for AI here.
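The letter-counting failure mentioned above is striking because the task is trivial for ordinary code: a program that sees individual characters can count them exactly, whereas a language model typically sees the word as opaque subword tokens. A minimal sketch:

```python
# A program that operates on characters counts letters exactly;
# LLMs instead process words as subword tokens, which hides the letters.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3, not the 2 that some LLMs report
```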
What are the approaches to developing AGI?
Because AGI remains a loosely defined concept, there are various theoretical approaches to how it might be created. Some rely on methods such as neural networks and deep learning, while others propose large-scale simulations of the human brain using computational neuroscience. AI researcher and “father of AGI” Ben Goertzel has highlighted several approaches that have emerged in the field of general AI research:
- Symbolic – the belief that symbolic thinking is “the basis of human general intelligence”;
- Emergentist – the idea that the human brain is essentially a collection of simple units, or neurons, that self-organize in complex ways in response to the body’s experiences. Proponents believe general AI could emerge by recreating such a structure;
- Hybrid – views the brain as a hybrid system in which many different parts work together to create something greater than the sum of its parts;
- Universalist – based on the “mathematical nature of general intelligence.” Once AGI can be worked out in the theoretical realm, the principles used to achieve it could be applied in reality.
What technologies underlie the development of AGI
There are currently several key disciplines and technologies used to create AGI systems.
Deep learning
Deep learning is the training of neural networks with multiple hidden layers to extract and understand complex relationships in raw data. AI experts use deep learning to build systems that can understand text, audio, images, video, and other types of information.
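As a toy illustration (not any particular production system), “multiple hidden layers” simply means several learned transformations stacked on top of each other, each a linear map followed by a nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers
    return np.maximum(0.0, x)

# Randomly initialized weights for a two-hidden-layer network
# (in real training these would be learned from data).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input dim 4 -> hidden 8
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # hidden 8 -> hidden 8
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden 8 -> output 1

def forward(x):
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3      # linear output layer

print(forward(np.ones(4)).shape)  # one scalar prediction per input
```

Stacking layers this way is what lets deep networks represent relationships too complex for a single linear model.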
Generative Artificial Intelligence
This is a type of deep learning where an AI system can create unique and realistic content based on the knowledge it has gained. Generative AI models are trained using huge data sets, allowing them to respond to human queries in the form of text, audio, or visuals. AI companies are already struggling with the lack of information to train such models, so they plan to solve this problem by using synthetic data.
Natural Language Processing
Natural language processing (NLP) is the branch of AI that enables computer systems to understand human language and generate text in it. NLP systems use computational linguistics and machine learning to transform language data into simple representations (tokens) and then learn to recognize contextual relationships between them.
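A crude sketch of the first step, tokenization: text is mapped to integer ids, the “simple representations” over which a model then learns contextual relationships. (Real systems use learned subword vocabularies, not this word-level toy.)

```python
# Toy word-level tokenizer: assign each new word the next unused id.
text = "the cat sat on the mat"
vocab = {}
tokens = []
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)
    tokens.append(vocab[word])

print(tokens)  # [0, 1, 2, 3, 0, 4] -- "the" maps to the same id twice
```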
Machine vision
Machine vision is a technology that enables systems to extract, analyze, and understand spatial information from visual data. Deep learning allows such systems to automatically recognize, classify, and track objects, among other image-processing tasks.
Robotics
Robotics is an engineering discipline concerned with building mechanical systems that perform physical maneuvers automatically. In the future, advances in the field may give robots sensory perception and teach them complex physical manipulations that require fine motor skills.
Differences between AGI and AI
Researchers highlight several key differences between general and conventional artificial intelligence.
- How it works: AGI would offer human-like cognitive function, while conventional AI merely imitates human learning and reasoning.
- Learning ability: AGI would learn like a human, whereas conventional AI is limited by the rules set by its program.
- Reasoning process: AGI would reason freely and solve problems without outside intervention, while conventional AI applies reasoning and other functions only as a result of such intervention.
What advances could accelerate AGI development?
Advances in algorithms, computing, and data have accelerated work on general AI.
Development of algorithms and new approaches in robotics
Researchers may need entirely new approaches to algorithms and robots to achieve AGI. One avenue is the concept of embodied cognition: the idea that robots, like children, will need to learn very quickly from their environment through multiple senses. Eventually, they would be able to perceive the physical world the way humans do.
Another technology already being used in the latest AI-based robotic systems is large behavioral models (LBMs). They allow robots to imitate human actions and movements and are created by training AI on large datasets of actions and movements. For example, Nvidia has introduced Project GR00T, an initiative to develop general-purpose foundation models, tools, and technologies for humanoid robots.
Advances in Computing
Graphics processing units (GPUs) have enabled major advances in AI in recent years. GPUs are designed to handle many operations on visual data simultaneously, including image rendering, video, and graphical computation. Their efficiency at processing massive amounts of data in parallel makes them well suited to training complex neural networks, and their high memory bandwidth means faster data transfer. Comparable leaps in computing infrastructure will likely be needed before AGI can be achieved, and quantum computing may provide them. For example, Google has unveiled a quantum chip called Willow that can solve in five minutes a computational problem that would take the world’s fastest supercomputer ten septillion years.
Growing volume and new data sources
Some experts believe that 5G mobile infrastructure could lead to a significant increase in data volumes, because the technology will enable the growth of smart device connectivity and the popularity of the Internet of Things. In addition, new sources of training data may appear in the field of robotics: the practical deployment of humanoid robots will let companies gather more new data. For example, advanced self-driving cars already collect such data during road testing, and this information helps train future autonomous-driving models.
Forecasts for the Future of AGI
In 2024, analysts from the American research and consulting company Gartner presented their traditional Hype Cycle for Emerging Technologies report, which also featured AGI. Gartner noted that the technology is at the innovation trigger stage (that is, it is in the launch phase and just “breaking into the light”), and that its implementation will take at least 10 years. According to Metaculus, a website that aggregates expert forecasts on various topics, experts expect AGI to be created around 2032.
Some industry insiders believe that AGI will emerge in the next few years or decades. For example, xAI CEO Elon Musk and OpenAI founder Sam Altman expect general AI to emerge as early as 2025-2026. Researchers from Microsoft and OpenAI have previously stated that GPT-4 “can reasonably be viewed as an early (but still incomplete) version of an AGI system” due to its ability to “solve new and complex problems that span math, coding, vision, medicine, law, psychology, and more, without the need for any special prompting.” However, many scientists dispute this claim. Columbia University professor Vishal Misra, for example, managed to trip up GPT-4: he asked it to find the words in a list in which “k” was not the third letter. When ChatGPT answered correctly, the researcher entered a new prompt, and the bot apologized and gave a new, but incorrect, answer. “It can solve some equations, draw diagrams, and analyze things pretty well, but sometimes it fails at simple things,” Misra said after the tests. Apple’s AI researchers went further, publishing a paper disputing the ability of large language models to reason at all: its main idea is that AI cannot think like a human, but only imitates thinking.
Other researchers are more reserved in their assessments. Google engineer Ray Kurzweil is confident that AI will reach “human-level intelligence” in 2029 and surpass it by 2045. His position is shared by programming pioneer John Carmack, who puts the probability of strong general-purpose AI at 60% by 2030 and 95% by 2050. At the same time, Carmack is confident that AGI will not lead to global changes in society.
Turing Award winner Geoffrey Hinton believes that AGI will appear in less than 20 years but will not pose an existential threat. Fellow Turing Award winner Yoshua Bengio argues that it will take decades to develop. Meanwhile, Jürgen Schmidhuber, co-founder and chief scientist of NNAISENSE and director of the Swiss AI lab IDSIA, believes that AGI will likely appear around 2050. Rodney Brooks, an MIT roboticist and co-founder of iRobot, believes AGI will not appear until 2300.
Some scientists are convinced that AGI will never be realized. As Ben Goertzel notes, it is difficult to objectively measure progress toward AGI because “there are many different paths” and there is no “complete and systematic theory of AGI.” According to him, the current understanding of general AI is rather a “patchwork of overlapping concepts, frameworks, and hypotheses” that are “often synergistic and sometimes mutually contradictory.” Sarah Hooker of the Cohere Research Lab agrees: “It’s really a philosophical question.” In her opinion, it is highly unlikely that the advent of AGI will look like a single event, with humanity concluding that it has achieved general AI. Hooker believes that even if there is a consensus among researchers on AGI, there will be no clear winner in the race.
Indeed, the milestones used to define the arrival of AGI keep changing. It was long thought that an AI would be indistinguishable from a human once it passed the Turing test, proposed by mathematician Alan Turing back in 1950: a human participant communicates with a computer through a text interface, and if the computer’s answers are indistinguishable from a human’s across a wide range of questions, Turing reasoned, the machine should be considered as intelligent as a human. In 2024, cognitive scientists at the University of California, San Diego announced that GPT-4 had passed the test, fooling human subjects 54% of the time. Yet the arrival of AGI was not declared; instead, scientists said the test itself should be rethought, concluding that modern AI is merely capable of deceiving a person into believing they are talking to another human.
The most fundamental problem that scientists and developers will have to solve is assessing the intelligence of AI, since even neuroscience has not established how conscious experience emerges from physical processes in the brain. Just as no one knows how or why interconnected neurons give rise to intelligence, it is unclear how interconnected circuits or nodes of a computer program could give rise to self-awareness.
Concerns about AGI
The development of AGI may also be slowed by scientists’ fears about its emergence. Back in 2014, English theoretical physicist, cosmologist, and writer Stephen Hawking said that “the development of full-fledged artificial intelligence could mean the end of the human race.” According to Hawking, this is because such AI “would adapt ever faster, and humans, limited by slow biological evolution, would not be able to compete with it and would be forced out of this process.” Some researchers are particularly concerned about artificial superintelligence: the accidental creation of an AGI that is smarter than humans and could potentially harm humanity. One proponent of this view is OpenAI co-founder Ilya Sutskever, who left the company in 2024 and now heads Safe Superintelligence Inc. (SSI), an organization dedicated to creating safe superintelligence.

One of the pioneers of artificial intelligence, Geoffrey Hinton, takes an even more radical stance. In 2023, he stepped down as a vice president at Google in order to warn humanity about the dangers of the technology. Hinton believes that, thanks to the development of large language models, AI could become smarter than humans within five years. He says today’s leading AI models already show genuine intelligence and reasoning abilities, will soon accumulate their own experience as humans do, and will eventually have consciousness. Hinton warns against rushing to train more advanced models, pointing to a sharp rise in disinformation, job losses, and even the emergence of autonomous weapons that could destroy humanity.