The performance of generative artificial intelligence (AI), which the general public has been discovering since the release of ChatGPT, is impressive, and not a week goes by without new tools built on this technology appearing. Trained on vast amounts of data and powered by enormously capable mathematical models, generative AI can interpret a complex query and statistically predict the most plausible response, whether that means solving a mathematical or scientific problem, writing code, or producing text, images, audio or video. While AI experts predict it could surpass us on several fronts in the coming years or decades, it currently lacks some of the skills needed to compete with us in every cognitive domain.
Here’s what AI doesn’t yet know how to do, as explained by three AI experts in the Rencontres de Pétrarque* broadcast on France Culture: Yoshua Bengio, founder and scientific director of Mila; Yann LeCun, researcher and Chief AI Scientist at Meta (both are pioneers of deep learning and, along with Geoffrey Hinton, recipients of the 2018 Turing Award); and Patrick Pérez, director of AI at Valeo and scientific director of valeo.ai.
*Episode 1/5: “Qu’est-ce que l’intelligence” (“What is intelligence?”), from the series “Les révolutions de l’intelligence” (“The revolutions of intelligence”), Les Rencontres de Pétrarque, France Culture, July 10, 2023.
Artificial intelligence fails to…
Reason and plan
The ability to reason and plan corresponds to “System 2” of our cognition, while “System 1” represents faster, “intuitive” thinking, which AI reproduces quite well. Although progress has been made, AI still has no equivalent of our System 2. Similarly, as Patrick Pérez points out, AI-powered machines “have great difficulty doubting (the problem of estimating uncertainty) and explaining their decisions. Moreover, they have little resistance to unforeseen disturbances (the problem of robustness) and find it hard to improvise in new conditions they did not encounter during learning.”
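To make “doubting” a little more concrete, one standard way to estimate uncertainty is to train an ensemble of models and read their disagreement as a confidence signal. The minimal Python sketch below is purely illustrative, an assumption on our part rather than a description of the systems Pérez has in mind; the toy polynomial regressors simply stand in for real models.

```python
import numpy as np

# Illustrative sketch: ensemble disagreement as a rough uncertainty estimate.
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=30)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=30)

# Train several small models on random resamples of the data (bootstrapping).
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    ensemble.append(np.polyfit(x_train[idx], y_train[idx], 4))

x_test = np.array([0.0, 0.9, 2.0])  # 2.0 lies outside the training range
preds = np.array([np.polyval(coeffs, x_test) for coeffs in ensemble])

mean, spread = preds.mean(axis=0), preds.std(axis=0)
for x, m, s in zip(x_test, mean, spread):
    # A large spread signals that the ensemble "doubts" its own prediction.
    print(f"x={x:+.1f}  prediction={m:+.2f}  uncertainty={s:.2f}")
```

Far from the training data (at x = 2.0), the predictions fan out and the spread flags the answer as unreliable, which is exactly the kind of self-doubt that today’s large models struggle to express.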
Inhibit bias
In addition to systems 1 and 2, teacher-researcher and psychologist Olivier Houdé has highlighted a third cognitive system, known as “inhibition,” which, according to its discoverer, is nothing less than the “key to intelligence” (see The 3 Speeds of Thought). Artificial intelligence (a term Houdé considers inaccurate) lacks such a system: “Computers still lack a prefrontal cortex, that is, one that enables self-control, inhibitory control; and this subject is all the more topical and serious because large databases, Big Data, amplify cognitive biases,” he explains in an interview marking the publication of his book Comment raisonne notre cerveau (How Our Brain Reasons, 2023).
Have motor control
“We’re a long way from having systems that, at the robotic level, for example, are as good as what most animals can do,” says Yoshua Bengio, who adds, however, that AI may not need motor control to pose a danger.
Understand how the world works
As Yann LeCun explains, “If we want to plan a sequence of actions to reach a goal, we need to have a model of the world that allows us to imagine the result or effect of our actions. For example, if I push the glass on the table at its base, it will probably move, but if I push it at the top, it will probably tip. We all have a model of the intuitive physics of the world, which allows us to plan what sequence of actions to carry out to achieve a particular result.” LeCun, who works on developing world models of this kind, notes that they will need to be paired with goals: the machine must have goals to satisfy in order to plan its actions and to predict whether those goals can be achieved.
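To give a flavour of what “planning with a world model” means in code, here is a minimal Python sketch, our own illustrative assumption rather than LeCun’s actual architecture: a hand-coded one-dimensional dynamics function stands in for a learned world model, the goal is a target state, and the planner simply imagines the outcome of many candidate action sequences and keeps the best one (a technique known as random shooting).

```python
import numpy as np

def world_model(state, action):
    """Predict the next state from the current state and an action.
    (A trivial stand-in for a learned dynamics model.)"""
    return state + action

def goal_cost(state, goal):
    """How far the imagined final state lands from the goal."""
    return abs(state - goal)

def plan(initial_state, goal, horizon=5, n_candidates=500):
    """Pick the action sequence whose imagined outcome best satisfies the goal."""
    rng = np.random.default_rng(0)
    best_cost, best_actions = float("inf"), None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)  # candidate sequence
        state = initial_state
        for a in actions:           # imagine the effect of each action
            state = world_model(state, a)
        cost = goal_cost(state, goal)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

actions, cost = plan(initial_state=0.0, goal=3.0)
print(f"planned actions: {np.round(actions, 2)}, predicted distance to goal: {cost:.3f}")
```

The point of the sketch is the loop structure: the machine never acts in the world while planning; it only consults its internal model of the world, compares imagined outcomes against its goal, and commits to the sequence it predicts will work.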
Have emotions
According to Yann LeCun, “If we have [AI] machines that can plan their actions, imagine the results of the sequences of their actions and have goals to satisfy, they will inevitably have emotions, and this is probably what will enable us to make them controllable, i.e. compatible with us, humanity.” Researchers have recently investigated ChatGPT’s ability to identify and describe emotions, with a view to its potential use in mental health (Z. Elyoseph et al., 2023). For this study, they used Lane and Schwartz’s (1987) Levels of Emotional Awareness Scale (LEAS), “emotional awareness” being defined in psychology as the ability to conceptualize one’s own emotions and those of others. The study showed that the chatbot can generate responses appropriate to emotional awareness and that its performance can improve significantly over time. In some cases, ChatGPT even outperformed the norms of the general human population to which it was compared.
Related articles:
- The 3 Speeds of Thought
- Artificial Intelligence: Test Your Knowledge!
- 10 categories of generative AI tools
- Deciphering ChatGPT
- Deep learning 101
- Mini glossary of artificial intelligence
- Initiatives for a Responsible and Human-Centered Artificial Intelligence
- AI, make me laugh!
- Will a robot replace your job?
- Human vs. machine battle
- Artificial intelligence: Montreal, the star of the moment
- Intelligent Adaptive Learning: Everyone’s Training!
Author:
Catherine Meilleur
Communication Strategist and Senior Editor @KnowledgeOne. Questioner of questions. Hyperflexible stubborn. Contemplative yogi
Catherine Meilleur has over 15 years of experience in research and writing. Having worked as a journalist and educational designer, she is interested in everything related to learning: from educational psychology to neuroscience, and the latest innovations that can serve learners, such as virtual and augmented reality. She is also passionate about issues related to the future of education at a time when a real revolution is taking place, propelled by digital technology and artificial intelligence.