Yann LeCun: How to develop an autonomous AI

Some enthusiasts will say that artificial intelligence (AI) is about to become sentient. But not Yann LeCun, who shared the 2018 Turing Award for his contributions to deep learning.

In fact, LeCun thinks AI is not on track to think and learn like humans. He points out that while a teenager can learn to drive in about 20 hours, a decent self-driving car today requires millions or billions of labeled training samples, or reinforcement learning trials in simulated environments, and still falls short of a human's ability to drive reliably. (Also read: Self-Driving Vehicle Hacking: Is This Why We Don't Have Self-Driving Cars Yet?)

Based on this realization, LeCun has sketched out a roadmap for creating "autonomous artificial intelligence." The roadmap draws on several disciplines, including deep learning, robotics, cognitive science, and neuroscience, to outline a modular, configurable architecture. And while implementing this roadmap will require further research, it is worth examining the different components needed to replicate animal and human intelligence.

This article delves into the methodology behind LeCun’s Autonomous Artificial Intelligence Roadmap.

How it works

The world model

The core of LeCun's framework is a "world model" that predicts the states of the world. LeCun argues that animals and humans each have their own world model somewhere in their prefrontal cortex.

Although attempts have already been made to build world models for AI, those models are task-specific and cannot be adapted to different tasks. LeCun, by contrast, argues against a collection of task-specific world models and proposes a single, dynamically configurable world model. According to LeCun, a single shared world model allows knowledge to be reused across many tasks, which is what lets humans reason by analogy.

As part of LeCun's autonomous AI roadmap, the world model is accompanied by other modules that help the AI system understand the world and act in it.
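To make the idea concrete, here is a minimal sketch of what a learned world model could look like in code. It is not LeCun's design, just an illustrative toy in PyTorch: the class name, dimensions, and architecture are all invented for the example.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Toy world model: predicts the next latent state of the world
    given the current latent state and a candidate action.
    (Illustrative only; LeCun's proposal is far more elaborate.)"""

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Concatenate state and action, then predict the next state.
        return self.net(torch.cat([state, action], dim=-1))

# Example: roll the model forward over a sequence of hypothetical actions.
model = WorldModel(state_dim=16, action_dim=4)
state = torch.zeros(1, 16)
for action in torch.randn(5, 1, 4):   # five candidate actions
    state = model(state, action)      # imagined next state
```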

The perception model

The "perception" model collects and processes signals from sensors and estimates the current state of the world; in that sense, it plays the role of the human senses. The world model helps the perception model perform two essential tasks (sketched in code after the list):

  1. Filling in information missing from the sensory data (for example, occluded objects).
  2. Predicting the most probable future states of the world (for example, where a moving car will be in five seconds).
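As a hedged illustration of the perception module described above, the toy encoder below maps raw sensor readings to a latent state estimate that a world model could then reason over. Every name and dimension here is an assumption made for the sketch, not part of LeCun's specification.

```python
import torch
import torch.nn as nn

class PerceptionModel(nn.Module):
    """Toy perception module: encodes raw sensor input (e.g. an image
    flattened to a vector) into a latent estimate of the world state."""

    def __init__(self, sensor_dim: int, state_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(sensor_dim, 256),
            nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, sensor_input: torch.Tensor) -> torch.Tensor:
        return self.encoder(sensor_input)

# The estimated state can then be handed to a world model to fill in
# unobserved details or to predict likely future states.
perception = PerceptionModel(sensor_dim=1024, state_dim=16)
state_estimate = perception(torch.randn(1, 1024))
```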

LeCun's autonomous AI architecture also contains other modules that work alongside the world model to support the system's ability to learn. These include:

The cost model

The cost model drives the AI system toward desired goals. It measures the system's level of "discomfort" and consists of two sub-models:

  1. The intrinsic cost: a built-in, non-trainable module that computes immediate discomfort (e.g., damage to the system).
  2. The critic: a trainable module that predicts future values of the intrinsic cost.

The AI system aims to minimize the intrinsic cost over time. According to LeCun, the cost model is where basic behavioral drives and intrinsic motivations reside. It is important that this model be differentiable, because that allows cost gradients to propagate back through the other modules, training them to work together to reduce the intrinsic cost.
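A minimal sketch of this two-part cost, assuming a hard-wired intrinsic cost and a trainable critic, might look like the following. The particular cost function is invented purely for illustration.

```python
import torch
import torch.nn as nn

def intrinsic_cost(state: torch.Tensor) -> torch.Tensor:
    """Hard-wired, non-trainable discomfort signal. Here we simply penalize
    large state magnitudes as a stand-in for 'damage' or other discomfort."""
    return state.pow(2).mean(dim=-1)

class Critic(nn.Module):
    """Trainable module that predicts future intrinsic cost from the
    current latent state."""

    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)

# The total cost to minimize combines immediate and predicted future discomfort.
state = torch.randn(1, 16)
critic = Critic(state_dim=16)
total_cost = intrinsic_cost(state) + critic(state)
```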

The actor model

The “actor” model takes steps to try to minimize the level of discomfort (i.e. the intrinsic cost).

The short-term memory model

The "short-term memory" model stores relevant information about past states of the world and the corresponding intrinsic costs. It plays an important role in helping the world model make accurate predictions.

The configurator model

Finally, LeCun's autonomous AI architecture includes a "configurator" model to provide executive control over the system.

The main purpose of the configurator is to allow the AI to handle a variety of different tasks. To do this, it regulates the other models of the architecture, for example by modulating their parameters.
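One plausible way to picture such modulation in code is a FiLM-style configurator that emits per-task scale and shift vectors applied to another module's features. This is an assumption made for illustration only; LeCun's proposal does not prescribe this particular mechanism.

```python
import torch
import torch.nn as nn

class Configurator(nn.Module):
    """Toy configurator: given a task embedding, emit scale and shift
    vectors that modulate another module's features (FiLM-style)."""

    def __init__(self, task_dim: int, feature_dim: int):
        super().__init__()
        self.to_scale = nn.Linear(task_dim, feature_dim)
        self.to_shift = nn.Linear(task_dim, feature_dim)

    def forward(self, task_embedding: torch.Tensor):
        return self.to_scale(task_embedding), self.to_shift(task_embedding)

# Example: modulate world-model features for a hypothetical 'driving' task.
configurator = Configurator(task_dim=8, feature_dim=16)
task = torch.randn(1, 8)                 # hypothetical task embedding
scale, shift = configurator(task)
features = torch.randn(1, 16)            # features from some other module
modulated = features * scale + shift     # task-conditioned features
```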

Going back to the self-driving car example from earlier: if you want to drive a car, your "perception model" (your five senses) absorbs information from the parts of the car relevant to driving: you look through the windshield, feel the steering wheel and listen to the engine. Meanwhile, your "actor model" plans actions accordingly (you start the engine and change gears) and your "cost model" takes the traffic rules into account.

Interestingly, LeCun's roadmap is inspired by Daniel Kahneman's dual-process theory, popularized in "Thinking, Fast and Slow." Following that theory, the architecture allows an AI system to exhibit two modes of behavior:

  1. Mode 1: fast, reflexive behavior resulting from a direct mapping from perception to action.
  2. Mode 2: slow, deliberate behavior that uses the world model, perception model, cost model, actor model, short-term memory, and configurator for reasoning and planning (a toy contrast between the two modes is sketched below).
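The sketch below contrasts the two modes under heavy simplification: Mode 1 as a direct policy from perceived state to action, and Mode 2 as a small planning loop that imagines action sequences with a world model and scores them with a cost. All components are stand-ins invented for the example, not LeCun's actual design.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 16, 4

# Mode 1: a reactive policy that maps the perceived state straight to an action.
reactive_policy = nn.Linear(state_dim, action_dim)

# Mode 2: deliberate planning. Imagine candidate action sequences with a toy
# world model, score them with a toy cost, and pick the cheapest sequence.
world_model = nn.Linear(state_dim + action_dim, state_dim)   # toy dynamics
cost_fn = lambda s: s.pow(2).mean(dim=-1)                    # toy intrinsic cost

def plan(state: torch.Tensor, horizon: int = 3, n_candidates: int = 32) -> torch.Tensor:
    """Return the first action of the lowest-cost imagined action sequence."""
    candidates = torch.randn(n_candidates, horizon, action_dim)
    costs = torch.zeros(n_candidates)
    for i, actions in enumerate(candidates):
        s = state.clone()
        for a in actions:
            s = world_model(torch.cat([s, a.unsqueeze(0)], dim=-1))
            costs[i] += cost_fn(s).item()
    return candidates[costs.argmin(), 0]

state = torch.randn(1, state_dim)
fast_action = reactive_policy(state)   # Mode 1: immediate, reflexive
slow_action = plan(state)              # Mode 2: slower, model-based planning
```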

How to Implement Yann LeCun’s Autonomous AI Framework

According to LeCun, the key open challenge is how to actually implement and train this conceptual framework.

LeCun favors implementing the architecture with trainable deep learning models optimized by gradient-based methods. He is unconvinced by symbolic systems, which require knowledge to be hand-coded by humans.

Two promising methodologies for implementing this framework are:

1. Self-supervised learning

Since deep learning models need large amounts of human-annotated data to learn under supervised learning, LeCun advocates self-supervised learning (SSL): a learning approach that uses the supervisory signal naturally present in the data itself (i.e., no human annotation). LeCun argues that human children also use self-supervised learning to acquire commonsense knowledge of the world, such as gravity, dimensionality, depth, and social relationships.
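As a bare-bones illustration of the principle (not of any specific method LeCun endorses), the sketch below trains a model to reconstruct masked-out parts of its input, so the supervisory signal comes entirely from the data itself.

```python
import torch
import torch.nn as nn

# Toy self-supervised task: mask part of an input vector and train a model
# to reconstruct the hidden part from the visible part. No human labels needed.
dim = 32
model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    x = torch.randn(8, dim)            # a batch of unlabeled data
    mask = torch.rand(8, dim) < 0.5    # randomly hide half the entries
    visible = x * (~mask)              # masked-out entries set to zero
    pred = model(visible)
    loss = ((pred - x)[mask] ** 2).mean()  # reconstruct only the hidden part
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```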

Beyond these theoretical motivations, SSL has also shown remarkable practical utility in training large language models built on transformer-based deep learning architectures. (Also read: Foundation Models: The Next Frontier of AI.)

2. Energy-Based Models

While various SSL approaches exist, such as autoencoding and contrastive learning, LeCun emphasizes the use of energy-based models (EBMs).

EBMs encode high-dimensional data, such as images, into low-dimensional embedding spaces that preserve only the relevant information. The models are trained to measure how compatible two observations are with one another; a low "energy" indicates a compatible pair. To this end, LeCun proposes an EBM-based learning architecture called the Joint Embedding Predictive Architecture (JEPA) for learning world models.

According to LeCun, a key feature of JEPA is that it can choose to ignore details that are not easily predictable. For example, in image processing, rather than predicting the state of the world at the pixel level, JEPA learns low-dimensional features that are relevant to the task at hand. LeCun also discusses how JEPA modules can be stacked on top of each other to form a "hierarchical JEPA" (H-JEPA), which could be crucial for complex capabilities such as reasoning and planning across multiple timescales.
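The sketch below conveys the joint-embedding idea in a highly simplified form: encode two related inputs, predict one embedding from the other, and treat their distance as the energy. It omits JEPA's latent variable and the regularization needed to prevent representational collapse, and all names are illustrative rather than taken from LeCun's paper.

```python
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    """Minimal joint-embedding predictive sketch: encode two related inputs
    (e.g. an image patch and a neighboring patch), predict the embedding of
    one from the other, and use their distance as the energy."""

    def __init__(self, input_dim: int, embed_dim: int = 32):
        super().__init__()
        self.encoder_x = nn.Linear(input_dim, embed_dim)
        self.encoder_y = nn.Linear(input_dim, embed_dim)
        self.predictor = nn.Linear(embed_dim, embed_dim)

    def energy(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        sx = self.encoder_x(x)                      # embedding of the context
        sy = self.encoder_y(y)                      # embedding of the target
        pred_sy = self.predictor(sx)                # predict target embedding
        return ((pred_sy - sy) ** 2).mean(dim=-1)   # low energy = compatible pair

# Training would minimize the energy of compatible (x, y) pairs while using
# regularization to keep the embeddings from collapsing to a constant.
jepa = ToyJEPA(input_dim=64)
x, y = torch.randn(4, 64), torch.randn(4, 64)
print(jepa.energy(x, y))
```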

Conclusion: The Road to Autonomous AI

While some researchers believe that artificial general intelligence (AGI) can be achieved by massively scaling deep learning architectures, LeCun says scaling is not enough to achieve autonomous AI. Although scaling has produced incredible advances in language models involving discrete data, it fails to achieve a similar impact on high-dimensional continuous data, such as videos. (Also read: Introduction to Natural Language Understanding (NLU) Technologies.)

LeCun is also not convinced that reward functions and reinforcement learning algorithms are enough to achieve AGI. He argues that reinforcement learning requires constant trial-and-error interaction with the environment, whereas humans and animals learn primarily through perception and observation.

Clearly, LeCun’s framework requires further exploration to address its implementation challenges.

