Published: 19 September 2025

Figure and the Go-Big Project: Revolutionizing Robot Learning

Figure is tackling a major challenge in robotics: how to teach humanoid robots to perform complex everyday tasks. Its answer: the Go-Big project.

💡 Key Point: Figure rejects the idea of a “YouTube for robots” and focuses on creating a unique learning database.

Unlike chatbots, which train on massive amounts of text from the web, Figure has chosen a different approach for its robots. The company favors capturing human movements and transferring them to its machines.
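Figure has not published its training code, but learning from captured human movements is commonly framed as behavior cloning: a policy network is trained to reproduce the actions a human demonstrated. The sketch below is a minimal, hypothetical illustration in PyTorch; the dimensions and the random tensors standing in for demonstration data are assumptions, not Figure's actual pipeline.

```python
# Minimal behavior-cloning sketch (hypothetical; not Figure's published code).
# A policy network learns to map observations from human demonstrations to the
# actions the human performed.
import torch
import torch.nn as nn

# Stand-in tensors for demonstration data: 1,000 (observation, action) pairs.
# In practice, observations might be visual features from POV video and
# actions might be recorded human motion (e.g. hand or arm trajectories).
obs_dim, act_dim = 128, 16
observations = torch.randn(1000, obs_dim)
actions = torch.randn(1000, act_dim)

policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    predicted = policy(observations)                   # actions the policy would take
    loss = nn.functional.mse_loss(predicted, actions)  # penalize deviation from human actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```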

Go-Big: An Unparalleled Database

Figure has partnered with Brookfield, owner of more than 100,000 homes, to build “the world’s largest humanoid pre-training dataset.”

💡 Key Point: Access to homes, offices, and warehouses diversifies learning environments and allows for exploration of various tasks.

This partnership provides Figure with privileged access to a variety of environments: apartments, houses, offices, and warehouses. This diversity is essential to explore a wide range of potential activities for robots, particularly in handling and logistics.

The Contribution of the First-Person Point of View

Go-Big’s innovation lies in its use of footage filmed from a first-person point of view (POV). Head-mounted cameras capture data that closely approximates the visual perception of a humanoid robot.

💡 Key Point: The first-person point of view (POV) is crucial for simulating a robot’s visual perception and optimizing its learning.

This data collection method, already underway, is expected to intensify in the coming months. The objective is to build a massive and diversified database, similar to Wikipedia for language, YouTube for video, or ImageNet for computer vision, but specifically dedicated to robotics.
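Figure has not disclosed how these POV recordings are stored or annotated. Purely as an illustration, one clip in such a dataset might be described by a record like the one below; the PovClip class, its fields, and the example values are assumptions, not Figure's actual data format.

```python
# Hypothetical schema for one egocentric (POV) demonstration clip, sketching
# what head-mounted capture across homes, offices and warehouses might record.
from dataclasses import dataclass, field

@dataclass
class PovClip:
    video_path: str          # head-mounted camera footage, robot-like viewpoint
    environment: str         # "home", "office" or "warehouse"
    task: str                # e.g. "take out the trash", "load a shelf"
    fps: float = 30.0
    hand_poses: list = field(default_factory=list)  # per-frame wrist/finger keypoints, if tracked
    camera_height_m: float = 1.6                     # approximate eye height of the wearer

clip = PovClip(video_path="clips/kitchen_0001.mp4",
               environment="home",
               task="take out the trash")
print(clip.environment, clip.task)
```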

Pre-training: General Training for Specific Tasks

Pre-training consists of exposing an AI model to a large amount of generic data so that it acquires basic skills. This process is crucial to prepare the robot for more specific tasks, such as taking out the trash, cleaning, or performing handling operations.

“Every major advance in the field of machine learning is the result of exploiting large and varied datasets.” – Figure
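In practice, this idea is often implemented in two stages: a model is first trained on a large generic dataset, then adapted on a smaller task-specific one. The PyTorch sketch below illustrates that generic pre-train-then-fine-tune pattern with random stand-in data; it is an assumption-laden illustration of the concept, not Figure's pipeline.

```python
# Minimal sketch of pre-training followed by task-specific fine-tuning
# (hypothetical; illustrates the two-stage idea only).
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU())

# 1) Pre-training on a large, generic dataset (random stand-in tensors here).
generic_obs = torch.randn(5000, 128)
generic_act = torch.randn(5000, 16)
pretrain_head = nn.Linear(256, 16)
opt = torch.optim.Adam(list(backbone.parameters()) + list(pretrain_head.parameters()), lr=1e-3)
for _ in range(5):
    loss = nn.functional.mse_loss(pretrain_head(backbone(generic_obs)), generic_act)
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2) Fine-tuning on a small dataset for one specific task (e.g. "take out the trash").
task_obs = torch.randn(200, 128)
task_act = torch.randn(200, 16)
task_head = nn.Linear(256, 16)                           # new task-specific head
opt = torch.optim.Adam(task_head.parameters(), lr=1e-3)  # only the head is updated; the backbone is reused as-is
for _ in range(20):
    with torch.no_grad():
        feats = backbone(task_obs)                       # features from the pre-trained backbone
    loss = nn.functional.mse_loss(task_head(feats), task_act)
    opt.zero_grad()
    loss.backward()
    opt.step()
```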

Go-Big Project Summary Table

Objective: Create the world’s largest humanoid pre-training dataset.
Partner: Brookfield, owner of more than 100,000 homes.
Method: Capture of human movements from a first-person point of view (POV).
Environments: Homes, offices, warehouses.
Applications: Household tasks, handling, logistics.

Figure’s Go-Big project marks a significant step in the evolution of robotics. By focusing on an innovative learning approach, Figure paves the way for more efficient humanoid robots capable of performing increasingly complex tasks.


❓ Frequently Asked Questions

How does Figure plan to limit bias in the data collected for Go-Big?

Figure plans to use data augmentation and normalization techniques to minimize bias. Comparative analysis of the data against alternative sources and the integration of feedback mechanisms into the robot learning process are also being considered.

Why does Figure explicitly reject the “YouTube for robots” approach? What are the limitations of this approach according to them, and how is the Go-Big method more relevant for humanoid robot learning?

The “YouTube for robots” approach lacks structure and consistency for effective robot learning. Go-Big, by focusing on structured POV data, offers a more controlled and relevant learning environment for the tasks targeted by Figure.

What are the consequences of using POV data on the hardware development of Figure’s humanoid robots? Does this influence the choice of sensors, cameras, and image processing algorithms?

The use of POV data imposes specific constraints on the hardware. The robots must be equipped with cameras and sensors that replicate human perception. Image processing and learning algorithms are optimized to exploit POV data, favoring an integrated hardware and software architecture.

Does the partnership with Brookfield really guarantee the diversity of the data collected? How does Figure plan to handle situations not represented in Brookfield environments, such as specific industrial environments or emergency situations?

While Brookfield offers some diversity, Figure is exploring other partnerships to supplement the data. Virtual simulations and transfer learning techniques could compensate for the lack of data for specific situations not covered by Brookfield environments.