Published: 6 December 2025
Updated: 18 hours ago
Reliability: ✓ Verified sources
I update this article as soon as new information becomes available.
Artificial intelligence, ubiquitous as it is, both fascinates and raises questions. It is often credited with quasi-human abilities, an artificial “intelligence” capable of solving complex problems. Yet a simple request can expose the gaping flaws of these systems. This is the disturbing experiment I conducted, confronting a generative AI with a task of childlike simplicity: drawing a skyscraper and a trombone while respecting their relative proportions. The result? A festival of visual absurdities that challenges our blind trust in these algorithms.
🤖 The Skyscraper Trombone Experiment: A Revealing Test
The starting idea was simple: to test an AI’s ability to understand and represent the real world. I therefore submitted the following request to Gemini, Google’s AI: “Draw me a skyscraper and a trombone side by side so that we can appreciate their respective sizes.” The objective was to check whether the AI was capable of grasping the colossal scale difference between these two objects. What happened next was, to say the least… surprising.
Instead of a coherent scene where the skyscraper towers over the trombone, I got an image in which the two objects were of comparable size, or even one where the trombone was larger than the skyscraper. Visual nonsense that highlights a fundamental shortcoming: the AI, despite its computing power, cannot grasp the concepts of size, scale, and proportion the way a human does.
Key Point: This experiment highlights the fact that generative AIs excel at reproducing statistical patterns, but fail when it comes to transposing these patterns into a context requiring an understanding of the real world.
🤔 Statistical Correlations vs. Understanding of the World
The reason for this failure lies in how these AIs are built. They are trained on massive amounts of data, from which they learn to identify statistical correlations. In other words, they spot recurring patterns in the data and reproduce them. However, they have no “understanding” of the meaning of this data. They don’t know what a skyscraper is, what a trombone is, or why one is much larger than the other. They simply juxtapose visual elements based on the statistical probabilities learned during their training.
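To make this concrete, here is a toy sketch of what “reproducing statistical patterns” means. This is deliberately simplistic (a bigram counter, nothing like Gemini's actual architecture, and the tiny corpus is invented for illustration), but the principle is the same: the model learns which tokens tend to follow which, and emits the most likely continuation with no notion of what the words mean.

```python
from collections import Counter, defaultdict

# Invented toy corpus: the "model" only ever sees these words.
corpus = "the trombone is small the skyscraper is tall the trombone is brass".split()

# Count which word follows which (bigram statistics).
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word: str) -> str:
    # Emit the most frequent continuation seen in training data --
    # pure pattern reproduction, zero understanding of meaning.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "trombone": it followed "the" twice, vs once for the others
```

The sketch never encodes that a skyscraper is tall or a trombone is small; it only encodes which word sequences were frequent. That is exactly why such a system can juxtapose a skyscraper and a trombone without any sense of their relative sizes.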
This is why, when asked to perform a task that falls outside their learning domain (for example, comparing objects of very different sizes), they produce absurd results. They are capable of generating visually pleasing images, but devoid of any logic or internal coherence.
🤯 The Limits of “Chain of Thought”
Some developers have tried to overcome these limitations by integrating “Chain of Thought” (CoT) mechanisms into AIs. The idea is to force the AI to break down the problem into smaller steps and justify its reasoning. However, even with CoT, AIs remain prone to gross errors.
A striking example is the question of whether 1776 was a leap year. Even after breaking the question into steps, the AI can give a contradictory answer, a sign of superficial reasoning and the absence of a real conceptual model of the world.
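For reference, the rule the model stumbles over is trivial to state in code. Here is a minimal sketch of the Gregorian leap-year rule, which confirms that 1776 was indeed a leap year:

```python
def is_leap(year: int) -> bool:
    # Gregorian rule: divisible by 4, except century years
    # not divisible by 400 (1900 is not a leap year, 2000 is).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(1776))  # True: divisible by 4 and not a century year
```

A three-line function applies the rule consistently every time; a chain-of-thought model can recite the rule in one step and contradict it in the next, because it is completing text rather than executing logic.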
🖼️ A Summary Table
To better understand the strengths and weaknesses of generative AIs, here is a comparative table:
| Characteristic | Strengths | Weaknesses |
|---|---|---|
| Image generation | Creation of realistic and aesthetic images | Difficulty respecting proportions and spatial relationships |
| Language processing | Production of coherent and fluent texts | Lack of understanding of meaning and context |
| Reasoning | Ability to break down complex problems (with CoT) | Subject to logical errors and contradictions |
⚠️ Implications and Perspectives
These limitations of generative AIs have important implications. They highlight the need not to overestimate the capabilities of these technologies and to always be critical of their productions. It is crucial to understand that these AIs are not “intelligences” in the strict sense, but sophisticated tools capable of reproducing statistical patterns. Their use therefore requires constant vigilance and human validation.
The future of AI may lie in a different approach, which would emphasize the development of conceptual models of the world and a real understanding of the meaning of data. In the meantime, it is essential to remain aware of the current limitations of these technologies and to use them with discernment.