Published: 19 December 2025
Updated: 11 hours ago
Reliability: ✓ Verified sources
I update this article as soon as new information becomes available.
Google claims it can detect videos generated by its Gemini AI. Is this a significant step forward in the fight against disinformation, or just a marketing tool to reassure users? My analysis: it’s a start, but the limitations are obvious.
🤖 Gemini, Google’s In-House Detective
Google has announced that Gemini can now identify videos produced by its own AI models. The idea is simple: since Google creates the AI, Google can also create the tool to detect it. It’s a bit like a counterfeiter building a detector that only works on his own counterfeit bills.
A month after integrating AI image recognition, Google is extending this capability to videos. The system relies on SynthID, a digital watermark embedded in Gemini-generated content. To check a video, simply submit it to Gemini and ask the fateful question: “Was this video generated using Google AI?”
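The article describes the consumer flow in the Gemini app, but the same question could in principle be posed programmatically. Here is a minimal sketch using the google-generativeai Python SDK; the model name, file name, and prompt wording are illustrative assumptions, not an official detection endpoint:

```python
# Minimal sketch: asking Gemini whether a video carries a SynthID mark.
# Assumes the google-generativeai SDK and a valid API key; the model name
# and prompt are illustrative, not a dedicated detection interface.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Upload the video and wait until the file is processed server-side.
video = genai.upload_file(path="suspect_clip.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [video, "Was this video generated using Google AI?"]
)
print(response.text)  # Gemini answers based on whether it finds SynthID
```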
All videos and images created by Gemini carry a SynthID watermark: a kind of digital fingerprint embedded in the content itself, which makes Gemini’s detection work possible.
If SynthID is detected, Gemini will confirm it. But the absence of SynthID proves nothing: the video may have been generated by another AI, or the watermark may have been stripped. In other words, the system is reliable for identifying *its own* creations, but powerless against everyone else’s.
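The verdict is therefore asymmetric: a positive detection is informative, a negative one is not. A tiny sketch of that logic (the names are hypothetical, purely for illustration):

```python
# Toy model of the asymmetric verdict: presence of the watermark is proof,
# absence is merely inconclusive. Names are hypothetical, for illustration.
from enum import Enum

class Verdict(Enum):
    GOOGLE_AI = "Generated with Google AI (SynthID found)"
    INCONCLUSIVE = ("No SynthID found: could be another AI, "
                    "a stripped watermark, or genuine footage")

def check_video(synthid_present: bool) -> Verdict:
    # Only a positive detection carries real information.
    return Verdict.GOOGLE_AI if synthid_present else Verdict.INCONCLUSIVE
```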
Key Point: Detection is limited to Gemini’s own output, much like an antivirus that only recognizes viruses it wrote itself.
🛡️ Vulnerabilities and Bypasses
This is where the problem lies. The article highlights a major limitation: Gemini only detects what it has created itself. Yet the generative AI market is extremely fragmented: dozens of models exist, and new ones appear every day. Relying solely on SynthID means ignoring a whole swath of the landscape.
Moreover, the article rightly raises the question of SynthID removal. “We have no doubt that online tools will already offer to remove this metadata to make AI content detection even more difficult.” It’s an arms race: Google creates a watermark, and others seek to erase it. This engineering perspective is a reminder that no security measure is foolproof and that attack vectors will always exist.
The effectiveness of SynthID therefore depends on its robustness against removal attempts. If removal is trivial, the tool becomes useless. If it is complex and costly, it could deter some malicious actors. The question of scale is also important: the more Gemini is used, the more relevant SynthID becomes.
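How robust can such a mark be? SynthID’s actual algorithm is proprietary, but the general idea behind pixel-level watermarking can be illustrated with a toy spread-spectrum scheme: add a faint, key-derived pseudo-random pattern to the pixels, then detect it by correlation. The sketch below (plain numpy; every parameter is invented for illustration and has nothing to do with SynthID itself) shows why such a mark can survive mild degradation such as compression-like noise:

```python
# Toy spread-spectrum watermark -- NOT SynthID (whose algorithm is
# proprietary), just an illustration of why a mark embedded in pixel
# values can survive mild degradation such as compression noise.
import numpy as np

def embed(image: np.ndarray, key: int, strength: float = 8.0) -> np.ndarray:
    """Add a faint key-derived pseudo-random pattern to the pixels."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * pattern

def detect(image: np.ndarray, key: int) -> float:
    """Correlate the image with the key's pattern; a high score means 'marked'."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return float(np.mean(image * pattern))

rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(128, 128))
marked = embed(original, key=1234)

# Crude "removal attempt": heavy additive noise, as re-encoding might cause.
attacked = marked + rng.standard_normal(marked.shape) * 10.0

print(detect(original, key=1234))  # near 0: no watermark
print(detect(marked, key=1234))    # near 8 (the strength): watermark present
print(detect(attacked, key=1234))  # still near 8: survives the mild attack
```

A determined attacker would of course go further (cropping, filtering, regenerating the content entirely), which is precisely the arms race the article describes.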
⚖️ Transparency or PR Stunt?
Google insists on the need for a universal AI content marking standard. It’s a call for industry collaboration. But in the meantime, the company is content to offer a solution limited to its own products. Is that enough?
One could argue that it’s a first step, a way to set an example. However, it can also be seen as a marketing move to polish the image of a company often criticized for its lack of transparency around AI. Google is positioning itself as a responsible player, committed to fighting disinformation. But the facts are stubborn: the tool only addresses a tiny part of the problem.
🔮 Projection and Risks
Optimistic Scenario: A universal standard for marking AI content emerges and is adopted by all major players. Detection tools become increasingly effective, making information manipulation harder and more costly. Public confidence in online content rises. The AI value chain gets cleaned up.
Pessimistic Scenario: The arms race between AI creators and detectors intensifies. Watermark-removal techniques grow ever more sophisticated, rendering detection ineffective. Disinformation proliferates, fueling polarization and mistrust. AI becomes a weapon of mass manipulation, and inferring the truth from incomplete information becomes an insurmountable challenge.