Publié : 16 November 2025
Actualisé : 6 days ago
Fiabilité : ✓ Sources vérifiées
Je mets à jour cet article dès que de nouvelles informations sont disponibles.
📋 Contents
- 🕵️♀️ Ready to spy on an AI? The Gandalf challenge awaits!
- 🔓 What is the “Jailbreak” of an AI? More than a simple trick!
- 🎮 Gandalf: The game that turns the curious into cyber-investigators!
- 🛡️ When AIs learn to defend themselves: a race against time!
- 🪜 Eight levels to master the art of digital persuasion!
- 🧠 Ready to sharpen your critical thinking skills? Now it’s your turn!
🕵️♀️ Ready to spy on an AI? The Gandalf challenge awaits!
Artificial intelligence is everywhere, but what about its secrets? Are they really well guarded? Imagine for a moment being able to test the limits of an AI’s security, pushing it to its limits? That’s exactly what Lekara, a cybersecurity company, is proposing with this amazing game. Forget boring tutorials and complex theories: here, you get your hands dirty in the digital world. Your mission, should you decide to accept it, is to make a large language model (LLM) confess to a secret password. Hang on, it’s going to be fun and instructive!
🔓 What is an AI Jailbreak? More than just a simple trick!
The term “jailbreak” may already speak to you for smartphones, but applied to an AI, it’s even more fascinating. It’s all about bypassing the protections, the “safeguards” that developers have patiently put in place. These barriers are there to prevent the AI from divulging sensitive information, generating malicious content or simply doing anything at all. Why take an interest in this? Because understanding how these protections can be broken also means understanding how to reinforce them. It’s a bit of a cat-and-mouse game, where data security is the main issue. You wouldn’t want your favorite chatbot to reveal your most intimate secrets, would you?
Important: Jailbreaking an AI aims to remove built-in restrictions or censorship, leading the model to respond to normally forbidden requests. A double-edged skill, but crucial for security!
🎮 Gandalf: The game that turns the curious into cyber-investigators!
The playground is called “Gandalf”. Yes, like the wizard. Developed by Lekara, this challenge plunges you straight into the heart of the matter. The AI, your opponent for the day, has a password which it is supposed to guard carefully. Your objective? Get it out of him by any means necessary! Level 1? This is the sandbox. The AI has no safeguards, no protection. It’s the perfect opportunity to get your bearings, test direct approaches, and see just how… talkative a defenseless AI can be. But make no mistake, the difficulty climbs fast!
🛡️ When AIs learn to defend themselves: a race against time!
It would be a mistake to think that AIs are naive. Security teams are hard at work! Every day, they observe usage, detect vulnerabilities and adjust defenses. It’s a real digital arms race between the “hackers” and the “guards”, as Adrien Merveille, Technical Director France at Check Point, has already pointed out:
“At the very beginning, people said that ChatGPT could create a phishing e-mail. Very quickly, vendors such as OpenAI put safeguards in place to ensure that their engines were aware that they could be used for malicious purposes.
A clear observation: the era of defenseless AI is over. Or almost. This is where Gandalf becomes interesting: he shows just how far we’ve come
🪜 Eight levels to master the art of digital persuasion!
Gandalf isn’t a sprint, it’s a marathon of cunning. The game features eight levels of increasing difficulty. Each adds layers of protection, from semantic filters to complex content checks. You’ll need to hone your techniques, think “outside the box” and demonstrate boundless creativity to reach the Holy Grail: the password!
| Level | Description of Defenses | Involved Strategy |
|---|---|---|
| 1 | No safeguards | Direct attacks, basic social engineering |
| 2-4 | Basic filters, keyword detection | Reformulation, indirect questions |
| 5-7 | Semantic, contextual analysis | Stories, role-playing, complex scenarios |
| 8 | Advanced defenses, attempt detection | Lateral thinking, extreme perseverance |
Key point to remember: Each level is a security lesson. By trying to break the defenses, you learn how they work and why they’re necessary
🧠 Ready to sharpen your critical thinking skills? Now it’s your turn!
More than just entertainment, Gandalf is an invaluable educational experience. It pushes you to understand the mechanisms of AI, its vulnerabilities and the complexity of securing it. So, if you’re keen to test your cunning, explore the hidden recesses of artificial intelligence and help improve its security (indirectly, of course!), the Gandalf challenge is for you. Get ready to manipulate, persuade and laugh a little too. Who will be the next to make the AI talk?




















0 Comments