Gemini now lets you check whether short videos were created or edited with Google’s own AI tools by scanning them for an invisible SynthID watermark. The feature makes it easier to spot AI content and understand exactly which parts of a clip were machine-generated.
Here’s how it works. In the Gemini app, you can upload a video up to 100MB and 90 seconds long, then ask questions like “Was this generated using Google AI?” or “Is this AI-generated?” Gemini analyzes both the visual and audio tracks for SynthID, Google’s imperceptible digital watermark embedded into media made with its AI models.
The coolest part? You don’t just get a yes or no answer. Gemini tells you which specific segments include AI content. For example, “SynthID detected within the audio between 10-20 seconds. No SynthID detected in the visuals.” This level of detail shows you exactly which parts were AI-made and which were real footage.
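The announcement only describes the consumer app flow, but for anyone who’d rather script the check, here’s a minimal sketch of how the same upload-and-ask pattern might look against the Gemini API using Google’s google-genai Python SDK. Treat it as illustrative: the model name, the file path, and whether the API surfaces SynthID detection the way the app does are all assumptions, not something Google has confirmed here.

```python
import time

from google import genai

# Assumption: SynthID checks work through the API the same way they
# do in the Gemini app; the announcement only covers the app.
client = genai.Client(api_key="YOUR_API_KEY")

# Upload the clip (hypothetical path). We assume the app's limits
# (100MB, 90 seconds) are a sensible ceiling here too.
video = client.files.upload(file="suspicious_clip.mp4")

# Uploaded videos are processed asynchronously; poll until ready.
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = client.files.get(name=video.name)

# Ask the same question you'd type into the app.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[video, "Was this generated using Google AI?"],
)

# Expect a plain-text answer, e.g. "SynthID detected within the
# audio between 10-20 seconds. No SynthID detected in the visuals."
print(response.text)
```

If the model answers the way the app does, the segment-level breakdown comes back as free text rather than structured data, so any automation built on top of it would need to parse that prose.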
The Catch
The system only works on content made with Google’s own AI tools. Google bakes SynthID watermarks into its AI generators, so Gemini can reliably verify those videos. But it can’t catch deepfakes made with other tools: if a video carries no SynthID watermark, Gemini can’t say whether it’s genuine footage or something generated with non-Google AI.
Still, as AI video tools become more powerful and realistic, telling what’s real gets harder every day. This feature gives everyday users a simple “upload and ask” check from their phone or browser. Google is pushing for transparency here, much as other platforms have moved toward labeling and disclosing AI-generated content.
You can quickly verify suspicious clips shared on social media or messaging apps, which is especially useful for short viral videos that might be AI-fabricated. Creators and brands using Google AI tools can also point to Gemini’s verification as proof of a clip’s provenance, which may help with disclosure requirements or platform policies. It’s a practical step forward in making AI content more transparent and traceable.