Mirage 2
Mirage 2 is an AI-driven Generative World Engine that lets anyone instantly transform images or descriptions into fully playable, interactive game environments directly in the browser. Upload sketches, concept art, photos, or prompts such as “Ghibli-style village” or “Paris street scene”, and Mirage 2 builds an immersive world you can explore in real time. The experience isn’t pre-scripted: you can modify the world mid-play with natural-language chat, shifting the setting from a cyberpunk city to a rainforest or a mountaintop castle, all with minimal latency (around 200 ms) on a single consumer GPU. Mirage 2 supports smooth rendering, real-time prompt control, and extended gameplay sessions of more than ten minutes. It goes beyond earlier world-model systems by offering general-domain generation with no limit on styles or genres, along with seamless world adaptation and sharing features.
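To make the interaction pattern concrete, here is a minimal, purely illustrative Python sketch of a play session that starts from a prompt and accepts natural-language edits mid-play. The names used here (WorldSession, apply_edit, generate_frame) are hypothetical stand-ins for illustration, not Mirage 2's actual API.

```python
# Illustrative only: hypothetical stand-ins, not Mirage 2's real interface.
import time
from dataclasses import dataclass, field


@dataclass
class WorldSession:
    """Holds the evolving world description for one play session (hypothetical)."""
    prompt: str                                      # initial world description or image caption
    edits: list[str] = field(default_factory=list)   # natural-language edits applied mid-play

    def apply_edit(self, instruction: str) -> None:
        # Mid-play edits accumulate; generation is re-conditioned on them.
        self.edits.append(instruction)

    def generate_frame(self) -> str:
        # Stand-in for the real-time frame generator (reported at roughly 200 ms
        # per step on a single consumer GPU); here it just returns a text placeholder.
        time.sleep(0.2)
        return f"frame conditioned on: {self.prompt!r} + edits {self.edits}"


session = WorldSession(prompt="Ghibli-style village at dusk")
print(session.generate_frame())
session.apply_edit("turn the village into a rainy cyberpunk city")
print(session.generate_frame())
```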
Gemini Diffusion
Gemini Diffusion is our state-of-the-art research model exploring what diffusion means for language and text generation. Large language models are the foundation of generative AI today; we’re using a technique called diffusion to explore a new kind of language model that gives users greater control, creativity, and speed in text generation. Diffusion models work differently: instead of predicting text directly, they learn to generate outputs by refining noise, step by step. This means they can iterate on a solution very quickly and error-correct during the generation process, which helps them excel at tasks like editing, including in the context of math and code. Gemini Diffusion generates entire blocks of tokens at once, so it responds more coherently to a user’s prompt than autoregressive models. Its performance on external benchmarks is comparable to much larger models, whilst also being faster.
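The following toy Python sketch illustrates the general idea of diffusion-style text generation: a whole block of tokens is refined in parallel over several steps, so earlier mistakes can still be corrected, unlike strictly left-to-right decoding. It is a conceptual illustration only and does not reflect Gemini Diffusion's actual model, vocabulary, or training.

```python
# Conceptual toy only: not Gemini Diffusion's actual algorithm or API.
import random

random.seed(0)
TARGET = "the cat sat on the mat".split()           # toy "ideal output" the model should reach
VOCAB = sorted(set(TARGET)) + ["[MASK]"]


def denoise_step(block: list[str], strength: float) -> list[str]:
    """One parallel refinement step: every position may be re-predicted,
    so earlier mistakes can still be corrected."""
    out = []
    for i, tok in enumerate(block):
        if tok == "[MASK]" or random.random() < strength:
            # Re-predict this position; the toy "model" guesses the right token with
            # probability `strength`, otherwise samples a random vocabulary token.
            out.append(TARGET[i] if random.random() < strength else random.choice(VOCAB))
        else:
            out.append(tok)
    return out


# Start from pure noise (all masks) and refine the entire block step by step.
block = ["[MASK]"] * len(TARGET)
for step in range(1, 6):
    block = denoise_step(block, strength=step / 5)  # later steps correct more aggressively
    print(f"step {step}: {' '.join(block)}")
```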
Ray2
Ray2 is a large-scale video generative model capable of creating realistic visuals with natural, coherent motion. It has a strong understanding of text instructions and can take images and video as input. Ray2’s advanced capabilities come from being trained on Luma’s new multi-modal architecture, scaled to 10x the compute of Ray1. Ray2 marks the beginning of a new generation of video models that produce fast, coherent motion, ultra-realistic detail, and logical event sequences, which increases the success rate of usable generations and makes videos generated by Ray2 substantially more production-ready. Text-to-video generation is available in Ray2 now, with image-to-video, video-to-video, and editing capabilities coming soon. Ray2 brings a whole new level of motion fidelity: smooth, cinematic visuals that let you craft breathtaking scenes with precise camera movements and tell your story on screen.
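As a rough illustration of how a text-to-video workflow like this is typically driven, here is a Python sketch of a submit-then-poll job loop. The function names, job states, payload fields, and URL below are assumptions made for illustration; they are not Luma's documented Ray2 API.

```python
# Hypothetical workflow sketch; not Luma's documented Ray2 API.
import time


def submit_generation(prompt: str, model: str = "ray-2") -> dict:
    """Stand-in for an HTTP POST that would queue an asynchronous video generation job."""
    return {"id": "job-123", "state": "queued", "prompt": prompt, "model": model}


def poll_until_done(job: dict, interval_s: float = 2.0) -> dict:
    """Stand-in for polling a job endpoint until the rendered video is ready."""
    for state in ("queued", "generating", "completed"):   # hypothetical state names
        job["state"] = state
        if state != "completed":
            time.sleep(interval_s)
    job["video_url"] = "https://example.com/output.mp4"    # placeholder result URL
    return job


job = submit_generation("A slow dolly shot through a rain-soaked neon alley at night")
result = poll_until_done(job, interval_s=0.1)
print(result["state"], result["video_url"])
```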
Mirage by Captions
Mirage by Captions is the world's first AI model designed to generate UGC (user-generated-content-style) video. It generates original actors with natural expressions and body language, completely free from licensing restrictions. With Mirage, you’ll experience your fastest video creation workflow yet: from a single prompt, it generates a complete video from start to finish, instantly creating the actor, background, voice, and script. Because its actors are AI-generated and free of rights restrictions, Mirage unlocks limitless, expressive storytelling. Scaling video ad production has never been easier: marketing teams cut costly production cycles, reduce reliance on external creators, and focus more on strategy. No actors, studios, or shoots are needed; enter a prompt and Mirage generates a full video, from script to screen, skipping the legal and logistical headaches of traditional video production.
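Below is a hypothetical sketch of how a marketing team might batch prompt-to-video requests for ad variants. The request fields (script_brief, actor_style, background, voice) and the helper build_requests are illustrative assumptions, not Mirage's actual interface.

```python
# Illustrative only: hypothetical request shape, not Mirage's documented interface.
from dataclasses import dataclass, asdict


@dataclass
class VideoRequest:
    """One prompt-to-video ad variant (field names are illustrative assumptions)."""
    script_brief: str        # what the generated actor should say or pitch
    actor_style: str         # e.g. "casual creator, selfie framing"
    background: str          # e.g. "home office", "city street"
    voice: str = "natural"   # delivery style for the generated voice


def build_requests(product: str, hooks: list[str]) -> list[dict]:
    """Turn a list of ad hooks into one full-video request per variant."""
    return [
        asdict(VideoRequest(
            script_brief=f"{hook} -- then introduce {product} in one sentence",
            actor_style="casual creator, selfie framing",
            background="home office",
        ))
        for hook in hooks
    ]


for request in build_requests("a budgeting app", ["Stop overspending", "Payday feels different now"]):
    print(request)
```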