
#112. 📺 OpenAI's text-to-video 🔎 OpenAI's AI web search 📺 Google's AI video

OpenAI's Sora t2v โ€ข OpenAI's Google competitor? โ€ข Google's Lumiere t2v

  • OpenAI unveils Sora, a new AI model capable of creating realistic videos from text instructions.

  • Sora allows users to generate videos up to a minute long, featuring complex scenes, characters, and motions based on written prompts.

  • The AI model can also transform still images into videos, extend existing videos, and fill in missing frames.

  • Demonstrations of Sora include diverse scenarios like aerial views of historical events and urban life, showcasing its creative potential.

  • The model, however, has limitations in simulating complex physics and cause-effect relationships in scenes.

  • Sora is currently in a testing phase with limited access to artists and filmmakers, and OpenAI acknowledges the potential risks of AI-generated realistic videos.

  • OpenAI is reportedly developing a web search product, potentially challenging Google's dominance in the search market.

  • The search service is said to be partly powered by Bing and would put OpenAI in direct competition with Google.

  • OpenAI's move follows the integration of AI into Microsoft's Bing and the emergence of AI search startups like Perplexity.

  • It's unclear whether this product will be separate from ChatGPT, which also uses Bing's web index.

  • The launch of an OpenAI search service could intensify the rivalry with Google, especially in the conversational AI space.

  • OpenAI plans to introduce other products, including "agents" for automating complex tasks, alongside exploring the search engine domain.


Future Perfect is brought to you by my law practice serving tech startups and providing solutions at the intersection of AI and copyright law. Come say hi at MarcHoagLaw.com or click here to watch my lectures about AI and copyright law or here for my Axiomic Legal Toolkit where you can self-educate on AI and tech startups (and more!)

  • Google introduces Lumiere, an AI video generation model built on a new diffusion architecture called Space-Time U-Net (STUNet).

  • Lumiere creates seamless video motion from text prompts by understanding the spatial and temporal dynamics of objects.

  • The model generates more fluid and realistic motion than other platforms, producing 80 frames compared with the 25 produced by Stable Video Diffusion.

  • Comparisons with Runway show Lumiere's advanced capabilities in creating lifelike movements and textures.

  • Lumiere can also generate videos from images, apply specific styles, create cinemagraphs, and perform video inpainting.

  • Despite its advancements, Google acknowledges the risk of misuse of such technology for creating fake or harmful content and emphasizes the need for safety measures.


See you next time! 👋

What'd you think of today's email?

This helps me improve it for you!


๐Ÿ™ PLEASE SHARE this post if you liked it!

Thanks for reading!

-Marc 👋

Looking for past newsletters? You can find them all here.
