Stable Diffusion generates video from a few words

Stable Diffusion, the open source content generation model, is moving to video. A first version, intended for researchers, is available ahead of a wider rollout to the general public.

Generative artificial intelligence can already create images, text and even music from a few words. Stable Diffusion now adds another string to its bow: video. Simply describe what you want to see in motion, and the model generates a short clip.

Will the next blockbuster be created by Stable Diffusion?

This first generative video model is built on the Stable Diffusion image model, which can create illustrations in a range of styles, logos, posters and more. Stable Video Diffusion adapts to many situations, such as generating multiple views of the same subject from a single image.

Stable Video Diffusion can generate videos of 14 or 25 frames, at configurable frame rates between 3 and 30 frames per second.
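For a sense of scale, the clip lengths these settings imply follow from simple arithmetic (a plain sketch based on the figures above, not part of the model itself):

```python
# Clip duration = frame count / frames per second.
# Stable Video Diffusion outputs 14 or 25 frames, at 3 to 30 fps.
def clip_duration_seconds(num_frames: int, fps: int) -> float:
    return num_frames / fps

# Longest possible clip: 25 frames at the minimum 3 fps.
longest = clip_duration_seconds(25, 3)    # ~8.33 seconds
# Shortest possible clip: 14 frames at the maximum 30 fps.
shortest = clip_duration_seconds(14, 30)  # ~0.47 seconds
print(f"{longest:.2f}s down to {shortest:.2f}s")
```

Even at the most generous settings, clips top out at just over eight seconds, which puts the "next blockbuster" question in perspective.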

Researchers can already try this new feature: the code is available on GitHub, and a research paper detailing how the model works has been published alongside it. The general public and the curious can sign up to a mailing list for access to the future web interface. The potential applications are numerous: education, entertainment, advertising, and more.

Unlike models such as DALL-E or Midjourney, Stable Diffusion is more accessible thanks to its open source nature. On the other hand, the quality of its output varies, and content generated by the model may raise copyright issues.

Source: Stable Diffusion

Daily Global Times