Animated Diffusion | Generative AI

Below are some animated GIFs created with generative AI using a text-to-image method. We can generate visual loops in either GIF or H.264 format.

Using Stable Diffusion with the Automatic1111 web UI, we can achieve temporal consistency, unlike Midjourney, where each generation produces a different face.

(Temporal consistency refers to the stability and continuity of a sequence over time. In animation, video processing, and computer vision, temporal consistency is crucial for maintaining smooth, coherent sequences: for example, keeping the same face consistent across all angles, outfits, and expressions.)
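One toy way to see what this means in practice: measure the average per-pixel change between consecutive frames. A flickering sequence (a new identity every frame) scores high, while a stable loop scores low. This is only an illustrative sketch, not a metric used by any of the tools mentioned here; frames are represented as flat lists of pixel intensities for simplicity.

```python
# Toy temporal-consistency measure: mean absolute difference
# between consecutive frames. Lower = smoother, more consistent.

def frame_difference(a, b):
    """Mean absolute per-pixel difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def temporal_inconsistency(frames):
    """Average difference across all consecutive frame pairs."""
    diffs = [frame_difference(f1, f2) for f1, f2 in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

# A stable loop: the same content with tiny variations per frame.
stable = [[100, 100, 100], [101, 100, 99], [100, 101, 100]]
# A flickering sequence: content changes drastically every frame.
flicker = [[100, 100, 100], [10, 200, 50], [180, 20, 140]]

assert temporal_inconsistency(stable) < temporal_inconsistency(flicker)
```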

Node Based Workflows

We use ComfyUI, a powerful node-based, modular Stable Diffusion GUI and backend. This lets us use custom checkpoint models, LoRAs, and ControlNet methods, unlike consumer web-based tools such as RunwayML and Pika Labs, which rely on preset methods that can limit customisation.
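A ComfyUI workflow is ultimately just a graph serialised as JSON: each node has a class type and inputs, and links to another node's output are written as a `[node_id, output_index]` pair. The sketch below builds a minimal text-to-image graph in that API format; the checkpoint and LoRA file names are placeholders, and node parameters are illustrative rather than taken from our actual workflow.

```python
# Sketch of a minimal ComfyUI workflow in its JSON "API format".
# Each key is a node id; [node_id, output_index] references link
# one node's output to another's input.

import json

workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "3dAnimation.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "style.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.8}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1], "text": "a dancer, 3d shaded render"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1], "text": "blurry, deformed"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 768, "batch_size": 16}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0], "positive": ["3", 0],
                     "negative": ["4", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
}

# Because the graph is plain JSON, it can be saved, diffed, shared,
# or POSTed to a running ComfyUI server's /prompt endpoint.
print(json.dumps(workflow, indent=2)[:80])
```

Swapping the checkpoint, LoRA, or sampler is just editing a value in this graph, which is what makes the node-based approach more customisable than preset-driven web tools.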

Using a Reference Video

DW OpenPose is a cutting-edge computer vision library specializing in human pose estimation.

We can capture human motion from any video, eliminating the need for a motion-capture suit or manual keyframe animation in 3D software.

Integrating DW OpenPose into your projects can enhance character animation and create lifelike movements. Its robust capabilities make it a go-to solution for crafting visually stunning and realistic human interactions in your creative endeavors.

Dancing Girl

Here’s an example of OpenPose working together with HED to capture the dancer’s movements and apply them to a generative AI visual. We can apply any LoRA or checkpoint in safetensors format (a machine-learning file format) to achieve various visual aesthetics.

Example of the node workflow for the dancing girl.

Example showing Canny and DW OpenPose.

Using a 3dAnimation checkpoint (safetensors), we produced an animated, 3D-shaded model that accurately replicates the original motion.
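For the curious, the safetensors file format itself is simple: an 8-byte little-endian header length, a JSON header describing each tensor (dtype, shape, byte offsets), then the raw tensor data. The stdlib-only sketch below builds and parses a tiny in-memory blob as a stand-in for a real checkpoint; it is an illustration of the format, not code from our pipeline.

```python
# Minimal safetensors header round-trip using only the stdlib.
# Layout: <8-byte LE header length><JSON header><raw tensor data>.

import json
import struct

def build_safetensors(tensors):
    """Build a minimal safetensors blob from {name: raw_bytes}."""
    header, offset, data = {}, 0, b""
    for name, raw in tensors.items():
        header[name] = {"dtype": "F32",
                        "shape": [len(raw) // 4],  # float32 = 4 bytes
                        "data_offsets": [offset, offset + len(raw)]}
        offset += len(raw)
        data += raw
    hjson = json.dumps(header).encode("utf-8")
    return struct.pack("<Q", len(hjson)) + hjson + data

def read_header(blob):
    """Return the parsed JSON header of a safetensors blob."""
    (hlen,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8:8 + hlen].decode("utf-8"))

# A toy "checkpoint" with one float32 tensor of two elements.
blob = build_safetensors({"lora.weight": b"\x00" * 8})
print(read_header(blob))
```

Because the header can be read without loading the tensor data, tools can list a checkpoint's tensors (and spot a LoRA vs. a full model) almost instantly.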

More detailed workflow explanation here.

Contact us for more information.