4.9 • 848 Ratings
🗓️ 14 July 2025
⏱️ 72 minutes
This episode covers how to maintain character consistency, style consistency, and more in AI-generated video, through three workflows. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content. Indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control. Professional studios gain full control with a layered ComfyUI pipeline that outputs multi-layer EXR files for standard VFX compositing.
Workflow 1 (Prosumer): Veo 3 High-Quality Chaining
Goal: Rapidly produce branded, short-form video for social media. This method bypasses Veo 3's weaker native "Extend" feature.
- Clip 1: Generate an 8-second clip from a character sheet image.
- Extract Final Frame: Save the last frame of Clip 1.
- Clip 2: Use the extracted frame as the image input for the next clip, with a "this then that" prompt to continue the action. Repeat as needed.
- Music: Use structured prompt tags ([Genre: ...], [Mood: ...]) to generate and extend a music track.
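The "Extract Final Frame" step is easy to automate with ffmpeg. A minimal sketch, assuming ffmpeg is on your PATH; the clip filenames are hypothetical placeholders, and the command is built as a list so you can hand it to subprocess.run:

```python
def last_frame_cmd(clip_path, frame_path):
    """Build an ffmpeg command that saves the last frame of clip_path.

    -sseof -1 starts decoding one second before the end of the input;
    -update 1 keeps overwriting frame_path, so the final frame wins;
    -q:v 1 requests maximum image quality.
    """
    return [
        "ffmpeg", "-y",
        "-sseof", "-1",
        "-i", clip_path,
        "-update", "1",
        "-q:v", "1",
        frame_path,
    ]

# Chaining loop: each extracted frame becomes the image input for the
# next clip's generation (the Veo 3 call itself happens in its UI/API).
cmd = last_frame_cmd("clip_01.mp4", "clip_01_last.png")
# subprocess.run(cmd, check=True)
```

Building the command as an argument list (rather than one shell string) avoids quoting problems with filenames containing spaces.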
Workflow 2 (Indie Filmmaker): Midjourney V7 + Kling + Runway Gen-4
Goal: Create cinematic short films with consistent characters and storytelling focus, using a hybrid of specialized tools.
- Generate character and style references in Midjourney V7 using the --cref and --sref parameters.
- Reuse the character reference with --cref --cw 100 to create consistent character poses, and --sref to replicate the visual style in other shots. Assemble a reference set.
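A practical way to keep shots consistent is to template the prompt so every shot reuses the same reference flags. A minimal sketch; the scene text and reference URLs are hypothetical placeholders, and the flag behavior follows Midjourney's documented character/style reference parameters:

```python
def mj_prompt(scene, char_ref_url, style_ref_url, character_weight=100):
    """Assemble a Midjourney prompt that reuses a character reference
    (--cref) at full weight (--cw 100 keeps face, hair, and outfit;
    lower values keep only the face) and a style reference (--sref)
    for a consistent look across shots."""
    return (f"{scene} "
            f"--cref {char_ref_url} --cw {character_weight} "
            f"--sref {style_ref_url}")

prompt = mj_prompt(
    "cinematic medium shot, detective walking through rain at night",
    "https://example.com/character_sheet.png",  # hypothetical reference
    "https://example.com/style_frame.png",      # hypothetical reference
)
```

Only the scene description changes per shot; the reference flags stay fixed, which is what builds up a coherent reference set.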
Workflow 3 (Studio): Layered ComfyUI Pipeline
Goal: Achieve absolute pixel-level control, actor likeness, and integration into standard VFX pipelines using an open-source, modular approach.
- Loaders: Load the base model, custom character LoRA, and text prompts (including the LoRA trigger word).
- ControlNet Stack: Chain multiple ControlNets to define structure (e.g., OpenPose for the skeleton, a depth map for 3D layout).
- IPAdapter-FaceID: Use the Plus v2 model as a final reinforcement layer to lock facial identity before animation.
- AnimateDiff: Apply deterministic camera motion using Motion LoRAs (e.g., v2_lora_PanLeft.ckpt).
- KSampler -> VAE Decode: Generate the image sequence.
- Export: Use mrv2SaveEXRImage to save the output as an EXR sequence (.exr). Configure for a professional pipeline: 32-bit float, linear color space, and PIZ/ZIP lossless compression. This preserves render passes (diffuse, specular, mattes) in a single file.
0:00.0 | Welcome back to Machine Learning Applied. This is the last segment of the mini series on multimedia generative AI, image generation, video generation, and stringing them all together. |
0:13.9 | This is the most important episode, and it's the one that I'm going to be linking out on socials because it's very practical. It teaches you |
0:20.9 | how to make a movie or an ad using the tools that we've discussed. The last two episodes had |
0:26.5 | sample workflows, total theoreticals. This episode has three real workflows, workflows used by |
0:33.4 | professionals in the wild. In fact, I might go back and remove the sample workflows from the |
0:38.3 | previous episodes. I think they were more distracting than helpful. If you land on this episode |
0:43.4 | first and you haven't listened to the last two episodes, those are "what are these tools and how |
0:49.7 | do they compare?" So you only need to listen to them if you don't know what the value prop difference is |
0:55.1 | between GPT-4o versus Midjourney or Veo 3 versus Sora. And especially if you don't know much about |
1:02.5 | the Stable Diffusion ecosystem, because Workflow 3 will be Stable Diffusion heavy. And you'll |
1:08.6 | want some background information on Stable Diffusion. |
1:18.0 | So prompt engineering, my friends, I'm afraid I had to exclude it. The content of this episode got too long. And there's something in my DNA that requires miniseries to be in |
1:22.7 | threes, as well as I don't want to dilute this podcast series too much. The whole series is about
1:26.8 | AI and machine learning.
1:28.0 | I don't want to pigeonhole multimedia for too long.
1:30.8 | So I had to exclude it. |
1:32.3 | I'm so sorry. |
1:33.4 | I may do a super episode on prompt engineering across various domains in the future. |
1:39.4 | This would include obviously image and video generation |
1:42.3 | because prompt engineering for image and video gen is the
1:45.7 | most important. Where prompt engineering is applicable is very nuanced and specific: the types of words
1:51.8 | and flags and parameters you use. But I would also maybe include prompt engineering for vibe coding
... |
Disclaimer: The podcast and artwork embedded on this page are from OCDevel, and are the property of its owner and not affiliated with or endorsed by Tapesearch.
Generated transcripts are the property of OCDevel and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
Copyright © Tapesearch 2025.