Designing Multi-Shot Prompts for Sprite Animation Generation in Seedance 1.0 Pro
SpriteDX Pipeline - Stage 2

Okay, so I took 2 days to read through the Seedance 1.0 Pro paper (blog post). It was arduous but totally worth it. I learned so much about the prompt design of Seedance 1.0 Pro, and about how video generation models are trained in general. It was surprisingly easy to follow along with a little help from ChatGPT.
Here is a prompt format we crafted at the end of that study.
[SHOT 1] Crowd’s-eye view as the arena lights converge on the ring. [CUT]
[SHOT 2] Slow-motion of the boxer throwing punches. [CUT]
[SHOT 3] Ultra close-up of the opponent’s reaction. [CUT]
[SHOT 4] Cut to the referee blowing the whistle—contrast between motion and stillness.
It has a format:
[SHOT 1] Shot description from cameraman's point of view [CUT]
[SHOT 2] Shot description from cameraman's point of view [CUT]
…
[SHOT N] Shot description from cameraman's point of view
Let’s try this approach out for Sprite Animation generation.
Model:
- Seedance 1.0 Pro
Tools used:
- Scenario (most of the generations)
- Fal.ai (some of the generations, to cross-check)
- ComfyUI (for my personal workflow, using a Fal.ai API key)
Parameters:
- Duration: 5s
- Resolution: 480p
- Aspect Ratio: 1:1
- Camera Fixed: True
- Seed: 42
⚠️ Unfortunately, Seedance 1.0 Pro in Scenario does not seem to be fully deterministic, so fixing the seed didn’t produce exactly the same result. Perhaps this is a Scenario bug.
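As a reference, here is the same parameter set expressed as a request payload. The field names (`resolution`, `duration`, `camera_fixed`, `seed`, `aspect_ratio`) follow fal.ai's Seedance image-to-video endpoint as I understand it — double-check them against the current API docs before relying on this sketch.

```python
# Sketch only: field names are my reading of fal.ai's Seedance endpoint.
def seedance_arguments(prompt: str, image_url: str) -> dict:
    return {
        "prompt": prompt,
        "image_url": image_url,
        "duration": "5",         # seconds
        "resolution": "480p",
        "aspect_ratio": "1:1",
        "camera_fixed": True,
        "seed": 42,              # note: not fully deterministic in practice
    }

args = seedance_arguments("A pixel art character idles.", "https://example.com/eliana.png")
print(args["resolution"], args["seed"])
```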
Input Image:

Prompt Engineering
[Scene Description] Character is a game sprite character for game called “Machi.” Her name is “Eliana.” The video has no camera movements.
[SHOT 1] Intro: Camera static full body shot and pixel art character says “hi.” [CUT]
[SHOT 2] Idle Loop: Camera static full body shot and the pixel art character is facing slightly right (+x) looking straight right (+x) and shows sprite animation loop for “idle” state, breathing in and breathing out, on a pure white background. [CUT]
[SHOT 3] Run Cycle (in-place): Camera **locked** on the pixel art character running in positive x direction, and character starts showing sprite animation loop for “run” state on a pure white background. [CUT]
[Tags]
#character-animation #角色动画
#game-sprite #游戏精灵
#platformer #平台跳跃
#machi #Machi
#cute #可爱
#pixel-art #像素艺术
#1girl #少女
#subject-only #主体突出
#side-scroller #横版卷轴
#platformer #平台游戏
#idle-loop #待机循环
#run-cycle #跑步循环
#pure-white-background #纯白背景
This prompt is a multi-shot prompt with a description of the character up top.
Each shot then gives a rather detailed description of the scene, starting with a camera cue.
Each shot repeats static features like “pixel art” because otherwise the style might drift between shots.
It then ends with some tags in English and in Chinese (since Seedance 1.0 Pro is a bilingual model).
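The structure above is regular enough to assemble programmatically. Here is a minimal sketch of a builder for it — the `[SHOT n]` / `[CUT]` markers follow Seedance 1.0 Pro's multi-shot format, while the function name and field layout are just my own convention:

```python
# Assemble a Seedance-style multi-shot prompt: scene description, numbered
# shots separated by [CUT] (no [CUT] after the last shot), then a tag block.
def build_multishot_prompt(scene: str, shots: list[str], tags: list[str]) -> str:
    lines = [f"[Scene Description] {scene}"]
    for i, shot in enumerate(shots, start=1):
        cut = " [CUT]" if i < len(shots) else ""
        lines.append(f"[SHOT {i}] {shot}{cut}")
    if tags:
        lines.append("[Tags]")
        lines.extend(tags)
    return "\n".join(lines)

prompt = build_multishot_prompt(
    scene='Character is a game sprite for a game called "Machi." The video has no camera movements.',
    shots=[
        'Intro: Camera static full body shot and pixel art character says "hi."',
        'Idle Loop: Camera static full body shot, pixel art character breathing in and out on a pure white background.',
    ],
    tags=["#pixel-art #像素艺术", "#pure-white-background #纯白背景"],
)
print(prompt)
```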
Sample Result
Again, the model is not deterministic even when the “Seed” value is fixed, so it is almost impossible to generate exact copies.
3 out of 4 runs gave me a decent run cycle.
The tags do seem to help constrain the video and make it less likely to diverge into a mess.
A 75% success rate is not really good, and since the model is not deterministic it is quite difficult to test and optimize the prompts. This seems like an Achilles’ heel for Seedance 1.0 Pro (at least on Scenario).
Cost
That said, the generation cost is amazingly cheap. In Scenario, it takes 20 credits to generate a 5-second 480p video.
In comparison, the Flux.1 Pro image generation model costs around 15 credits. How can a video cost almost the same as a single image?
Another stark comparison: Google Veo 3 costs 640 credits, making Seedance 32x cheaper.
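A quick sanity check on the credit math (prices as listed in Scenario at the time of writing; they may change):

```python
# Credit costs per generation, as listed in Scenario.
SEEDANCE_5S_480P = 20
FLUX_1_PRO_IMAGE = 15
VEO_3 = 640

print(VEO_3 / SEEDANCE_5S_480P)  # 32.0 — one Veo 3 run buys 32 Seedance runs
print(4 * SEEDANCE_5S_480P)      # 80 — even a best-of-4 batch is 8x cheaper than one Veo 3 run
```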
It is indeed a game changing model.
Performance
Seedance 1.0 Pro is probably the fastest video-gen AI out there. With Veo or Sora you would have to wait more than a minute, but Seedance 1.0 Pro spits out results in around 30-50 seconds.
Quality of Life Features
Aspect Ratio: Seedance 1.0 Pro also stands out here: if you provide an input image with a certain aspect ratio, the model produces a video with the same aspect ratio.
Multi-Shot Prompting: You can specify multiple shots for the video. This is also a game-changing feature that allows users to compose a video temporally using prompts.
Best-of Sampling
Sorry, got sidetracked. Back to the prompt engineering.
Currently, the success rate of generating the run cycle is around 70-80%.
This isn’t that impressive, but since Seedance 1.0’s inference cost is affordable, we can generate a batch of animations instead of just one.
The idea is to generate 2-4 videos, then automatically pick out the best run cycle.
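The selection step can be sketched with a cheap heuristic. My assumption (not anything from the Seedance paper): a clean run cycle should end close to where it started, so the mean pixel difference between the first and last frames is a rough proxy for loopability:

```python
import numpy as np

def loop_score(frames: np.ndarray) -> float:
    """frames: (T, H, W, C) uint8 array. Lower score = smoother loop."""
    first = frames[0].astype(np.float32)
    last = frames[-1].astype(np.float32)
    return float(np.abs(first - last).mean())

def pick_best(candidates: list[np.ndarray]) -> int:
    """Index of the candidate clip that loops best (lowest score)."""
    return min(range(len(candidates)), key=lambda i: loop_score(candidates[i]))

# Toy demo: clip A returns to its start pose, clip B drifts sideways.
rng = np.random.default_rng(42)
base = rng.integers(0, 255, (32, 32, 3), dtype=np.uint8)
mid = np.roll(base, 2, axis=1)
clip_a = np.stack([base, mid, base])                       # ends where it started
clip_b = np.stack([base, mid, np.roll(base, 8, axis=1)])   # character drifted
print(pick_best([clip_a, clip_b]))  # 0
```

A real version would decode the generated videos into frame arrays first and probably compare a short window of frames rather than single endpoints, but the ranking idea is the same.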
Testing it out in Comfy UI
Next, I’ve set up the ComfyUI workflow to make sure things work as expected using the same description.

It kinda does, but not really. It wasn’t obvious at first, but from the source code you can see that it uses seedance/v1/lite:
result = ApiHandler.submit_and_get_result(
"fal-ai/bytedance/seedance/v1/lite/image-to-video", arguments
)
So, let’s see if we can just replace that “lite“ with “pro“ and make it work.
result = ApiHandler.submit_and_get_result(
- "fal-ai/bytedance/seedance/v1/lite/image-to-video", arguments
+ "fal-ai/bytedance/seedance/v1/pro/image-to-video", arguments
)
And it does seem to work. I should wrap this in a custom node so that I don’t forget about the change.
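Something like the following could work as that custom node. The class layout (`INPUT_TYPES` / `RETURN_TYPES` / `FUNCTION`) follows ComfyUI's custom-node convention; the endpoint string and `ApiHandler.submit_and_get_result` come from the snippet above, but the class name, the `api_handler` module path, and the `result["video"]["url"]` response shape are my assumptions — adjust to match the node pack you patched:

```python
# Minimal ComfyUI custom-node sketch pinning the "pro" Seedance endpoint,
# so the lite→pro change doesn't live as a hand-edit in someone else's code.
class SeedanceProImageToVideo:
    ENDPOINT = "fal-ai/bytedance/seedance/v1/pro/image-to-video"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "image_url": ("STRING", {}),
                "seed": ("INT", {"default": 42}),
            }
        }

    RETURN_TYPES = ("STRING",)  # URL of the generated video
    FUNCTION = "generate"
    CATEGORY = "video/seedance"

    def generate(self, prompt, image_url, seed):
        # Deferred import so the node registers even without the API deps.
        from api_handler import ApiHandler  # assumed module name
        result = ApiHandler.submit_and_get_result(
            self.ENDPOINT,
            {"prompt": prompt, "image_url": image_url, "seed": seed},
        )
        return (result["video"]["url"],)  # assumed response shape

NODE_CLASS_MAPPINGS = {"SeedanceProImageToVideo": SeedanceProImageToVideo}
```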
Conclusion
We designed a multi-shot prompt that animates a sprite character through idle and run states.
Next, I will work on automatically splitting the “shots” into separate segments, and detecting loops.
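For the shot-splitting part, hard cuts usually show up as spikes in frame-to-frame pixel difference, so a first pass could look like this. The threshold is a guess I'd tune per model; none of this comes from the Seedance paper:

```python
import numpy as np

def find_cuts(frames: np.ndarray, threshold: float = 40.0) -> list[int]:
    """frames: (T, H, W, C) uint8. Returns indices where a new shot starts."""
    f = frames.astype(np.float32)
    diffs = np.abs(f[1:] - f[:-1]).mean(axis=(1, 2, 3))  # one score per transition
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Toy demo: two "shots" of flat frames with a hard cut at frame index 3.
shot1 = np.zeros((3, 16, 16, 3), dtype=np.uint8)
shot2 = np.full((3, 16, 16, 3), 200, dtype=np.uint8)
frames = np.concatenate([shot1, shot2])
print(find_cuts(frames))  # [3]
```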
—Sprited Dev 🌱



