SketchDynamics: Exploring Free-Form Sketches for Dynamic Intent Expression in Animation Generation
Abstract
Free-form sketching enables intuitive dynamic intent communication for automated content creation, bridging human intention and digital output in animation workflows.
Sketching provides an intuitive way to convey dynamic intent in animation authoring (i.e., how elements change over time and space), making it a natural medium for automatic content creation. Yet existing approaches often constrain sketches to fixed command tokens or predefined visual forms, overlooking their free-form nature and the central role of humans in shaping intention. To address this, we introduce an interaction paradigm in which users convey dynamic intent to a vision-language model via free-form sketching, instantiated here in a sketch-storyboard-to-motion-graphics workflow. We implement an interface and refine it through a three-stage study with 24 participants. The study shows how sketches convey motion with minimal input, how their inherent ambiguity requires users to stay involved for clarification, and how sketches can visually guide video refinement. Our findings reveal the potential of sketch-AI interaction to bridge the gap between intention and outcome, and demonstrate its applicability to 3D animation and video generation.
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SketchPlay: Intuitive Creation of Physically Realistic VR Content with Gesture-Driven Sketching (2025)
- Rewriting Video: Text-Driven Reauthoring of Video Footage (2026)
- Protosampling: Enabling Free-Form Convergence of Sampling and Prototyping through Canvas-Driven Visual AI Generation (2026)
- DepthScape: Authoring 2.5D Designs via Depth Estimation, Semantic Understanding, and Geometry Extraction (2025)
- SketchAssist: A Practical Assistant for Semantic Edits and Precise Local Redrawing (2025)
- LAMP: Language-Assisted Motion Planning for Controllable Video Generation (2025)
- ShadowDraw: From Any Object to Shadow-Drawing Compositional Art (2025)