AI Animation &
AI Tools Redesign
A binge study & redesign of 15+ AI tools based on Human-Centered AI and XAI theory
72 hours for an AI-powered animation from scratch to launch
15+ existing Generative AI products audited & redesigned
Aired in 2023
UX Audit
UX Redesign
Storytelling
Visual Design



Project Goals
Bring the annual SloanFronts creativity show to the next level

Design Goals
Design an anime with movie-like storytelling for the show, from scratch, in 72 hours, ideally fully powered by AI

SIGNIFICANCE
There was a growing need for Teaching Assistants at non-design schools to understand essential media production knowledge. I initiated a UX project that quickly demonstrated AI-powered production for academic use.
From a UX design perspective, the project sparked multiple discussions on AI-enabled function design.
It was also a short-term, intensive learning experience for me, where I grasped the rapid iteration pace of AI-powered products.
TEAM
Sean Huang (Me) - Design Lead & Researcher
UX Audit | UX Research | Product Redesign | Production
Ben Ryan Shields - Producer & Supervisor
Yvette Kong - Supervisor
BACKGROUND
As a Teaching Assistant for the Creative Industries 23-24 class at MIT, I introduced a new topic: the impact of AI-Generated Content (AIGC) on human creativity.
To explore whether Gen-AI could overshadow human creativity and production, I set myself an ultimate challenge: leveraging AI tools to create an anime for the 2023 SloanFronts design show in just 72 hours.
This experiment aimed to test the extent to which AI tools can truly replicate human capabilities.
TOOLS & METHODS
Research & Analytics
Desktop Research | UX Audit | Personas
Figma | Product Hunt
Design & Testing
User Flow | Storyboarding | Fast Prototyping | Animation
Figma | Adobe Creative Cloud (Photoshop, After Effects, Illustrator)
OUTCOMES
4-min AI-Powered Animation

UX Audit Summary
Based on a typical animation production process

PROCESS
The year 2023 marks a pivotal moment for MIT as it integrates AI technology into its pedagogical environments, with both empowering opportunities and significant challenges.
As a Teaching Assistant for the Creative Industries class, I was given the challenge and opportunity to teach students about AI-Generated Content (AIGC) through a hands-on project:
creating an AI-powered animation in 72 hours for the annual creativity show, showcasing the culmination of the class.
Problems
Challenge 1 - Timing
72 hours for an anime, literally from scratch, sounds like mission impossible, given the overall procedure (storyboarding, character design, sound design, audio production, graphic design, dramatic motion work, video editing, etc.).
Challenge 2 - Content
Giving AI tools an exact prompt with a concise, accurate description is hard due to translation nuances and the depth of detail required;
Because of cultural differences, using puns suggested by AI tools can be risky, since some punchlines only land with specific audiences;
Before this event, I knew little about audio-making and voiceover generation.
Challenge 3 - Delivery
I had a limited sense of how native speakers (the majority of the audience) would vibe with the verbiage and tone suggested by AI.
Solutions
Despite the ambiguity, I decided to run a fast UX audit of the main AI tools for each phase of creation and stick with the most intuitive option with the shortest learning curve, based on these UX audit criteria:
Accessibility
e.g., tooltips that help laypersons understand each piece of UI
Explainability
i.e., results & algorithms explainable to users
Navigation
of the Information Architecture ("IA"); for beginners, a comprehensive but intuitive IA saves a lot of time
Page Speed
It determines the possibility of multitasking

User Research & Persona
I learned about five different XAI needs and the importance of being mindful of human cognitive biases.
As a beginner myself, I decided to build the persona of a newcomer to Gen-AI tools who, more than other users, needs precise and accurate AIGC outcomes and a less steep learning curve.

🔼 A diagram of different audiences for a given prediction and explanation by Meg Kurdziolek (2022)

User Flow
Although most current AI tools are not positioned as consumer products, making it arguably too early to discuss UX design for them, I still found two points where UX & UI design can fundamentally enhance the user journey:
Space for explainability 1
Before users input prompts to the AI, i.e., the stage before the black-box system. This phase calls for clean, intuitive, and instructional UIs so that even a beginner who has not watched any tutorials can operate the tools smoothly.
Space for explainability 2
After the AIGC results are provided. In this phase, users quickly decide whether to accept the results, modify prompts for regeneration, or abandon the attempt, which requires the system to give users feedback on their faulty inputs, better alternatives, etc.

AI Tools Redesign
I binge studied 10+ AI tools covering the pre-production, production, and post-production phases by:
desktop research on AI product reviews;
picking the tools most suitable for my case and walking through all essential tutorials;
and mapping out their key pros and cons (see the summary below).

Outcome Summary
User Case Report ✅
Comparing the actual time commitment against my expectations, I found several points where I could strongly feel what current AI tools were not yet able to cover:
Writing an impressive and creative story script (not a mediocre or clichéd one);
Creating cohesive, consistent characters from beginning to end;
Producing dramatic or exaggerated animations without sacrificing high-resolution visual quality;
Smooth lip-syncing for multi-dimensional figures.

Discussion on Future Gen-AI Feature Design ✅
As I explored various tools across different production phases, it became clear that the future of UX design for Gen-AI products will focus heavily on explaining AI to users, i.e., explainability.
I suggested key elements of effective UX/UI design for a generative AI tool as follows:
Regional Modification Allowed
Hard!!!
Regional, more refined sub-scale alterations and modifications based on prompts
Feedback System
Intermediate
Help improve generation quality by enabling instant feedback on each generation
Presets & Templates
Easy
The most generally used preset parameters and templates, scenarios for prompts, etc.
Clear Previews
Easy
Images - visual styles
Audio - voice tones, emphases, expressions, etc.
Video - movement directions, etc.
Intuitive Guides
Easy
Tips for crafting effective prompts to improve generation efficiency, essential guides for beginners to understand the finer details
Alerts on Privacy
Easy
Prevent misuse of private information in customized model training, such as commercial exploitation or accidental disclosure of confidential information
IMPACTS 🎉