Animation Production for Batman & Industry Workflows
Simon Warwick talked about his mentorship at CG Spectrum, animation production for Batman: Arkham City, the current state of technologies, and more.
Introduction & Career
My name is Simon Warwick and I’m a mentor at CG Spectrum. I started my career as a character animator, and over the years it evolved into a technical animation role. My first couple of gigs after college in 2004 involved working with an Unreal Tournament mod team and contributing to some CG shots in an independent live-action film based in New York. Neither of them paid, but they gave me the experience and demo reel to land a job at Silicon Knights as a character animator in 2005. By 2009, I had enough technical game development skills to get hired at Rocksteady as a senior technical animator, where I was primarily responsible for converting facial motion capture into the game for Batman: Arkham City. By 2013, my wife and I had decided to move back to Canada to work freelance and develop independent projects under Collectivision Studios. During those years I spent time working with other developers using UE4. In 2015, I began mentoring full time at CG Spectrum, but I still develop ideas within UE4 on the side.
Current State of Technologies
I’ve found that the new potential of implementing animations within UE4’s Blueprints has completely changed what an animator is capable of. It’s been great being able to test out ideas and experiment without having to translate them for (and motivate) an animation programmer. However, it’s still quite a complex system for the average animator to understand, so I find it’s not something that suits everyone’s interests.
As a mentor, I still struggle to find reliable rigging systems for the students that are simple to set up, easy to animate with, and also compatible as game rigs. There are some concepts floating around about integrated animation rigging systems within the editor, and that is appealing because it can solve these compatibility issues for animators. However, competing with the current animation workflows within software like Maya or Motionbuilder is still a tall order.
Cinematic vs. Game Animation
The pipelines for cinematic and game animation can be completely separate. Cinematic animation aims to convey something to the player that gameplay alone can’t. It involves longer clips of straight motion capture assembled so that camera angles and traditional filming techniques guide the viewer. In-game animations are heavily manipulated short clips, with animators making changes based on feedback from the people implementing and designing the gameplay. A gameplay animation can sometimes be quite abstract, as it represents only a portion of an action that gets blended into whatever the skeletal mesh is currently playing.
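That partial-action blending can be sketched as a per-joint interpolation. This is only a toy illustration of the idea, not any engine’s actual API; the joint names, angle values, and the simple linear blend are all hypothetical (real engines blend quaternions per bone, usually with per-bone masks).

```python
# Toy sketch of blending a short, partial gameplay clip (a punch that
# only animates the upper body) into the currently playing pose.
# Joint names and angle values are hypothetical.

def blend_poses(base_pose, overlay_pose, weight):
    """Linearly blend overlay_pose over base_pose by weight (0..1).

    Joints the overlay clip does not animate keep their base value.
    """
    return {
        joint: base + weight * (overlay_pose.get(joint, base) - base)
        for joint, base in base_pose.items()
    }

locomotion = {"spine": 10.0, "arm_r": 45.0, "arm_l": -45.0}
punch = {"spine": 25.0, "arm_r": 90.0}  # partial clip: upper body only

# Halfway through the blend-in, the punch affects only its own joints.
print(blend_poses(locomotion, punch, 0.5))
# → {'spine': 17.5, 'arm_r': 67.5, 'arm_l': -45.0}
```

The untouched `arm_l` value shows why such a clip can look abstract on its own: it only makes sense once it is layered onto whatever is already playing.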
Animation workflows haven’t changed much over the last 15 years, and I find there’s a tendency for a studio or department to favor a particular piece of software or toolset to maintain consistency across all of the character rigs.
Our cinematic workflow at Rocksteady used Motionbuilder, which is quite robust, and its rig is game-compatible, although learning the software can be slow and frustrating for people starting out. Certain tools within Motionbuilder are real time-savers, like batch retargeting motions from one character to another, or story mode, which blends animations together as clips along a timeline in a sophisticated way, similar to video editing. Both of those workflows can now also be found directly within Unreal in a slightly more basic form. For example, Sequencer within Unreal can blend clips together, and this has really changed what can be tweaked and polished editor-side, even using only in-game animations.
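The clip blending that story mode and Sequencer perform can be pictured as a crossfade weight computed over the overlap between two adjacent clips. This is a rough sketch of the concept only; the clip names, times, and the linear ramp are hypothetical, not how either tool is actually implemented.

```python
# Toy sketch of crossfading two adjacent clips on a timeline, the way
# story-mode/Sequencer-style editors blend clips that overlap.
# Clip names, times, and the linear ramp are illustrative only.

def crossfade_weight(time, blend_start, blend_end):
    """Weight of the incoming clip: 0 before the overlap, 1 after it."""
    if time <= blend_start:
        return 0.0
    if time >= blend_end:
        return 1.0
    return (time - blend_start) / (blend_end - blend_start)

# Clip A plays 0-5s, clip B plays 4-9s; they overlap on 4-5s.
for t in (3.0, 4.25, 4.75, 6.0):
    w = crossfade_weight(t, 4.0, 5.0)
    print(f"t={t}: A={1 - w:.2f}, B={w:.2f}")
```

Real editors typically offer eased (S-shaped) blend curves rather than a straight linear ramp, but the idea of a per-time weight over the overlap is the same.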
Animation Production for Batman: Arkham City
Rocksteady’s in-house staff ended up being around 100 employees during Arkham City. The studio was rapidly expanding, so there were a lot of changes happening in and around it during production.
We had around 5 to 6 cinematic animators and 8 or 9 in-game animators at the height of production, with 2 or 3 riggers and a dedicated animation programmer. Our cinematic workflow used Motionbuilder, which allowed for cleaner and quicker turnarounds from our motion capture while keeping longer clips and camera cuts together. In contrast, the in-game team preferred to work with 3ds Max because of the heavy hand-keying and mocap cleanup workflows on their shorter in-game actions.
The studio has an in-house mocap facility, which really opens up a lot of creativity and experimentation for the animators. It created a pipeline that allowed for lots of iteration and freedom, with disposable capture time. Animators from both teams could maintain a quick turnaround by suiting up themselves, capturing the initial previs passes, and then assembling them with a rough cleanup pass. Once things were approved, a scene would go through stages of recapture using professional motion capture actors. Within the games industry, this can be an essential capability because storylines and gameplay concepts are added and changed right up until the bitter end of production.
We tended to hand out cinematic scenes to each animator rather than shots, so one animator would be responsible for setting up the characters and cameras and eventually implementing a working version within the editor. The cinematic director worked closely with the team and organized reviews as we started locking down shot angles and timing based on feedback from the design director.
Our facial pipeline during gameplay scenes used FaceFX, which can automate lip sync from the audio and the script text. Combined with a carefully customized wiring of animation sliders, this allowed us to polish close-up shots.
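Conceptually, the output of that kind of audio/text-driven lip sync is a timed sequence of phonemes mapped onto facial slider weights. The sketch below is a hypothetical illustration of that idea only; the viseme table, slider names, and mapping are invented for the example and are not FaceFX’s actual data or API.

```python
# Hypothetical sketch of what text/audio-driven lip sync produces:
# timed phonemes converted into keyframes on facial animation sliders.
# The viseme table and slider names below are illustrative only.

VISEME_MAP = {
    "AA": {"jaw_open": 0.8, "lips_wide": 0.2},
    "M":  {"lips_closed": 1.0},
    "F":  {"lip_funnel": 0.6, "jaw_open": 0.1},
}

def phonemes_to_keys(timed_phonemes):
    """Turn (time, phoneme) pairs into (time, slider, weight) keyframes."""
    keys = []
    for t, ph in timed_phonemes:
        for slider, weight in VISEME_MAP.get(ph, {}).items():
            keys.append((t, slider, weight))
    return keys

print(phonemes_to_keys([(0.00, "M"), (0.12, "AA")]))
# → [(0.0, 'lips_closed', 1.0), (0.12, 'jaw_open', 0.8), (0.12, 'lips_wide', 0.2)]
```

Exposing the result as slider keyframes is what makes the hand-polishing pass described above possible: an animator can adjust the generated curves rather than keying the face from scratch.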
Shots in full cutscenes were assigned to our facial motion capture pipeline, and we would capture the face separately from the body motion. Because the game used very specific voices, the priority was to record the authentic sound first and then match the timing for the body and facial captures. We would bring in specific face actors to mimic the dialogue, but we would cut it up into 10-second capture takes. I could quickly piece them all back together again because we used timecode when recording, and within Motionbuilder it could be matched accurately. The capture data would then get retargeted onto the corresponding character using our custom toolset. I set it up with many specific animation sliders to allow for tweaking and polishing the capture before it got exported into the game. Afterward, the eye and additional blink animation would get created with custom controls in the editor so it could be lined up to the character’s final positioning in the environment. The facial capture turnaround was quite fast, and I was able to convert and polish pretty much all the cinematic dialogue within the small window of time during the final polish pass at the end of production. In hindsight, we would have benefited from additional help to go back and add more polish for better results, but during game production there are always more technical hurdles along the way that spread your time thinner than you planned for.
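The reason timecode makes reassembling short takes mechanical is that each take’s start timecode converts to an absolute frame number on a shared timeline. A minimal sketch of that conversion, assuming a non-drop-frame rate for simplicity (the frame rate and take names here are hypothetical):

```python
# Sketch of lining short capture takes back up by timecode: convert
# each take's start timecode to an absolute frame count and sort.
# Frame rate and take names are hypothetical.

FPS = 30  # assume non-drop-frame timecode for simplicity

def timecode_to_frames(tc, fps=FPS):
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

takes = [
    ("take_03", "01:00:20:00"),
    ("take_01", "01:00:00:00"),
    ("take_02", "01:00:10:00"),
]

# Order the 10-second takes back onto a shared timeline by start frame.
timeline = sorted(takes, key=lambda t: timecode_to_frames(t[1]))
print([name for name, _ in timeline])
# → ['take_01', 'take_02', 'take_03']
```

Drop-frame rates like 29.97 fps need a correction for skipped frame numbers, which is why production pipelines treat the timecode standard carefully rather than doing this naive arithmetic.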
Mentoring at CG Spectrum
I’ve known Jeff, who started up CG Spectrum, since before he began the school. We worked together on some projects after our college years and kept in touch. He contacted me while I was still at Rocksteady to ask if I was interested in mentoring; it was appealing, but I couldn’t offer him the time. In 2015, he offered me a full-time position, which at that point in my career and my personal life made a lot of sense. Mentoring can be quite rewarding, and it keeps me learning along with the students as the industry is ever-changing. It also allows me time to work on my own development ideas and continue to bring new experiences to the students. CG Spectrum is always on the lookout for experienced professionals who can bring their special talents and knowledge to the course content as well as through mentoring!
VFX & Game Design Course
What’s the course about?
The VFX & Game Design course specifically is a great starting point for people entering the industry and looking to get into some type of 3D entertainment creation. We call it the foundation course because it covers the bread and butter of working with and outputting 3D content. At this early stage of learning modeling and animation, we are still able to keep all doors open until the student has a sense of which industry and department they’d like to pursue further. We then offer advanced courses in a variety of different directions that dive deeper into each specialization.
All the software we use during the VFX & Game Design course is free under student licenses, which includes Maya, Substance Painter, UE4, Houdini, and Nuke. The course content also gets refreshed every few years as software versions change, pipeline workflows shift, and we receive feedback from students progressing through the course.
Everyone begins this course with a different story about their past level of experience and technical knowledge, but because of how our online course is laid out, we have a lot of flexibility to cater to each student’s pace. Our goal is to keep everyone pushing themselves, and we offer various solutions to ensure everyone is getting the help they need and moving through the course at a pace they can handle.
How is it taught?
Let’s take one part from the first study period covering lower and upper body walk cycles and see how the whole process is organized.
The lower and upper walk cycles are good theory-heavy weeks, starting by laying down the standard stride and cross poses. These weeks are recorded by Mark Pullyblank, who is largely responsible for the animation side of our courses. As a mentor, I see the students’ final results or work in progress as they replicate the animation covered in the videos. During the weekly Q&A, I’ll have their file open on my computer while screen sharing and run through common issues and solve them live. The student can ask questions and observe the workflow of identifying and solving the animation issues.
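The stride-and-cross layout those weeks start from can be sketched as the key-pose timing of a looping cycle: two contact (stride) poses half a cycle apart, with a passing (cross) pose between each, and the second half mirroring the first. The 24-frame length here is an illustrative choice, not the course’s actual assignment spec.

```python
# Rough sketch of a walk cycle's key-pose layout: stride (contact)
# poses and cross (passing) poses, with the second half mirroring the
# first. The 24-frame cycle length is illustrative only.

CYCLE_FRAMES = 24

def walk_cycle_keys(cycle_frames=CYCLE_FRAMES):
    """Return (frame, pose) pairs for one looping walk cycle."""
    half = cycle_frames // 2
    return [
        (0, "stride (left foot forward)"),
        (half // 2, "cross (right leg passing)"),
        (half, "stride (right foot forward)"),
        (half + half // 2, "cross (left leg passing)"),
        (cycle_frames, "stride (left foot forward, loop)"),
    ]

for frame, pose in walk_cycle_keys():
    print(frame, pose)
```

Blocking the cycle from these five keys first, then adding breakdowns, is the standard pose-to-pose approach the stride/cross terminology comes from.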
During the same weeks, we’ll also be covering modeling and texturing workflows in the same way. We build upon solving unique problems and discussing different solutions and approaches as each student can put their own interpretation into the assignments. If the student is unable to make it to a Q&A or would like some additional feedback they can upload files during the week and the mentor will record a review offline and email it back to the student.
We also provide a private Slack workspace and an online forum where students can interact closely with mentors and with other students enrolled in or graduated from the school. Generally, we do what we can to remove barriers that might be preventing a student from ramping up their skills as fast as possible.
Advice for Newcomers
Work really hard in the beginning, both in learning new skills and in reaching out to new contacts and opportunities. Be prepared to make sacrifices and take risks until you find a path where your momentum helps push you along. The investment will pay in interest and give you more freedom to continue growing in the areas you enjoy most.