This piece was initially published in DP 04 : 2019.
At this year’s FMX we caught up with Matt Aitken and chatted about Thanos, the history of animation at Weta Digital, and whether or not there can be too much information to go along with the plates.
Matt Aitken has been at Weta Digital for 25 years, starting as a VFX & CG Supervisor on Peter Jackson’s very first Hollywood VFX production, “The Frighteners” from 1996. He was Digital Models Supervisor on “The Lord of the Rings” trilogy, Pre-Production/R&D Supervisor for “King Kong”, and CG Supervisor for “Avatar”.
Matt was nominated for the Academy Award for Best Achievement in Visual Effects, and the BAFTA Film Award for Best Special Visual Effects, both for “District 9”. He has won multiple Visual Effects Society Awards including Outstanding Visual Effects for his work on “Avengers: Infinity War”.
DP: Matt, you have been with Weta Digital for a long time and basically built and ushered in the whole technology of digital actors.
Matt Aitken: Well, the history of digital character work at Weta Digital is in many ways the history of Weta Digital itself. Weta Digital turned 25 last year and it’s also my 25th year with the company. I’m very privileged to have been involved with the whole story of Weta Digital, pretty much right from the start.
To be a bit nostalgic and give you a timeline: it all started with Gollum. He’s been a friend of Weta Digital’s for many years. When we started working with Gollum in 1997, nobody had ever created a main character of a film who was completely digital – a character who had to talk, who had to interact with live-action characters, a character who was an integral part of the story. It was the most ambitious goal Weta Digital had ever taken on – at that point.
Before Gollum, we had done “The Frighteners”, Peter Jackson’s first Hollywood movie, and there were these two characters, the Grim Reaper and Carpet Man, which were our first foray into digital performance. They were completely keyframe animated; there were no motion capture technologies at Weta Digital at the time. Those two – and Gollum – established some workflows that continue to this day. For example, as well as using commercial off-the-shelf software, we have been doing in-house software development to augment and extend that software into areas it couldn’t support.
I think at the start of “The Frighteners”, we were about 12 people, and by the time we finished the project we had grown to about 50 people. We completed about 570 shots on the show. With those, we convinced the Hollywood studios that we could take on the visual effects work for Peter Jackson’s next film project, “The Lord of the Rings” trilogy, which was released in 2001, 2002 and 2003. At this point, Gollum was primarily a keyframe performance. Even this very early workflow established the key aspect of our approach to digital performance, which continues to this day: we always base our digital characters’ performance on the performance of a live actor.
Gollum gave us the opportunity to develop our facial puppet for the first time. Creating a digital facial puppet is a very time-consuming process: it involves digitally sculpting the effect of essentially every muscle on the face, individually and in combination. We used that as the basis for breaking down the components of the digital face for Gollum.
There was enough range in the facial puppet to create both Gollum and his alter ego, Smeagol. Now, Gollum wasn’t entirely keyframe. “The Lord of the Rings” was the project where we used motion capture for the first time. We installed a motion capture stage at the facility and established a motion capture team in the pipeline, and that has been running non-stop ever since. We worked on the three “The Lord of the Rings” films for six years, and for those six years “The Lord of the Rings” was the only project in-house at Weta Digital. We moved from that to “King Kong” for the next two years. At that stage the crew size at Weta Digital was 450 people.
DP: What was new about the pipeline for Kong compared to Gollum?
Matt Aitken: A key component of the new motion capture pipeline was the development of a facial solver. This is something that takes Andy Serkis’s facial movements and then uses them to drive a facial puppet. Not keyframe, not reference – it actually drives it. But we realized the importance of retaining the ability to use keyframe animation. We wanted to keep the opportunity to work on the facial forms and not just have to live with what came out of the motion capture pipeline. Also, when it came to Kong, we took a long time getting the eyes right – and we are continuing to develop the eye model at Weta Digital.
DP: Let’s talk about “Avengers: Endgame”. What was your main body of work?
Matt Aitken: We did the majority of the compound scene – except the underground scenes where Hawkeye is running from the Outriders. Also, we did a lot of the work on the final battlefield. We developed Thanos in parallel with Digital Domain, and we each worked out our own version of Thanos. Of course, we had great reference in Josh Brolin. Thanos experiences a full range of emotions in his own right: he’s contemplative when explaining his motivations, he expresses frustration, anger and so forth – often in extreme closeup.
DP: And what was your job?
Matt Aitken: I was the Visual Effects Supervisor for Weta Digital – there were other companies as well, obviously. On set, it’s a huge operation. They were filming on many stages at the same time. Often, my role was jumping from stage to stage, just making sure that what was happening on any of the stages was going to work well for us when we took it back and put it together.
DP: How often have you worked in the Marvel universe, when Weta was involved?
Matt Aitken: I think five of them by now – I wasn’t involved in their work on “Guardians 2”, I was busy with something else at that time.
DP: When you enter one of the stages, what is the checklist you have in your head for making sure you can work afterwards?
Matt Aitken: Well, there are some really basic things: just making sure that the greenscreen is actually green and that the lighting is going to work. And besides that, my main tip for any supervisor on set: record as much information as possible. We work with the production, the film studio and the visual effects team, and we all take reference photographs, shoot reference passes, capture HDRIs of the lighting, and record information about the cameras and lenses, and all that metadata. In this case, we had the on-set team from Marvel, which is exceptional. And they got us everything. This is something every supervisor should have: a lot of data of any kind from the set helps you afterwards.
DP: A personal question: my favourite shot in “Avengers: Endgame” was the beauty shot with Cap on one side and Thanos’s army on the other. Who did that?
Matt Aitken: Yeah, that was Weta as well – in that shot the applause should go to Justin van der Lek, the compositor who created that shot. We worked together for many years, and he built it beautifully – he’s from the Netherlands, and I think that he’s doing the modern-day equivalent of a Flemish masterpiece.
DP: How has your workflow changed from “The Avengers” to “Avengers: Endgame” – that’s seven years now?
Matt Aitken: On the first “Avengers” film, we worked at 2K, and now we’re working at 4K, and we are doing versions for IMAX as well – this might seem obvious, but we’re working with a lot more pixels. And we’re doing more of the work in true 3D stereo. Stereo applies above all to full CG shots – in our part of “Endgame”, about a third of the shots had no filmed element. That is more than usual: for films like this, I would expect around 25% of shots to be fully CG, but for “Endgame” it was over 33%. The films aren’t shot in stereo; they’re shot with a single camera and then post-converted to stereo. But with full CG shots, we can actually render a truly majestic stereo picture.
DP: What’s the tech you think we will see in another seven years in CGI?
Matt Aitken: We have many exciting projects happening in terms of high frame rate material, for example Ang Lee’s film “Gemini Man” with Will Smith, where we de-age him – which is fiddly, very high-quality work. And then there are the “Avatar” sequels. In terms of technology: we continue to consolidate our in-house renderer Manuka. We have developed that into a really fantastic engine for our lighting and rendering – a big part of our R&D at the moment. Then there is the work we do for facial animation – we have multiple facial solvers available to our team. We track the data we get from the head-mounted cameras during the actors’ performance, and then we have to solve that onto the character. We have a traditional solver there that is analytical. But we also have a machine learning-based solver that we are starting to use more and more, and sometimes we will use one or both of those on a shot. We are very much at a point where we put a keyframe pass on top of anything procedural, with an artist sitting down and polishing the work, bringing their creative eye to bear on it. That’s really important.
That has always been the way we do things: when we develop something, we make it work for our internal use, and then see if it’s of use for the rest of the VFX community. For example, we identified that there was a real gap in the market for a good 3D paint package for texturing pipelines – and that became Mari. We developed it in-house and then made it available globally as commercial software. That’s a good way to go as well. It makes it available to the wider community.
I think a lot of the computing work will move to the cloud – that’s been a recent development and a very interesting workflow. Some of those services can render your whole film in the two weeks before the deadline. So, if Peter Jackson re-shoots “The Lord of the Rings”, it’s going to take out all of Amazon Web Services for a week, and it’s done. Everybody else will have to wait for their stuff (laughs).