LAIKA Studios Expands Possibilities in Filmmaking

Intel's Applied Machine Learning team and oneAPI tools help LAIKA realize the limitless scope of stop motion animation.

At a Glance:

  • LAIKA is an award-winning feature film animation studio that was founded in 2005. Films include Missing Link, Kubo and the Two Strings, The Boxtrolls, ParaNorman, and Coraline.

  • LAIKA worked closely with Intel's Applied Machine Learning team to unlock the value of its data and discover new ways to bring its characters and imaginary worlds to life.

LAIKA, best known for its Oscar-nominated hybrid stop-motion feature films Coraline, ParaNorman, The Boxtrolls, Kubo and the Two Strings, and Missing Link, is always striving to innovate the animation medium. The studio harnesses advanced technology to expand what's possible in its films. In 2016, LAIKA earned a Scientific and Technology Oscar for its innovation in animated filmmaking.

Watch the video below:

Watch the webinar below:

In this webinar, hear from the technical masterminds at LAIKA, VFX Supervisor Steve Emerson and Production Technology Director Jeff Stringer, who will share the behind-the-scenes efforts and technological innovations that go into making LAIKA's award-winning films.

Learn how LAIKA partnered with Intel to develop tools powered by machine learning and AI that accelerate its digital paint and rotoscoping tasks. The goal: learn to accurately predict multiple shapes, open and closed, in relation to the image, while ensuring spatial smoothness and temporal coherence. Machine learning typically relies on large amounts of data to build models; Intel and LAIKA collaborated on a counterintuitive solution that uses smaller amounts of data, predicated on the stop motion animation process.
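
As a loose illustration of the objective described above, the sketch below scores predicted roto-shape control points against artist-drawn ground truth while penalizing jagged outlines (spatial smoothness) and frame-to-frame jitter (temporal coherence). The array layout, weights, and function name are illustrative assumptions only, not the tool Intel and LAIKA built.

```python
import numpy as np

def roto_shape_loss(pred, truth, w_spatial=0.1, w_temporal=0.1):
    """Score predicted roto shapes for a shot (illustrative sketch).

    pred, truth: arrays of shape (frames, points, 2) holding the 2D control
    points of a closed roto shape on each frame. Real tools track both open
    and closed shapes; a closed outline keeps the sketch simple.
    """
    # Data term: how far each predicted point lands from the artist's ground truth.
    fit = np.mean(np.sum((pred - truth) ** 2, axis=-1))

    # Spatial smoothness: penalize sharp kinks between neighboring points
    # along the outline (wrap around because the shape is closed).
    neighbor_diff = pred - np.roll(pred, shift=1, axis=1)
    spatial = np.mean(np.sum(neighbor_diff ** 2, axis=-1))

    # Temporal coherence: penalize points that jump between consecutive frames.
    frame_diff = pred[1:] - pred[:-1]
    temporal = np.mean(np.sum(frame_diff ** 2, axis=-1))

    return fit + w_spatial * spatial + w_temporal * temporal
```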

Read the transcript below:

Mary Killelea: Welcome everyone. Thank you for joining us for the Intel Customer Spotlight Series. This series highlights innovative, industry-leading companies that are undergoing digital transformation, have tackled business and technology challenges, and created new opportunities using Intel data-centric technologies and platforms. Today we're excited to welcome LAIKA to have a conversation on how they are using machine learning to make a dramatic impact in their movie making. Today's host is Tim Crawford. Tim is a strategic CIO adviser who works with enterprise organizations. Tim, I'm now going to turn it over to you to kick off today's conversation.

Tim Crawford: Sounds good, thanks a lot Mary, and welcome to everyone on the webinar this morning. We've got an exciting conversation to go through around machine learning and AI and the movie making process, but I will also say there are some really interesting surprises coming in our conversation today, so stay tuned for those. For today's conversation I'm joined by two folks from LAIKA: Jeff Stringer and Steve Emerson. I want to take a minute and allow Jeff and Steve to introduce themselves as well as LAIKA, in case you are not familiar with the studio, and the process LAIKA takes to produce a film, just so we can set a bit of a foundation. Jeff, do you want to kick us off and maybe take a minute to introduce yourself, LAIKA, and the process?

Jeff Stringer: Sure, yeah, I'm Jeff Stringer. I am the director of production technology at LAIKA, and LAIKA is a 15-year-old company. We've been making stop motion animated films for that whole period. Our first film was Coraline in 2009, and our most recent film was Missing Link. Steve is our visual effects supervisor; I think he can probably tell you more about how we do the process.

Tim Crawford: Great. Steve.

Steve Emerson: Sure. My name is Steve Emerson. I'm the visual effects supervisor here at LAIKA. Like Jeff mentioned, we're celebrating our 15-year anniversary this year. Over the course of those 15 years we've produced five feature-length animated films: Coraline, ParaNorman, The Boxtrolls, Kubo and the Two Strings, and Missing Link. The way we go about making our films here in this warehouse in Hillsboro, Oregon, all working together, is stop motion animation, which is a classic filmmaking technique. It's really as old as film itself, and most of you are probably aware of how it works. Essentially, what you are really seeing is 24 still images in quick succession that create the illusion of motion. With traditional 2D animation, you're drawing 24 pictures per second to create that illusion of motion.

What we do here at LAIKA is we take puppets, fully articulated puppets, and we create miniature sets to put those puppets within, with real world light and we capture them one frame at a time.

So, for every frame across one second of film we pose a puppet and we take an exposure; we move the puppet, or something else within that environment, in the smallest of increments. We take another exposure, and after we've done 24 of those we have a one-second performance. We do this again and again and again until we have a 90- to 95-minute, or 2-hour, movie. It is a time-consuming, intense, insane process at times, and it is rife with issues. Oftentimes those issues are corrected in post-production, which is what I oversee here as the visual effects supervisor, and it's that part of the process where we're really looking to streamline things with machine learning.
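
To put those numbers in perspective, here is a quick back-of-the-envelope calculation using only the frame rate and running times mentioned above:

```python
FPS = 24                      # exposures per second of finished film
for minutes in (90, 120):     # the 90-minute to 2-hour range Steve mentions
    frames = FPS * 60 * minutes
    print(f"{minutes}-minute film: {frames:,} individually posed frames")
# 90-minute film: 129,600 individually posed frames
# 120-minute film: 172,800 individually posed frames
```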

Tim Crawford: That's great Jeff and Steve. Thank you so much for that foundation. I think that kind of sets a great stage for talking about the first subject in our tour here, which is really understanding the challenge that LAIKA was facing. You know, Steve you mentioned how you'd take 24 frames and that equates to one second of film, which is kind of amazing when you think about it. Can you maybe delve a little further into how you are taking those photos and the challenges you are having with the puppets and where the technology fits into it? And then later we will get into how you're addressing that challenge but let's talk about the challenge itself first.

Steve Emerson: So, the first thing I would say, just for more context, is that on average the animators here at LAIKA are capturing about three and a half seconds, maybe four seconds, five seconds if they are doing really well, per week. So when we're talking about 24 frames, it's essentially a day's work for a stop motion animator.

Now, one thing that we're doing here, since it's sort of the foundation of everything, is we're really trying to realize the potential of stop motion animation. We're trying to do things with this art form that have never been done before, that people have never seen before when they're watching animated films. And so, we get there using technology.

In the Coraline days, what we did was bring in 3D printing for facial performances. The way it all works is that before we even bring the puppets out to the stages, our facial animation team will figure out a facial performance for a given shot, and that will all be done in a computer. After that performance has been approved, it's sent to a 3D printer, all the faces are printed up ahead of time, and they're delivered out to the animators basically on these big cookie sheets. So the animators at that point, when they are doing the facial performances, are basically snapping these faces on with magnets, and the rest of the time they're focusing on the body performance, they're focusing on gravity within that environment.

But one thing that was decided early on was that, in order to give them a greater range of expression, they would split the face down the center, right between the eyes. That not only gives the animator the opportunity to change his or her mind on a given frame about which mouth or brow might be snapped in at that moment, it also allowed us to achieve a greater range of expressions for these puppet performances with fewer facial components. In the end, what it meant was that we have a line running horizontally across the faces of all our characters. What we've been doing for the past five films is using rotoscoping, which is a high-tech kind of tracing that you do with a computer.

And what you do when you're rotoscoping is essentially draw lines and shapes to tell the computer: this particular area I want to fix in some way, I want to alter in some way; these other areas that I'm also tracing, I don't want to affect at all. And that's really sort of the source of it all. I like to think of it this way: if you're going to paint a house, or paint the interior of a house, you've got to do all the taping first, right? And then you get to paint. The taping is going to protect certain parts of your home. So rotoscoping for me is kind of like taping your house to get ready to start painting. It's time consuming and, honestly, a lot of artists don't like to do it. But that's really what we're aiming to fix with machine learning: the cosmetic work to clean up the line that's running down the center of the face of all our characters, and the rotoscoping that allows us to do that.
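
To make the taping analogy concrete, here is a minimal sketch, assuming simple NumPy float images, of how a roto matte limits a cleanup pass to the area the artist traced while everything "taped off" passes through untouched. The function is hypothetical, not LAIKA's pipeline.

```python
import numpy as np

def apply_fix_inside_mask(original, fixed, mask):
    """Composite a cleanup pass only where the roto matte allows it.

    original, fixed: float images of shape (H, W, 3) with values in [0, 1].
    mask: float matte of shape (H, W), 1.0 inside the roto shape
          (the area to alter), 0.0 in the protected areas.
    """
    alpha = mask[..., None]                      # broadcast matte over color channels
    return fixed * alpha + original * (1.0 - alpha)
```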

Jeff Stringer: If I could just add to that.

Tim Crawford: Go ahead Jeff.

Jeff Stringer: I just wanted to mention how the problem has actually grown over the years since the days of Coraline. We used the facial kits, you know, so the number of faces in the film was actually constrained by the number of faces we could realistically print and process for the shots. But as the printing technology has developed and we've pushed that process further, we're doing more and more shots with completely unique sets of faces. I think on Missing Link we had more than 100,000 shapes printed for those characters. So as we push the nuanced performances we capture on stage, we've also created a bigger and bigger problem for our rotoscoping team. I think the size of that team probably doubled over the course of this thirteen-year period.

Tim Crawford: Wow. And so, if I delve further into the rotoscoping process and how much of that work you can do in a day, what does that look like? You talked about how many seconds of film per day you might be able to accomplish; when it gets to the visual effects team and you start into the rotoscoping process, how fast does that work go?

Steve Emerson: Well, for Missing Link we did a total of 1,486 shots that we augmented in some way in visual effects. Within those shots, there were over 2,000 character performances that all needed cosmetic work with rotoscoping and seam removal. For each of those 2,000 character performances, when we bid them out, and there are varying levels of complexity in what needs to be fixed exactly, we bid them out at about 50 frames per day, at an average shot length of 100 frames. So per shot we're expecting people to get the work done in a couple of days. But it's 50 frames per day, and there were 2,000 performances to be cleaned up on Missing Link.
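
Working through the figures Steve quotes (2,000 performances, roughly 100 frames per shot, bid at 50 frames per day) gives a feel for the scale of the cleanup; the arithmetic below is illustrative only:

```python
performances = 2000        # character performances needing cosmetic work
avg_frames = 100           # average shot length in frames
frames_per_day = 50        # bid rate per artist

total_frames = performances * avg_frames
artist_days = total_frames / frames_per_day
print(f"{total_frames:,} frames ≈ {artist_days:,.0f} artist-days of roto and paint")
# 200,000 frames ≈ 4,000 artist-days of roto and paint
```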

Tim Crawford: Wow, that's an incredible amount of work. How long does it typically take to produce a film? Before we get into the technology, when we look at the more traditional movie making process, where you shoot the puppets and then do the rotoscoping afterward, how long does it take to typically create a film?

Jeff Stringer: From the design?

Steve Emerson: You want to run with that one Jeff?

Jeff Stringer: Sure. From design to post-production, it can be as much as three years.

Tim Crawford: Oh my gosh.

Jeff Stringer: And if you include the story development time upfront it can go even longer. But a typical shooting period is about 18 months. That's after we've built everything and we are just capturing animation on the stages. And the nice thing about the way our visual effects team is integrated into the process is that they're working alongside the animators on the stages, completing shots as the animators complete them. So there is not typically a long post-production period after that. But it's a long time, and so it's a long process.

Tim Crawford: That's incredible. So, let's maybe move forward and talk about how you are addressing the challenge and how technology fits into this. Do you maybe want to delve into how machine learning came into cleaning up those lines in the face? In past conversations we've had, you talked about being emotionally connected to the characters through the face. Where does that line come in, and where does machine learning come into the process?

Jeff Stringer: Well, you know, I think first, Steve, you could probably speak to this: there is the choice to even do the cleanup, right? Which is a unique choice that LAIKA made. I think LAIKA's commitment to capturing as much as we can in camera, combined with our commitment to telling these stories in the most nuanced and realistic way, leads to that desire to actually clean those things up. Some companies may have chosen to leave that there.

Steve Emerson: Yeah, that was when I first joined LAIKA, during the production of Coraline. They were just coming off a great deal of debate on whether or not to even clean up those lines. And really what it comes down to, at least for myself personally, is that when I go to see a movie I want to be told a great story. I want to be taken to a place I don't have the ability to go to in my everyday life, and I want to be moved emotionally, and in order to be moved emotionally you have to have empathy between the characters on screen and the viewer. It just came down to the fact that if there is this big line running down the center of these characters' faces, it would be distracting, it would take away from the performance, and it would interfere with that emotional connection between the characters and the audience. So at that point the decision was made: we're going to clean up every frame of these films and effectively erase those lines. Over the years we've refined and streamlined the process, but I don't think we've taken as big of a leap over the course of these 15 years as we have, honestly, within the last 12 months. Jeff, you could probably tell the story of exactly how we started working with Intel.

Jeff Stringer: The machine learning solution was something that we were pursuing before we met the team at Intel. There have been a lot of papers, a lot of research, done in that field that point to the capability of using machine learning to do image segmentation, and there were even a couple of tools released, I think, that targeted the visual effects rotoscoping task. So it was on our radar when we became aware of this team at Intel that was looking for an applied, real-world problem to solve with the technology they had developed. We got an introduction to the person who runs that Applied Machine Learning team. He brought his group over, we did some meetings at LAIKA where we laid out our process and went through this particular problem of the faces and the seams, and they thought it was perfect. They wanted to solve the problem, and that became the collaboration.

Tim Crawford: Interesting. I want to delve a little further into the Intel partnership in just a minute, but before we go there let's talk a little bit about how machine learning was applied to the rotoscoping process and cleaning up the faces. One of the things that you've talked about in the past regarding machine learning is that more data is usually beneficial. But that's not necessarily the case here when it comes to these puppets and those lines in the face. You also can't just head toward perfection with the technology. Maybe you could talk a little more about what that means in your process of movie making and the rotoscoping.

Steve Emerson: Do you want me to—

Jeff Stringer: There are two things there. I think the question about more data is interesting. That was actually one of Intel's insights into this: instead of trying to make a very generalized tool that just recognized the puppet faces, they made a tool that was very specific to this task, and it turns out that when you do that, the data you need is just as specific instead of generalized. I mean, we weren't actually sure that was how it would work out when we started. We had access to computer graphics files for every one of these faces that gets printed, and we can render files, and we attempted some different ways to build a training data set, including rendering a bunch of files of the faces and even photographing them. We put them on a robotic camera rig and shot the faces from different angles. But it turned out that when you focus in on this particular task of creating a roto shape from tracking points on the face, a good five or six shots of well-designed ground truth data was enough to train the system.
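
Intel's actual model isn't described in the conversation, so the following is only a loose, hypothetical sketch of the "specific data for a specific task" idea Jeff describes: map tracked facial points to roto-shape control points with a simple regularized regression, trained on a handful of artist-annotated shots. The file names, array shapes, and choice of model are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical data layout: each training frame gives tracked facial points
# (flattened x/y pairs) and the artist-drawn roto shape's control points.
# Five or six well-chosen shots at ~100 frames each is only a few hundred rows.
X_train = np.load("tracked_points.npy")    # (n_frames, n_track_pts * 2)  - assumed file
y_train = np.load("roto_controls.npy")     # (n_frames, n_ctrl_pts * 2)   - assumed file

# A deliberately simple, task-specific model: with inputs and outputs this
# specific, even a regularized linear map from tracks to controls can be useful.
model = Ridge(alpha=1.0).fit(X_train, y_train)

X_new = np.load("new_shot_tracks.npy")     # tracks for an untreated shot - assumed file
predicted_controls = model.predict(X_new)  # artists then review and adjust these shapes
```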

Tim Crawford: You know, one of the things that's come up in our conversations in the past is that machine learning can be used to find those lines in the face, but you don't necessarily want it to correct all of the lines. And Steve, I think you spoke to this about protecting certain areas. How did machine learning start to create that opportunity, and what were maybe some challenges that you had to work through as well?

Steve Emerson: Sure. So, when we're doing this type of work, what we want to make sure of is that we're protecting as much of the facial performance as possible. The only things being impacted at the end of this process are the artifacts on the puppet face that need to be cleaned up; other than that, everything else is being protected. So that's very important. The thing about stop motion animation, at least for myself personally, is that I know it when I see it, and I think it's a visceral reaction. People understand when they see something that's physical, that's been photographed in real-world light, and it almost feels as though your action figures from when you were a kid have suddenly come to life and they are telling you a story. And part of that, I think, is that there are imperfections in those performances you are seeing. And the imperfections are being created by a human hand, as opposed to a computer, which will always strive for perfection unless you're teaching the computer in some way not to make it perfect.

Oftentimes when we're doing shot work and we're fixing things here at LAIKA, it's not always about making it perfect; it's also about whether it feels synthetic, or doesn't feel quite right. What's missing here? How can we muck things up a little bit and make it less perfect? And so when it comes to machine learning and what we're doing here, I think a very important part of this whole thing was that we wanted to make sure that, ultimately, this was a process that was artist-driven. So the machine would take a crack at it; it would give you back your tracking information, your roto shapes. But then at the same time there was an opportunity for a human being, for an artist, to actually assess what had been done, and also to have the tools to go back in and augment that in whatever way he or she needed to in order to ensure that it felt authentic.

Tim Crawford: That's really interesting. The combination of using machine learning with smaller amounts of data, but also not striving for perfection, is somewhat counterintuitive to the traditional approach someone might think of with machine learning. So, you touched on the partnership with Intel and where Intel fit in, and I'd like to talk about that for another minute or two: how they started off early in the process. And then maybe you can delve into, I really need to stop using that term delve, but maybe you can talk a little more about the actual technology that you were using as part of the visual effects process.

Jeff Stringer: Sure. I think one of the things that made the partnership with Intel work was their willingness to learn how our artists were doing the task in intimate detail. They actually came and looked over their shoulders and took notes; they know a lot about machine learning, but they don't know much about film making. So there was a definite exchange there, and it takes a little bit of effort to get your head around why we do things the way we do; there are not many people who make films like we do. So I give them a lot of credit for their patience. The question always comes up when you are looking at our process: why are you printing those CG faces, putting them on a puppet, and then taking a picture of them? Why don't you just render it like everyone else would? But it's like Steve says, we want to see the light on that face the way that it can only look when captured with a camera on stage. It is a bit of an odd process, and Intel's willingness to go there with us and then build a tool that was really tailored to this specific task was unusual. We've worked with other vendors and other partners on these kinds of problems, and people were always looking to come up with a general solution that works for every film out there; LAIKA's films are very specific.

Tim Crawford: It sounds like it.

Jeff Stringer: But you know, they understood that the way to be ultimately successful was to target a specific problem and put the data in the hands of the artists who are doing this every day, because they are going to know how to use it best. If you try to have the machine give you a solution 100 percent of the time, you're never going to get there. I think that's probably the thing that's most generally applicable about this whole framework. Going forward we can do that again and again: we just look for artist processes that are repetitive and consistent, and then try to build these kinds of tools around those systems to accelerate what the artists are doing, but not replace what they're doing.

Tim Crawford: That's great. So, when you think about where you go from here and how this is changing the movie making process, you've touched on the line between the halves of the face as being just one component. What other aspects are you thinking about when it comes to innovation around the movie making process?

Jeff Stringer: Well, like I said—

Steve Emerson: You want me to run with that Jeff or do you have it?

Jeff Stringer: Yeah, take a shot; you have got an idea for it.

Steve Emerson: Well, I certainly know where I would love to take this. We're building out these enormous worlds and we're trying to make bigger films, but at the same time we're also remaining fiscally responsible and aware of our limitations. So we're reaching for ways to streamline in order to be able to make bigger films on budgets similar to past productions. One thing that happens time and again now is that we create worlds where we have puppet performances with big crowds. You would never see big crowds in stop motion films prior to LAIKA, because there just isn't the ability to create that many puppets. So if you're going to create an enormous crowd of characters, unless you're going to go out and build a thousand puppets to put back there, you are going to need to infuse some digital technology.

What that means for us is, if we want to do something like that, and we did some of it on Two Strings and Missing Link, we need mattes, we need alpha channels, we need rotoscoping in order to be able to pull the puppet performances off the animation plates and to be able to put secondary background performances with digital characters back there. When we do that, sometimes we will shoot it on green screen, but we don't like that because it contaminates the light out on the stages. So ideally we just shoot the puppet performances out there and then we rotoscope. But rotoscoping is expensive and time consuming, so what we will oftentimes ask animators to do is shoot one exposure out there with the lighting as intended, and then shoot a secondary exposure where they take a small green card, just a little miniature green screen, and put it behind the puppet. We end up with two frames for each frame of film, and we can use the green-screen exposure to get the alpha channel we need in order to be able to put the crowd behind the puppet.

What would be incredible with this is if we could do full character rotomation, to be able to separate characters from animation plates without having to put green screens behind them. That would enable us to skip that second exposure, and then stop motion animators wouldn't have to stop to put the green screen in. They could just keep moving forward with the performances, because a lot of what they are doing is all about rhythm. And when I say they are getting a second a day, that is if they are lucky; sometimes they are not lucky because I am asking them to put green screens behind the puppets. We could move much faster, they could be much more attentive to those performances, and we could build different types of contraptions and things that we could attach to the puppets to create even more realistic and nuanced performances. But all of that starts with getting full character rotomation on the puppet performances. For us, that would be an enormous breakthrough for the studio.
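
As a rough sketch of the two-exposure workflow Steve describes above, assuming simple NumPy float images: the green-screen exposure yields a matte, and that matte is used to place a digital crowd behind the practically lit puppet. The keyer here is deliberately naive (real keying is far more involved), and the function names are hypothetical.

```python
import numpy as np

def matte_from_green_exposure(green_exposure, screen_color=(0.0, 1.0, 0.0), softness=0.4):
    """Very simplified keyer: distance from the screen color becomes the alpha.

    green_exposure: float image (H, W, 3) of the second, green-screen pass.
    Returns a matte (H, W) with values near 1.0 on the puppet and near 0.0
    on the green screen behind it.
    """
    dist = np.linalg.norm(green_exposure - np.asarray(screen_color), axis=-1)
    return np.clip(dist / softness, 0.0, 1.0)

def comp_crowd_behind(beauty_pass, crowd_render, matte):
    """Place a digital crowd behind the puppet using the standard 'over' composite."""
    a = matte[..., None]                         # broadcast matte over color channels
    return beauty_pass * a + crowd_render * (1.0 - a)
```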

Tim Crawford: How interesting. When you think about where you go from here, you talk about automating the rotoscoping process, you talk about avoiding dual images. What are some other potential areas that you are thinking about for machine learning and technology in general in the movie making process?

Jeff Stringer: Technology in general. I mean, there is certainly innovation going on across the film making process; we're not just innovating in our rotoscoping department and visual effects department. We're trying to find ways to accelerate and expand how quickly we can make the puppets. The puppets we use in our kind of animation are marvels of engineering in themselves. They take a long time to build, and they have to withstand the use of animators on stages for up to two years. We do all of our own mechanical armatures inside of them, so we're looking for ways to use computerized design and 3D printing inside the puppets, which we haven't done as much of in the past. There are other innovations going on with the motion control rigs, but as far as machine learning goes, we're targeting these repetitive tasks. There are a lot of them in visual effects that don't give you the same artistic control that you want, but that still have to be done, especially with our kind of film making. We're looking at common rigs that we use, to see if we can recognize those and pull them out of the frame. And the problem that Steve describes, of actually recognizing a full puppet body and creating the depth channel that could separate it, that is definitely something we want to look into next.

Tim Crawford: It's just so interesting to me because you are doing things that are somewhat counterintuitive to everything that we've heard about machine learning and AI kind of fitting into the mix and it is such an interesting approach that you are taking. You have talked about depth cameras; you've talked about image sensors, where does that kind of thing fit into the future process?

Jeff Stringer: It's something we've been working on for years. There have been innovations in this area; in fact, I think Intel has a depth camera that they sell, and they are inexpensive, things like the Microsoft Kinect cameras. It's the basic concept where you can use two lenses to capture geometry out of a hero image. That's something that has been difficult to make work at scale. I think most of the technologies are designed to work at a full human scale; for some reason, when you shrink things down to our puppet world, it gets harder to make these mattes. So we're looking for someone to work with us on that, and it's probably going to require some specialized hardware.
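
For readers unfamiliar with the two-lens idea Jeff mentions, the textbook stereo relation converts the pixel disparity between the two views into depth. This is the standard formula, not a description of any particular camera, and the numbers in the example are made up.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Classic two-lens relation: depth = focal_length * baseline / disparity.

    disparity_px: pixel offset of the same feature between the two lenses.
    At puppet scale the baseline is tiny and disparities get small and noisy,
    which is part of why these mattes are hard to pull at miniature scale.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# e.g. a 4,000 px focal length, 50 mm baseline, 400 px disparity -> 500 mm away
print(depth_from_disparity(400, 4000, 50))  # 500.0
```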

Tim Crawford: Fascinating, absolutely fascinating. Well, we want to move to the Q&A section. Let me maybe ask the first question. One of the things that you've talked about in the past is the specific technology that you're running on. Why did you decide to run machine learning on CPUs, on Xeon processors [Intel® Xeon® processors], and what advantages do you see of using that versus GPU?

Jeff Stringer: Well, it was one of the things that we liked about working with Intel on this. We knew that we had already made a sizeable investment in the [Intel] Xeon CPUs; they cover all of our workstations and our render farm. If the Applied Machine Learning group at Intel was going to build something, we knew they would be able to optimize it for those CPUs. In fact, that was on their own roadmap of what they were trying to do, and I think that's an advantage. Anytime you have to introduce some new GPU appliance or some other technology into the workflow, it's difficult. So one of the things that we liked about the collaboration was that it promised a tool set that would fit into our existing workflows and run on our existing hardware without major new investments.
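
The collaboration's actual tool isn't shown in the conversation; purely as an illustration of CPU-only inference on existing hardware, the sketch below runs a placeholder PyTorch model across all available cores. Frameworks such as PyTorch ship with oneDNN-backed CPU kernels, so no GPU or new appliance is assumed. The model, feature sizes, and inputs here are all hypothetical.

```python
import os
import torch

# Use every core on the workstation or render-farm node for CPU inference.
torch.set_num_threads(os.cpu_count() or 1)

model = torch.nn.Sequential(                 # placeholder network, not LAIKA's tool
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 64),
).eval()

tracks = torch.randn(1, 128)                 # hypothetical tracked-point features
with torch.inference_mode():                 # no gradients needed at inference time
    shape_controls = model(tracks)           # predicted roto-shape controls for review
```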

Tim Crawford: Next question. You have talked about perfection, and we talked in the past about how machine learning leads to perfection. The question is: do you worry that machine learning will get too good and take away from the emotion of the film? You talked about connecting with faces and feeling that emotion; Steve, I think you mentioned this, when you go in and watch a movie you want to be taken to another place. Do you worry that ML will be too good and take away from the emotion of the film?

Steve Emerson: You know, I don't think so, Tim. I think that as long as you always have human eyeballs in the mix, you are going to be okay, and that's pretty much my job: after we do any kind of post treatment on a given image, it's put in front of me before I put it up in front of the director and say, okay, this looks amazing, or this is incredible. It's that human eyeball check that makes sure it feels authentic. But again, it comes back to the story. One of my favorite anecdotes is that somebody was once asked, why would you ever paint a picture on canvas when there is Photoshop and a Wacom tablet out there for you to use? It's just a different way to go about things, you know. It's a different way to express yourself.

And you know, we're creating stop motion films and we're infusing them with technology, and people will always ask, well, where is the line? The line for us is that we're going to get as much in camera as we possibly can. We're going to make sure that stop motion animators are driving the performances of all of our lead and hero characters in the films. We're going to build as many sets as we can, and when it comes to the point where we would have to limit our storytelling, we're not afraid to jump in and start using technology in order to be able to tell it. We want to do it in a way that's ultimately respectful of the craft, and fortunately here on the visual effects team it's not only my eyeballs but also the supervisors and leads who support me, who are all passionate about stop motion film making. We've also got some of the most talented set builders, stop motion animators, and lighting camera people that we can pull over here and show shots on our computers and say, "Hey, does this feel authentic? Have we mucked things up?" We can do the same darn thing with something that's come out of the machine learning process. It comes down to streamlining in order to be able to embolden ourselves as storytellers.

Tim Crawford: That sounds good. So the next question is somewhat related to that. How do you balance computer-generated graphics and rotoscoping with machine learning? I think that's kind of tied to not aiming for perfection, still having a uniqueness in the puppets and the process, the artistic freedoms if you will. But how do you balance between those two? I think that is a great question.

Steve Emerson: What you are getting at with that question, Tim, is how do we decide what's going to be digital and what's going to be puppets on stage. Did I get that right?

Tim Crawford: Yes. I think the question was just more of why not just do computer-generated graphics as opposed to rotoscoping with machine learning. You start to get to a degree of perfection potentially with computer-generated graphics.

Steve Emerson: Well, I will take the rotoscoping with machine learning, to be honest. It goes back to how we make these films. We start with the script, and then there are storyboards that are created, and animatics, which are animated storyboards. Then you have a bunch of people from here at the studio who are creative leaders piling into a conference room with the director and going through it shot by shot by shot, trying to figure out how we're going to execute a given visual for the film. When we're having those discussions, I am in the room and I keep my mouth shut until there is a problem, because we're going to get that visual in camera if we can. And if for whatever reason it's physically not possible to, say, animate something that's like the finest of particulates, or if it's resource driven, where a given department just doesn't have the manpower to pull something off, then that's when I jump in and start talking about solutions that we can do digitally. But we always start with getting it in camera. What's really exciting about machine learning and AI is that it's going to empower us to do that even more.

Tim Crawford: We only have a few minutes left. Let me try to do more of a lightning round with you on a couple of questions here. It sounds like this is all happening, and this is one of the questions from the audience, it sounds like this is all happening on-prem; is that true? Are there any pieces of your overall production process that lend themselves better to the public cloud, and do you see that happening over time?

Jeff Stringer: It is on-prem. We are optimized for on-premises work; that's just the way that we built our studio, because it's been a priority for us to keep everyone on site and collaborating together. I can tell you that the pandemic has been challenging, and we have more and more need to make the data available off site in a more convenient way. And so we are looking at cloud infrastructure for those purposes. Does that answer it?

Tim Crawford: Yes, I think that's great. From a visual effects perspective, has adding artificial intelligence and machine learning required a different skill set for those looking to get into visual effects? I think this is a great question.

Jeff Stringer: Yeah. I mean, Steve, that's probably yours, but I would just say no. I think that there is a trick and a cost to building the training datasets, and that does require some specialized knowledge, and it could be that as we introduce more and more of this, my team, the production technology team, would have a data scientist role on it. But as far as the visual effects artists go, the painting and rotoscoping and those crafts are still going to be there; those are the ones we're going to be enabling.

Steve Emerson: We want to make sure that ultimately the tools being created are as intuitive as possible, so that we're able to bring in artists who are familiar with rotoscoping and paint work. And again, it comes back to streamlining, speeding up the workflows. On our side is James Pena, our roto paint lead, who's been spearheading a lot of this development. He came up as a roto paint artist here at the studio as well, so a big part of him being in the mix is that he can look at it from that point of view and say, all right, if I am coming in as a roto paint artist with limited experience in terms of how they make films here at LAIKA, I can use this tool and I can use it from day one. It's that intuitive.

Tim Crawford: Wow, that's great. So, I think we have time for just one last question. Has adding artificial intelligence and machine learning to the rendering process increased the time it takes to do the actual rendering and does it require a different server infrastructure to support it?

Jeff Stringer: It has not increased the rendering time for the shot. There is a training period that runs overnight when we're building the training for a new character, and that's separate from the shot rendering; it can be done on a per-character basis, not a per-shot basis, so it doesn't have a big impact on that. And as far as the infrastructure goes, that's one of the benefits of working with Intel on the oneAPI [Toolkit] set of applications: it's all optimized to run on our [Intel] Xeon CPUs.

Tim Crawford: That's great. Okay, that's all we have time for in the Q&A. Jeff, Steve, thank you so much for joining this really insightful, behind-the-scenes conversation about the movie making process. We've had a backstage tour of stop motion animation and are really grateful for your time.

Jeff Stringer: It was fun.

Tim Crawford: Mary, I'm going to turn it back to you to close out this particular webinar.

Mary Killelea: Wonderful, thank you very much. It was a great conversation, and thanks to everyone for joining us today. Please look for other exciting customer spotlights that highlight data-centric innovations coming soon.

Download the transcript ›