Description
Key Learnings
- Understand the pros and cons of using CG integration onto live plates
- Discover or better understand the process necessary to be successful
- Appreciate how HDRIs can assist the lighting and impact your final images
- Learn about putting all the pieces together and expanding your expertise of still images into image sequences
Speakers
- Ludovick Michaud: With over 20 years of experience, Ludo Michaud has spent most of his career in the creation of visual effects for television and film. He has worked on over 8 feature films, well over 1,000 commercials, countless game trailers, and even 3D rides. He's a mentor for the Visual Effects Society, he built training books/tools for Softimage|XSI, taught seminars at Gnomon School of Visual Effects, is a member of the advisory board of a few of the community colleges around Dallas, Texas, and has consulted for the University of Texas at Dallas on curriculum. He joined Corgan MediaLab to help elevate their 3D/FX department for visualization and he's now an associate within the firm. Ludo keeps elevating the studio through new techniques and new tools, as well as helping find the right material for the group and teaching through the firm's internal college program. In addition, Ludo teaches for NAD (National Animation Design University) around the country, starting new programs to teach visual effects.
- Eric Craft: Eric Craft has been doing design visualization in 3ds Max software for over 15 years. He started his career doing design visualizations for events and tradeshow booths. After 9.5 years working in the aerospace industry doing marketing product visualizations, he joined Corgan MediaLab as lead modeler. As an associate at Corgan MediaLab, he is responsible for bringing the company new concepts in modeling and scene layout. His Associate of Arts degree in advertising and Bachelor of Business Administration degree in management enable him to bring business-oriented ideas to the visualization process. Craft's education and experience with the 3ds Max software communities have given him the honor of attending two 3ds Max Gunslinger events at the Autodesk, Inc., offices in Montreal.
[MUSIC PLAYING]
LUDO MICHAUD: Good afternoon, everyone. I'm Ludo. Ludovick Michaud, your speaker today for the class of Architectural CG Integration to Live Footage. Speak about me-- let's talk about me for a second. I'm the VFX creative director of the Media Lab and at Corgan in Dallas, Texas. Some of you should know what Corgan is, a little company of about 600 people, 600 architects. And we have that little base in the back of the building, which we call the man cave with 20 guys in it. Which we call ourselves the Media Lab. And we help them create their stills, their animation, their interviews, their documentaries, their graphic design. We do a lot for them, basically, for the whole group, for the whole 600 troupe.
Me, personally, I've been doing production in media entertainment for about 20 years, a little more over that. Film, television, commercials, mostly. I've done a lot of game trailers. And I decided to join the lab about three years ago to help them improve their cinematographic eye to the architecture. I'm from Quebec, Canada, so if you have trouble hearing my accent, it's OK. I won't take it personal.
And as you can see, working with some crazy people right here. You might have seen this guy and this guy earlier today and yesterday doing some drone classes and storytelling classes. I do fly drone myself. This is part of my little team here. This is a workspace we have right there. And it's kind-- it's bigger than that, but-- so this is me. And then we have Eric, who is going to be my co speaker. He's my little modeler in my troupe. He's leading about three guys, two, three guys, helping improve the workflow between the architects and us, as we get most of our content from the architects. And it's very clean all the time, very nicely done in Revit or SketchUp or AutoCAD. So we have to have people to help us make it look better.
And Eric, 15 years. I called him about a year ago. He was working for Bell Helicopter and doing pretty much the same thing he's doing for us at the lab, doing-- helping the designs and making the images pretty and so on and forth for marketing and such. He's a gunslinger for Max, so he's a guy that's been up at the-- in Montreal a few times to give his-- share his thought about what Max should look like. So if you have any problem, you can always talk to him about why it's-- no? OK, no.
And today-- this year is his first time at AU, and today at 1:00 was his first class. So now he's in his second class. That's pretty cool stuff. And so basically what we're going to do today is run through this stuff, learning objectives, and so on and so forth, the big picture stuff. So I'm going to skip that, so I can win some time. Usually I give this class in college over one or two semesters. And I was asked to reduce it to two weeks about five years ago. And now I'm down to-- last year was an hour and a half. Now it's an hour. So I don't know. Maybe they just want to compress the data. I don't know.
Maybe VR is taking over. I don't know. But basically, what we're going to learn about it today is why do we want to put our buildings, our designs, into a live plate? Why is it so important? Why does it help? Why does it-- where it can hurt? We're going to talk about the process itself. So we're going to go by the steps. Because a lot of you are saying, well, I can shoot my footage, but I don't know what to do with it next. So we're going to try to help you understand how to get that stuff. And also we're going to talk about how to help you use your footage that you currently have, and help your lighting, and the quality of your renders to make it more realistic at the end of the day.
The whole point of this is to make your environment real, make this stuff look right, and look as close to reality as possible. Obviously, we can use it in massing, in phasing as well, but those are different looks and so on and so forth, which we can talk about. And then putting it all together-- what it looks like. And that's what this class is about today. I'm speaking a little fast because I have an hour. I've got two semesters to cram into an hour. So let's do that together, right? You guys are sitting comfortably? Yeah, good.
So let's start with the beginning, the pros and cons. So old wives' tales-- which is not tail, but tale.
ERIC CRAFT: Yeah. I swear I checked for all-- the typo. I missed that one. Thanks, PowerPoint.
LUDO MICHAUD: Good start. Good start. And this is being recorded, darn it. So people think putting a real-- putting a CG element into a live plate might cost dollars, because usually you see it in movies, not homemade. And usually when it's made homemade, it doesn't look very good. So you're questioning, can I do that myself? Yes, you can. You have all the tools, most likely. If you have an iPhone, if you have a cellphone, if you have After Effects, if you have 3ds Max-- which most likely you have.
And if you have a compositing tool, of course-- you have After Effects or Photoshop-- you most likely can do all that stuff. So that means you already have most of the tools. Now it's how to use them to get there, right? It can be an effective communication tool when you're on a budget. So what happens is that obviously, when they come to us and say, hey, I need help with this. This is how much I got.
And coming to me, coming from production side, I'm like, you got what? You got a penny and dollar? Great. So let's see what we can do with this. Well, because using the quick tricks and the way to do it on the quote unquote cheaper way, you still can make something very pretty and still fast on a very small budget. And because it's small budget, it becomes very powerful, because now your client, whoever your client is, gets to see what they want to see, whether it's a car in the environment. They're supposed to work the car into the environment, or a building into-- what we did late two years ago, we took a mountain hillside and we took it down, and we just put the building where it should go.
So suddenly the client was like, oh my gosh. This is what it's going to look like? And they fell in love with the result, because they finally understood. Instead of just looking at sites and stuff like that in stills, they finally understood what it looked like, really what it was going to look like four years later. So it was very effective, and they really liked it. When do you use it? Not when you're doing a design phase. So the problem with design phase, is usually the client doesn't want to think that it's over already.
And funny enough, a lot of pieces that we've shown in the past, people are like, is that already built? So you don't want to make them think that it's already built, or it's already done, or that's the only idea you have. So usually you want to bring that when you're towards the part where you're ready to build construction, stuff like that, to give the client even, hey, let's close that design. Let's say we're done with this.
We like to use massing and phasing to explain. Massing is the volume they'll take in the space that they want to take. So a lot of time we have-- at Corgan we work on a lot of commercial, and our airports, and health care, education, critical facilities. So they all take a big, big footprint on the space they take, right? It's a big footprint. So usually that means they're going to have to tear down what they have there and then show what it's going to look like. So for us it becomes very useful for that, because now we can share what the environment's going to look like, what the result will be.
So from there, we're going to talk about the best practice. So one thing we're going to talking about is pre-viz, also known as a pre-visualization. Being French, I like to say pre-viz. It's easier for me. That means we visualize before we put-- we make the final piece. That means somebody-- we go to Media Lab, and we say, hey, guys can you show me what it's going to look like if we wanted to do a camera from this point of view? So usually what we're going to do, we're going to go Google Earth. We're going to say, hey, this is what-- they want to take a picture.
We kind of mock-up a quick camera, say, hey, this is what you're going to see. Client approves what they want to see from the sites. And then at that point, we can go to the-- send that-- the pre-viz result to the shooter, the film, the guy who's doing the film with his drone or camera or whatever he's going to use. And he has an idea where, first of all, where the building's going to be. So he knows what to shoot. Because usually when we don't do the pre-viz, what happens is we have to guess where the building's going to be.
So with the pre-viz you get to see where the building's going to be, actually. So now your shooter knows-- has an idea where to shoot. Because when they don't know where the building is going to be, they guess. So that means they're going to overshoot, most likely. So that means they're going to shoot with the building not in frame, things like that. So it's not going to look as good or impactful. So let's say we've done the pre-viz. Then we go to frame rate.
So I know if you go on the web right now, you go onto YouTube, you say, I want to do camera tracking. How do I do camera tracking? Most examples you're going to find are a kid or-- no offense to people that do tutorials. I do some of those myself. But usually you're going to find a kid that's going to go and shoot a small environment which he has control over, meaning, I can move a chair here if I want to, right? So they have control over the environment. They can move the table. They can take things off of the screen. They have control over the environment.
So when you view a tutorial on camera tracking, you go, oh, this is beautiful. It's so easy. But then you put your drone 300, 400 feet in the air, and you can't control what's happening in your scene anymore. And trust me, I tried to put a marker, because I needed to see for myself that it didn't work. And the best I had was a 25-by-25-foot X on the ground. So it's not really workable. It's quite a pain. And it was useless. So with that in mind, you have no control over your environment, basically.
So tracking becomes a different beast at that point. It's a different place. So to help you with that, the few things you're going to do-- first is frame rate. So when you shoot drone, it's all pretty when it moves. But if it moves too fast, it becomes motion blurred, right? You know when you're shooting, you're doing this-- you see motion blur, right? Actually, you should see a motion blur right there. So what happens is that motion blur doesn't help you track what you need.
So a higher frame rate will help minimize that motion blur, and give you a better tracking capability, what you need when you track your footage. Shoot the grid. Well, each-- cameras come with a lens distortion. So as you know, most of you know, when you build a lens, a lens is only as perfect as the tool that builds it. And no lens is-- no lens is built perfectly round, symmetrical and perfect. So the grid that we talk about, I forgot an example of that, again, skipping time. But it looks like a grid, basically. And you put it in your frame.
And if you look at it, you'll see that the edges-- the grid is not straight anymore. It's kind of deformed, distorted. So that's the distortion of your lens. You need to know that so you can flatten the grid back to a normal perpendicular grid. So you can put your CG-- because your CG is not distorted. Your CG is perfect. So you want to be able to distort it at the end, which we'll show you in a minute-- a little later.
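To make the grid idea concrete, here is a minimal sketch of measuring a lens's distortion from a shot grid, using OpenCV rather than the speakers' actual tools; the 9x6 checkerboard pattern and file paths are assumptions. The same coefficients can be used either to flatten the plate or, inverted, to distort the CG at the end, as described above.

```python
# Hedged sketch: solve for lens distortion from frames of a printed grid,
# then undistort a plate frame so straight lines are straight again.
# Pattern size and file paths are placeholders, not the talk's setup.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the checkerboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("grid_frames/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix K and distortion coefficients for this specific lens.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Flatten a shot frame back to a "perfect" perspective image.
frame = cv2.imread("plate_frame.jpg")
cv2.imwrite("plate_frame_undistorted.jpg", cv2.undistort(frame, K, dist))
```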
Then you can shoot 360 cameras, which is always fun, but it's a lot more work. Software now-- like Premiere, Nuke, Fusion-- they all come with these stereo or VR camera kits, which help you open those frames and look at them normally, not all distorted and weird and looking like a fisheye lens. The next thing you want to make sure of is the highest resolution. When you're 300 feet in the air, 4K is your best friend; 2K is not.
There's too much compression in the frame. So 4K is what you want to use. 4K and up is what you want to shoot. I know it's heavy. It's a lot of footage. It's big. It's going to get reduced at the end. But the information you get out of the 4K is much more valuable than what you get out of the 2K. No matter what-- those little Inspires, or those drones, or the GoPros and stuff-- when they shoot, they have to compress, because they have to shoot at a high frame rate. And they have to compress the footage in order to shoot the frames fast. Well, that breaks the quality of your image, thus breaking your tracking, or not helping your tracking anyway.
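A quick back-of-the-envelope on the frame-rate point, assuming a 180-degree shutter and a made-up pan speed of 600 pixels per second across the plate (both are assumptions, not numbers from the talk):

```python
# Blur per frame is roughly apparent speed times exposure time.
# With a 180-degree shutter, exposure is half the frame interval.
def blur_px(speed_px_per_s, fps, shutter_deg=180.0):
    exposure_s = (shutter_deg / 360.0) / fps
    return speed_px_per_s * exposure_s

for fps in (24, 48, 96):
    print(f"{fps:3d} fps -> ~{blur_px(600, fps):.1f} px of smear per frame")
# 24 fps: ~12.5 px, 48 fps: ~6.2 px, 96 fps: ~3.1 px --
# crisper features at higher frame rates give the tracker more to lock onto.
```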
So the next thing on the list: when you want to shoot, you want to shoot with the golden hours-- anything with high contrast. So overcast is not your friend. Overcast has no shadows. Overcast is lacking contrast. Overcast is flat. So again, your tracking doesn't really know what's happening when everything is flat, so the more contrast you can get, the better. So a nice noon sun, or between 10 and 2, is very good, and no clouds or barely any clouds.
As few clouds as possible is very important, because the clouds will move against your movement itself, because they're moving in a different direction. So suddenly your tracking is like, wait, am I tracking the camera or the cloud? So as few clouds as possible is always great. We'll figure out ways to remove that. And then of course, the edit for the rough cut, so you know exactly how much you have to track. And then locking the shot-- which usually, when we do full CG stuff, the camera is never locked. And it is never locked.
It's quite a pain, because you work two months on a project, and suddenly the guy's like, I don't like this camera. You're like, I'm done with it. No, no, no I don't like this camera. I'm sure I'm hearing-- I'm not the only one that gets to deal with that. But yes. What's nice about this, you get, with the rough edit from the editor, you can get to lock your shot faster. And then you have your parameters to which you work with. So you don't track five minutes of footage for just 30 second of it. So you don't want to track five minute. Trust me. It doesn't work. But 30 seconds usually works pretty good.
So here's an example of shooting without pre-viz. So this is the original plate. And I have my fight with my internal staff sometimes, I say, I need to give you pre-viz, so you know where the building is. Why-- you told me it was going to be somewhere there. Well, great. So the problem is, no, the site is there. So what it does-- well, at least they shot at 4K, OK. So that's a nice thing.
So because of that we were able to recenter as best we could, to reframe the footage to give us the building where we needed the building. But you see how the 4K is important, not just for tracking, but also for helping yourself so you can re-center your footage. But also knowing where to shoot. So unfortunately, going back to this, it took us about four hours to get all the footage. We were going from one school to another, which was three miles away. We had to do the whole travel with the drone, which is very interesting-- to drive while you're recording a drone. But it was a very fun experience.
So you can imagine, one guy's standing-- is kind of sitting out of the sunroof, and he's got the controller. And somebody is like, turn left, turn left. A very interesting experience. That was fun. I was driving. There's a car coming. No, keep going. Keep going. So again, the reason the pre-viz is very important to all people because even at the end, no matter what I did, we would have wanted the building to be more centered, right? With this, I can't do it. That's the best I could do because of what I had access to, but this is why pre-viz can be very important.
It seems very-- I know people will think it's an unnecessary step, or a longer step, or taking more time. But sorry, guys. Those two hours you'll take to do the pre-viz-- it shouldn't take more than that-- will save you a lot of time and energy, and the client saying, well, my building is not centered. Why not? Well, I can't move the photo more than that, or have more than that. You guys ever had-- the guy's like, can't you-- you have a picture. Like, can you see what's behind the picture? Yeah. I love those people. You're like, yes, I can, sure. That's the kind of-- so it was like, well, you shot the image. Don't you have the-- no, I don't have what's next to it.
Next. So we talked about the process a little bit earlier. One thing we do is we talk about what we want to try to show to the client. Are we doing massing? Are we doing phasing, or doing a final piece to show? In one of the pieces we're going to see, the client wanted to see Fort Worth. I live in Dallas, so we have Fort Worth right across the street. And they wanted the school-- whatever we were going to build for the school, they wanted to make sure that you could still see Fort Worth through the school. So that was one of the important things. So we knew when we did the pre-viz that we had to make sure that we knew exactly where Fort Worth was, and so on and so forth.
So we recreated a version of the current establishment, and then we did the pre-viz with a Google picture to give us an idea where things are. So from there, when we went to shoot, you'll see that shoot is much different. Because now the building is literally the-- what we want to be the star is the star. So we did the pre-viz. We shot the pla-- the film plate. We talked about frame rate. We talked about the contrasts. We talked about-- what else did we talk about? Talked a lot about stuff already.
There's distortion, yeah, tracking, cleaning. What's nice about tracking, if you're too busy-- which happens to a lot of us-- there's cool companies out there. They'll be more than happy to track the stuff for you at a very nice cost. Usually, the company we deal with is called BOT VFX, so B-O-T VFX dot com. I'm not putting in a plug-- I'm just saying it because that's the people we use. I shop a lot, and I have over the years, and that's one of the best. They actually do a lot of movie tracking and stuff. They do most of the tracking for movies, big blockbusters and stuff. And they charge, I'd say for 10 seconds, about $100, which is very cheap; considering my hourly rate, I'm pretty sure that goes to a very nice cause. Of course, then we have the 3D creation. We bring the-- now we've tracked the camera, so we know exactly, in 3D space, what the camera did in real life. Now we have to bring it back to 3D. We have to integrate that. We have to light. And then we have to render. We have to comp. We'll talk about software we can use and the steps. I don't even know why "steps" is there, but it shouldn't be there. Sorry.
And case study. So let's look at the final product of Melissa ISD, which is the one that didn't have the pre-viz. We'll talk about the multiple layers that were involved. We'll bring the camera into Max. We'll see the final result with the 3D lighting. And then we'll clean the-- we'll show the cleaning the plate process. And then the compositing with Nuke. Which by the way, this is not a plug for Nuke. Nuke is just the tool we like to use. But it's very usable, easily usable, with After Effects. Photoshop, Fusion, Composite-- is it Composition, Composite with Autodesk? Flame, Inferno, all these guys.
And then we'll show you the one, the Fort Worth ISD, which is the one I was talking about. We're looking at Fort Worth, and we actually show the pre-viz. So you'll see the result of that. So here's the final result with some of the layers. So I'm going to play that again. Can I play that again? Come on. Can you click one-- All right. Try that. Just and do it. There you go.
So this is the footage. This is the final result. But then you see all the multiple layers. There are more layers than that, but we're giving you the big picture. Tracking, then the elements that we added to make it real. One thing you'll notice is that-- cut off my laser, but-- here, in the original plate, you'll see in a minute, that grass doesn't exist right now. This is literally a trash site. At least it's not an Indian burial site, so kids don't have to worry about that. No. OK? No. I'm trying, guys. So this is the final result itself. So the hours of shooting, we compressed that down into this. It was a five-minute piece.
[MUSIC PLAYING]
So you see the [INAUDIBLE].
OK. Tracking. Internal versus outsourcing. It's all about money. It's all about time. It's mostly about time, and then money. Internally, guys, it's not hard to learn how to track. The more tracking you do, the more you'll get the hang of it. The first one might not be perfect, but the 10th one will be very good. And when you've done about 150 or 200 like I have, it just becomes second nature, right? But there's a lot of good stuff out there. Just keep in mind that what they're showing you is very practical. And what we deal with in architecture-- which is sites, surveys, and stuff like that-- it's not as practical as what they show you, so always keep in mind those little facts.
So again, it's up to you if you want to go out to get your tracking. Usually when the tracking information comes back, it's very easy to work with and deal with. But internally it's always fun to do it yourself so you don't have to pay anybody. But you can do it yourself, and have a little fun experience. In fact, I was tracking last week. Footage constraint, don't forget about the contrast. Have lines. What I mean by lines is-- if I go back here for a second-- so, well, what I mean by lines, is in this footage here, you can see there's lines. There's a street. There's that separation of the floor. There's the tree lines. There's a lot of information that the software can use to help him-- to help it figure out the information of lens distortion, or tracking information, stuff like that.
So keep in mind, it's always useful. So again, with the contrast, you see the shadows are pretty predominant, so I can-- it's very easy to be able to track that stuff when it happens, right? With those lines, it's very useful. Obviously, you can work without it, but you'll have better results faster if you have some of it, right? Shoot a high frame rate, again. Again, that depends on the software-- the hardware you have. The Inspire can do 48, 72, and 96, I believe. And we have an octocopter-- one with eight blades instead of just four.
So we shoot-- we take the GH4, the RED. And we just go and send the RED upstairs. And we just have the RED shooting the footage. It costs a little more. But again, it's all about what you can afford. But then we shoot at 96 frames per second. There's barely any motion blur to deal with, which is great. The cleanup. Well, the cleanup-- things to think about when you clean up is, like I said, the original-- I think we're going to go to-- no.
I'll show you in a minute, when we do the Photoshop part. You'll see that the original footage has a big trash site. We have to remove that, because obviously we want to show something to the client that doesn't-- won't exist once this building is there. So we have to clean that up. Be mindful of rotoscoping. Anybody doesn't know what rotoscoping here is? So rotoscoping is, basically, when you cut somebody out of an image, just take them out so you can put something behind them. So you're in Photoshop, you just cut them out with a path or selection. Take them out, you put something behind them.
Rotoscoping is, you do it per frame, at every frame, OK? The important thing is to understand that the trees, for instance, will go in front of the building. You don't want the trees to go behind the building; otherwise it will break the client's imagination that the building exists, if the tree that's supposed to be in front isn't in the right place. Photoshop is a great tool to paint. That's usually where we start painting our base plate. And we'll use the result-- well, obviously, the 3D team will use the result to make their scenes. And the compositing team will use the back plate and the tracking to put more stuff into the image, and make it more compelling to the client in the process. So, clean the plate.
What frame to use? So you have your footage. We set up the footage. You have to find the best frame to paint. The best frame to paint is the one that has the most information possible. So if I were to start shooting here, I'm turning like this, and I go here, and I want to remove some-- whatever that nice little design is in the center of the frame right there-- I'm going to take the frame that has the most information in it. This one has almost no information, which is, nobody is in that frame, and all I see is a wall.
But when I turn here, I see everybody. And I have even the floor. So I can take that one still and start painting it. Make sense? Again, guys, I'm going a little faster because it's a lot of theory to go through in a small amount of time. So we'll go for-- let's go to Photoshop. Sorry, guys. I'll be right there. Where's my Photoshop file? Where's my Photoshop file? Why am I not seeing my Photoshop file? It should be right here. Of course, it's not. Why is it not? Sorry, guys. I'm going to get there. There you go. So, OK.
So this is the original footage. Can we see it OK? So for that one shot we had to find the best-- the one that had the most information. As you can see, there's a trash site on the side. There are the people shooting the drone, right there in the middle of the screen-- which is the beauty of shooting a sequence like this; it's very easy to remove that kind of stuff. So finding the best frame, we were able to go and paint what we needed out of it. And then we start painting. And then we add this. And so now we have a full site.
So this image, once you have the tracking-- so imagine now I have a 3D camera that reflects what my live camera did in real life. So using this, now I can use that in my compositing software, and put that as my back plate. And we can believe that-- make people believe that this is real. So as you can see that's pretty easy to paint that stuff. Literally, that took about maybe an hour of work. So then you go back to this.
So other tools that exist are-- you can do this in After Effects. You can do that in Gimp. I don't know if any of you have even heard of Gimp. I'm sure you've heard of the term gimp, but not the tool Gimp. You guys are tough. Man, I know-- I know that the band yesterday wasn't that good, but gosh, they should have killed the whole-- hashtag with-- what was the? So anyway, so for the shot now, we've cleaned the plate. And then we go to Eric, who's going to show us what to do with that plate.
ERIC CRAFT: All right, guys. Let me switch over here. OK. So typically, what I'll get back from the track team is either an FBX or an [INAUDIBLE] file. And what that will end up giving me, when imported, is something like this. You will typically have some sort of helpers, a camera, geometry that gives you some sort of reference to the site, what was there, so that you can use that to help line it up against the back plate.
What you typically won't get with this, is something that is to scale. Because the tracking software does not know how big the site is. All it's doing is tracking pixels within the background plates. So it doesn't know if it's 1 foot or 100 feet. So let me switch over to this guy here. All right. So this one, I've already brought in the background plate. And what I've done is switch to wireframe. So we can see that the helper grids that Ludo set up in a track now show up in the 3D scene against the background plate.
And if you haven't done it, to set your background plate: Alt+B, Use Files, make sure you have Animate Background checked down here, and then select your footage here. That will actually put it in the background. Now if you're like a lot of people, you do that and you go, it's not doing anything. Well, they hid another preference option under Viewports-- Update Background While Playing. Make sure that's enabled, and now the background plate actually updates with the camera being tracked. So now you can see, OK, let's grab here, and let's verify.
All right. That is sticking as close to the road as I think we can get. We're in pretty good shape here. So we'll end up saving this out as a Max file. And then what we end up having on the rendering side is, we'll bring the actual model together, the background plates, and then the camera track. So the model is in the scene. And this is what I'm talking about, where the camera does not line up, because it's nowhere to be seen. So what you want to do-- I've got these brought in as XRef scenes, and what I do is, on my camera file I'll bind it to a helper, like a point helper.
The reason I do that is you want to scale. So we know the building and the site and all that is accurate. We do not know the accuracy of the camera. So we're going to scale the camera to actually fit our model. So I've scaled it. I've rotated it and positioned it for this. I just used a list controller so that I can turn everything on and off. But you should now see that our model and everything matches and tracks along with that.
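A toy, numbers-only illustration of the scaling idea Eric describes: the tracked camera comes back in arbitrary units, so a uniform scale (the same thing the parent point helper does) is applied until a known real-world distance matches, leaving the building at its true size. The points and the 300-foot distance below are placeholders.

```python
# Scale the camera track, not the model: fit arbitrary track units to feet.
import numpy as np

cam_positions = np.array([[0.0, 0.0, 0.0],
                          [1.2, 0.1, 0.0],
                          [2.5, 0.3, 0.1]])       # tracked camera path, arbitrary units

p_a = np.array([0.4, 0.0, 0.0])                    # two tracked ground points...
p_b = np.array([3.4, 0.0, 0.0])
known_distance_ft = 300.0                          # ...whose real distance we know

scale = known_distance_ft / np.linalg.norm(p_b - p_a)
cam_positions_ft = cam_positions * scale           # equivalent to scaling the parent helper
print(scale, cam_positions_ft)
```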
So the important thing here is scale what you know not to be accurate. We know the camera track, while it tracks points accurately, it doesn't know the size. So scale your camera. When I've worked with other companies, whenever you did that, they'd scale the model. And so your 100 foot building is now an inch off the ground because the track was small. And then it's like the lighting looks wrong and the shadows are wrong. Well, what did you do?
Oh, we scaled the-- no, you don't. We know the building's good. Leave the building alone. Scale the camera. So from this, we will go in and set up lighting and renders. And here's a rendering against the background plates. One important note, whenever you do the rendering for compositing, make sure you disable the background. Because what you actually want to see whenever you render is black. Because if you're putting this over the background, and you've got the actual background plate in the background, if you try to use an alpha to knock it out, you're going to get noise around the edges of the track. Because it doesn't know what to do with those additional colors that you put in there.
Black, it can filter that out, and it can give you a correct result. And then with this, we typically have a large number of passes. We use material IDs from V-Ray, object IDs, as well as just standard beauty-type passes: diffuse filter, global illumination, lighting, matte shadow, things like that. So, Ludo, I'll let you go ahead.
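For reference, the math behind rendering over black with an alpha is just a premultiplied "over" onto the plate. Their comp happens in Nuke or After Effects; the numpy/OpenCV version below only shows the operation, with placeholder file names.

```python
# Premultiplied "over": CG rendered over black, alpha knocks a hole in the plate.
import cv2
import numpy as np

cg = cv2.imread("cg_render.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0  # BGRA over black
plate = cv2.imread("back_plate.jpg").astype(np.float32) / 255.0                    # BGR

rgb, alpha = cg[..., :3], cg[..., 3:4]
comp = rgb + plate * (1.0 - alpha)   # no fringe, because no baked-in background color
cv2.imwrite("comp.jpg", np.clip(comp * 255.0, 0.0, 255.0).astype(np.uint8))
```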
LUDO MICHAUD: So in our workflow we use V-Ray currently. But we're also investigating Arnold, that just came out on beta, I think still, for Max, and Redshift, which is a great real time rendering tool. So we're investigating all the possibilities here. But there's so many options, you're not-- if you're still using Mental Ray, it still works, no problem. You might want to think about where Mental Ray is at, but-- thank you, whoever is laughing. Thank you. Support. Thanks.
So compositing-- well, we have Smoke, Flame-- obviously if you have $5,000, $100,000. Or you have free, or whatever the cost per month is for After Effects; or Fusion, which is free, by the way-- Blackmagic, which is a cool company, has that software now. Or Nuke, which is anywhere from a two-grand to a five-grand solution, will give you all the tools you need. What's nice about Nuke-- and After Effects, in fact-- what's nice about these two is they are compositing software but also come with tracking capabilities. Nuke has a much more advanced tracking capability than After Effects does, but to be frank, we do a lot of our tracking in After Effects when we're masking pieces and stuff.
When it goes for the real stuff, we try to get a little more accurate result. Other tools for tracking, you have a Pixel Farm, also known as PFTrack or PFTools. We have 3DEqualizer, it's a $5,000 solution out there. Boujou. What's nice about Boujou is you can actually ask Boujou to remove objects from your scene. Then you have-- what else you have? You have--
ERIC CRAFT: SynthEyes.
LUDO MICHAUD: SynthEyes, which is a very cheap solution, $500 solution. I love SynthEyes. It works great for when you do a zoom in and out in your frame, or out of focus to focus. SynthEyes works very good with that.
ERIC CRAFT: And MatchMover.
LUDO MICHAUD: MatchMover, yes. Yes, MatchMover, Yeah. I didn't say that. Please erase from the recording. It's a tool. It's a tool. That's it. But yes, those are your tools. And so finally, once you put all the layers, like he showed you-- we have a lot of layers. But the reason we have so many layers, is we like not to go back to 3D. It costs a lot of money to go back to 3D to render again. So we like to have a lot of control of our objects in the comp, so that we can limit the amount of time we go back to 3D to re-render stuff.
In fact, when I got there three years ago, they were rendering about 15 times in 3D to get to point B. Now we're rendering about two times, maybe three at most, depending on what the problems are and so on and so forth. But, yeah. The compositing helps control. Also, Nuke has a 3D capability, which, OK, it's not Max or Maya, but it allows you to do basic blocking of objects, blocking of things, and making a 3D rotoscope, which is very nice-- to rotoscope in 3D, because you do one shape and you're done with it, type of thing.
And what else? Yeah. Yeah, that's very much it for that. So HDRI. So how do we light the scene? Can you go back, switch back to your scene where you have the render? Please.
ERIC CRAFT: Sure. One second.
LUDO MICHAUD: You're going to miss the best part, man. It's OK. If you walk, it's all right. OK, so the render-- can you show me the render real quick? No pressure.
ERIC CRAFT: Yeah. I want to-- give me one second.
LUDO MICHAUD: Oh yeah, express render, good.
ERIC CRAFT: No, I'll just open up the frame by frame.
LUDO MICHAUD: So if you go back to the one with the back plate. I'm asking a lot, sorry. You see we have a lot of layers. So the lighting-- what we do here-- the reason we're rendering with the back plate when we're working in 3D, before we press render for the final, is we like to see that our shadows are matching. We don't have the shadows matching exactly the same color as the shadows in real life. We take care of that in compositing.
However, we do match the direction of the shadows, the intensity of them being very dark or very light. So we match the basic stuff. Now, I like-- because I use a lot of compositing-- I'm from a compositing background, but because I do a lot of compositing stuff, I tend to make sure that the 3D matches my back plate about 90 percent. So I don't make it a perfect match. It's quite hard if you want to do that. It's possible. I've done it. But it takes a lot more time.
So usually when you reach about 90%, your compositing tool can take care of the last 10% for you, to make the image fit where you want it to fit. So when you do your lighting, there's many ways to light. In this case, we did a trade show lighting. What is trade show lighting? That means we took a spotlight. We put it where we thought the sun would be-- looking at all this, we put it where the sun would be. And we pressed-- we started rendering. We did a couple of bounce lights for the back of the building with some infinite lights and stuff like that. So that's basic. And the other result we're going to show soon enough-- we used HDRI, so high dynamic range imagery. We went and we took a picture of the environment on a sphere, on one of those chrome balls. Now you can take a [INAUDIBLE] or the Kodak Pixpro, and they'll take care of doing a 360 for you, basically. But basically we took multiple exposures of that environment on that sphere. We put it together, it becomes one image, which we call high dynamic range. And then we use that to light the image.
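For anyone curious what "putting the exposures together" looks like outside of a dedicated tool, here is a hedged OpenCV sketch of a Debevec-style merge of bracketed photos into one HDR environment image; the file names and shutter times are placeholders, not the lab's actual capture.

```python
# Merge bracketed exposures of the chrome ball / pano into a single .hdr file.
import cv2
import numpy as np

files = ["ball_ev_minus2.jpg", "ball_ev_0.jpg", "ball_ev_plus2.jpg"]
times = np.array([1/250.0, 1/60.0, 1/15.0], dtype=np.float32)  # shutter times (assumed)

imgs = [cv2.imread(f) for f in files]
response = cv2.createCalibrateDebevec().process(imgs, times)   # recover camera response
hdr = cv2.createMergeDebevec().process(imgs, times, response)  # float radiance map
cv2.imwrite("environment.hdr", hdr)  # feed this to the renderer as the environment map
```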
So using this, basically, in Max, you just go and say, hey, use the environment picture that I'm giving you. And the lighting is all based on the actual picture of your environment. The advantage of it is the accuracy of your result will be at 90% on the first render. The problem you'll have with these is you won't be able to control the image as much as with traditional lighting.
So they both have their advantages. [INAUDIBLE] lighting will take you a little longer to get where you want, that 90% range; however, you'll have all the control you're looking for. The HDRI will get you to 90% in literally one click, but after that, you need to start counting on your comp, because you won't be able to control what's happening next. So again, you can always look at Paul Debevec's website for information on HDRI. Great info out there. There's a couple of books on HDRI on Amazon that I can't remember the titles of right now. But they're very, very good. I'm not talking about the HDR effect in photography, I'm talking about HDRI-- which are two different things.
So in photo, it just makes all the speculars go away and look weird. But in real life, what we're looking for is the high dynamic range. So back to our techniques. So I just said this. Quick walk through the process. Reference the reflective ball, the pano. So funny enough, you can take your iPhone and just do a little 360, and it will give you what you're looking for. It's not going to be perfect, but you actually can get there.
So again, with almost no money-- and that reflective ball, by the way, is a garden ball. Funny enough, it's a garden ball. So if you want to buy one, look for a gardening reflective ball-- that's the only way you'll find it. It's a reflective garden ball; you can find it through Home Depot. They have multiple sizes in the back. And if you didn't know, a very funny physical effect is, on the edge of that ball, it sees exactly what's happening behind the ball.
So when you take a picture of that reflective ball, at the edges of the ball, right there, it sees exactly-- it reflects exactly what's behind it, 100%. It's a very interesting physical result. Anyway, traditional lights work well. Old techniques never die. Like I said, depending on what we do and the level of quality we need, when we do massing we don't take the time to do HDRI. We just go with traditional lighting that matches. But when we do photorealistic stuff we like to use the HDRI.
Where are we at? Come on, baby.
[VIDEO PLAYBACK]
[MUSIC PLAYING]
- Everything we do is centered around our kids. And this for me is just an opportunity for me to participate in that design, and for the children of Fort Worth to have something great.
- The STEM school and the School for the Performing and Visual Arts are--
- Had certainly happened. The passion of everybody in that room-- we can become--
LUDO MICHAUD: So you see we have a very good--
- It's great to hear from the people that are actually going to be users and the committee--
LUDO MICHAUD: But we still don't know if it's going to cover Fort Worth or not, so we want to see the final results.
- Because they bring the energy--
LUDO MICHAUD: And it's fun to see where the final results are going to show up.
- Fired up, excited to deliver on the best facility that we can give them.
- It makes me feel excited as an educator. It makes me feel important, and just a part of the process. See this is just the first step, and we have a lot of work to do still. It will be a shining jewel for Fort Worth ISD.
LUDO MICHAUD: So this is the final piece, where it shows the final result. You see how we didn't change the framing. I'm not cheating here. I'm using the frame as it was shot, because we did the pre-viz first, so we could see exactly what we wanted to shoot. And these shots we had tracked externally. We sent those shots out-- we were working on 500 projects at the same time, so tracking was not something we had the time for, unfortunately.
Naturally, one could start talking about the quality of the textures and stuff like that. We have to use what we have [INAUDIBLE], and sometimes a time rush means that we have to cut some corners on the textures and shading [INAUDIBLE]. We do sometimes have to stabilize, especially when they're older models-- they vibrate a lot. You see that last shot. That was to show that Fort Worth was still seen through the main entrance. So this is a whole-- we did the whole piece, from the interviews to the B-roll and stuff like that. This is what happens next for us.
[END PLAYBACK]
So I think I'm-- I did good. 45 minutes. So we can have questions. So look, guys. It is simple. I want to make sure that you guys understand it's as simple as it sounds. It just takes a little bit of time and practice. It doesn't come for free. You'll have to shoot a few times. I suggest, if you're following tutorials, shoot the same thing they're shooting in the tutorial, so you have the same environment to work with. And then stabilize that. Get control over that. And then talk about taking the big footage and understanding how the big footage works, the surveys and stuff like that.
So do keep that in mind. The hardest parts are the tracking and the lighting-- those are the biggest things. The 3D model-- I mean, you've dealt with 3D models your whole life, so I'm not worried about that part. I just want to make sure you guys understand that this is simple, if you take the time to just do it a couple of times in a row. And oh-- one thing we do a lot-- sorry, I'm reading my text here. One thing we do a lot is we do still images. And then even with this, which is taking the still image, we can start moving the camera just a little bit.
We don't have a lot of freedom of movement. We cannot just go 90 degrees around, because obviously we can't see what's behind. But with a still we can still move the camera about 15, 20 degrees on each side, and all the sides. And using compositing you can start separating your elements, creating a parallax effect. You guys know what the parallax effect is? I'm seeing a lot of yeses and nos. Good. So creating the parallax effect, giving the impression that the scene, even though it was just shot with one image, is now moving in 3D space.
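As a rough illustration of that parallax trick, here is a toy version: the still is split into a few layers, and each layer slides by an amount inversely proportional to an assumed depth as a virtual camera moves. In practice they do this inside the compositing package; the layer files and depths below are placeholders.

```python
# 2.5D parallax from a single still: nearer layers move more than far ones.
# Layers are assumed to be RGBA PNGs with premultiplied alpha, back to front.
import cv2
import numpy as np

layers = [("sky.png", 1000.0), ("buildings.png", 200.0), ("foreground.png", 20.0)]
frames = 48

for f in range(frames):
    cam_x = (f / (frames - 1) - 0.5) * 60.0        # virtual camera slide
    comp = None
    for path, depth in layers:
        layer = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0
        shift = cam_x * 100.0 / depth              # parallax: offset ~ 1 / depth
        M = np.float32([[1, 0, shift], [0, 1, 0]])
        warped = cv2.warpAffine(layer, M, (layer.shape[1], layer.shape[0]))
        rgb, a = warped[..., :3], warped[..., 3:4]
        comp = rgb if comp is None else rgb + comp * (1.0 - a)   # this layer over the stack
    cv2.imwrite(f"parallax_{f:03d}.jpg", np.clip(comp * 255.0, 0.0, 255.0).astype(np.uint8))
```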
And with that-- thank you so much for listening to my French accent. And actually Texan French accent. Please, if you have any question-- I will have to repeat your question because we're being recorded. But please, if you have any questions. I saw that one first.
AUDIENCE: [INAUDIBLE]
LUDO MICHAUD: Yes. Good question. He's asking, for reflective objects or properties, whatever, what do you do with the real environment? That HDRI image comes in very, very handy at that point, to help you with the reflections of the environment that you shot.
AUDIENCE: [INAUDIBLE]
LUDO MICHAUD: Yes. We fake it too in the compositing software as well. Yes. You seem to-- oh, the microphone. Oh, there you go. He's got a microphone. Next. There were a couple more hands, so--
AUDIENCE: So this is-- a lot of this is still kind of new to me. But I missed the part where, how did you get that model in Max that showed the plate and everything? What steps did you take to get that from the footage?
ERIC CRAFT: [INAUDIBLE] track it. And then the trackers have the option to export out [INAUDIBLE] are typically the favorite formats, the ones that we use and typically work best for us. And then-- [AUDIO OUT]
LUDO MICHAUD: So yeah, the tracking software will take care of that for you. You'll have to export it from there, and then from there you bring it back into Max, or whatever package you've got, and then you can work with this. I saw a couple of hands here.
AUDIENCE: What frame rate do you render out of Max, and then what's your final frame rate?
LUDO MICHAUD: So I shoot at 48, usually. That's my preferred-- that's my favorite, because with 48 I still have motion blur. Not a lot, but it's still there, so it's keeping the reality of the image. So anyway, I shoot at 48. That's my favorite.
[INAUDIBLE]
That's where we convert back to 24, and we move forward. I don't have a microphone anymore.
AUDIENCE: Yeah, I'm having an issue.
LUDO MICHAUD: OK. So go ahead.
AUDIENCE: Could you explain how you used that Photoshop background image in your composite? I didn't see how that connected to what you were--
LUDO MICHAUD: Yes. He wants to look at Nuke. That's fine. So real fast. Here, we'll switch real fast for you. Oh, I'm locked. And my password is, the gang here rocks. So let's maximize this, all right. Maximize this. I think it's going to give me an error. OK. It doesn't want me to. OK. That's fine. OK. So the tree itself-- I'm just going to scare you guys a little bit. It's not that big, actually. It's a very small tree. But-- so the image itself-- and we'll see in a second here. I'm just going to go as fast as I can.
Here's our original plate. Recognize that at the top, right there. So the painted footage, which is right-- where are you, baby? You're right here. This plate right here? As you can see, it's on the same frame. But now that I have a 3D camera-- also I'm going to look here for a second. We're going to look at the camera. Oh, baby. So we see the scene. Come on, baby. OK. There you go.
So again, our scene here, the cameras are right somewhere right about here. Where are the cameras? Cameras should be somewhere. I have to open them I guess. There you go. Here's my camera. So now with this in mind, I can actually do a camera right here, a scene right here. Where you see my plate has been put back into a 3D element. So my actual painting that I did, it's been put back into a 3D element, which means as the camera move, it'll move around it.
So then if you look at this-- we'll look at this here. It looks like that. So this is the plate that I pain-- you remember the thing I painted in Photoshop? This is all I kept from it, because that's all I really wanted to fix. So when I merge it back on top of the image, that's what it looks like. So before the image right here. So before the image. And then with the image. And now wherever I go, because I'm using the 3D camera in my 3D scene, I can see my plate. As you can see it's following perfectly what I wanted to do. Yes?
AUDIENCE: Is this something you could do in After Effects, as well, or is this more Nuke?
LUDO MICHAUD: Well, it's a more expensive piece of software. So Fusion will do it. Nuke will do it. Flame--
ERIC CRAFT: You should be able-- you should be able to do it in After Effects, if you can map an image on. Basically all that is is a ground plane. It's a grid on the ground that has that image mapped onto it. So if you can repeat that same process in After Effects, or whatever compositing package you're using, you can do that.
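Outside of a comp package, "an image mapped onto a ground plane" per frame boils down to a perspective warp. A hedged sketch follows: the destination corners would come from projecting the same four ground points through the tracked camera for the current frame, and the numbers and file names here are only placeholders.

```python
# Re-project the painted ground patch into one frame with a homography.
import cv2
import numpy as np

patch = cv2.imread("painted_patch.png", cv2.IMREAD_UNCHANGED)   # the Photoshop clean-up, RGBA

h, w = patch.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])               # patch corners in its own frame
dst = np.float32([[812, 1431], [2660, 1418],
                  [3105, 2108], [540, 2140]])                    # those corners seen by this frame's camera

H = cv2.getPerspectiveTransform(src, dst)
frame = cv2.imread("plate_0042.jpg")
warped = cv2.warpPerspective(patch, H, (frame.shape[1], frame.shape[0]))
# ...then "over" the warped patch onto the plate, as in the earlier compositing sketch.
cv2.imwrite("patch_in_frame_0042.png", warped)
```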
LUDO MICHAUD: Yes?
AUDIENCE: So is there a reason why you do it this way, instead of just creating a ground plane in Max that fits your site, and then using it in material?
LUDO MICHAUD: I'm a control freak. I mean, there's no reason, honestly, other than being-- I mean, I want to have control over my image as much as I can. That's really the only reason I can give you. And there's somebody kissing too much over there, but-- yes. There's no real reason other than-- to me it's flexibility, control, and scalability of your scene. So I'm not limited to what I have. If I do it in 3D, it's-- I'm calling it baked. I'm taking it-- I don't want it to be done-- well, it's done in 3D here, but it's in the composite. So I don't have to worry about control. I don't have to worry about, what can I do with it?
ERIC CRAFT: And when you get that fun request, like on this one. We originally had the building, or in the house in the background cut out. And they came back and said, Yeah, those people are staying. We need to keep that house in the image. So because it was in comp, he just did it a little roto, and hey, the image is back. 3D, depending on what you have there, you're dealing with a lot more stuff. Plus it makes the process non-linear. He can work on that while we're working on the 3D stuff, and doing the rendering. And we don't have to wait for the Photoshop and all that to get done.
LUDO MICHAUD: So as you can see, the house didn't exist until I put it back. But you see, this is-- funny enough, that's a very good reason not to go back to 3D. You would have to go back to 3D to put the house back in there, because you didn't know that somebody lived in that house, and they cared about their house. Well, it's because when you look at the whole site, I mean, they live by a trash site, and they're pretty much alone. So it's kind of weird. So I didn't know there was actually somebody in there.
AUDIENCE: That's Texas people. They don't move.
LUDO MICHAUD: Yeah. Exactly. They don't move very easily. So in the process, you see-- you can see layers here. But I won't go through all this. I want to see if there are any other good questions. You guys have been asking really good questions.
AUDIENCE: The lens distortion-- do you use the lens profile files? Or do you do your own-- you bring out your checkerboard grid--
LUDO MICHAUD: Yes.
AUDIENCE: But do you use the lens profiles, or do you just do it yourself?
LUDO MICHAUD: I do it myself. No lens is identical. There's not a lens in the world-- two 35 mils are not physically, accurately the same-- one will be 34.8, one will be 35.1, really, at the end of the day, right? So because of that, each time you use a camera, I'm sorry, but you have to shoot the cam-- the grid. Don't rely on it being the same camera. When it's cold outside, the camera will contract, just like anything else. So the lens, again, will be a little bit different. We're talking pixels, but those pixels on fisheye lens cameras are very, very important. They make a big difference.
So for instance, this scene here is just to show you the impact of-- this is GoPro footage. And then we had the grid, which I don't have here. But the grid is right here. And then-- but then this is what it looks like when it's [INAUDIBLE]. So you go from here to here, and if you put a 3D object in there, the object is fully, perfectly lined up. There's no distortion to the object. Even if you say, well, I shot with a 12 mil-- well, the maximum we're going to do is give you a mathematical 12 mil, not the accurate 12 mil that your lens is. So that grid and the distortion are very important for you right here. Yes?
AUDIENCE: I was going to ask you, do you have any difficulties flying drones in certain areas [INAUDIBLE]
LUDO MICHAUD: Oh yeah. You have a lot to deal with-- the FAA being the easiest part, funny enough. Yes. There are zones you can't fly. There are zones-- I mean, we have three airports. So it's really interesting to find out where we can shoot and not shoot. Any place with people, you have to request, because if there's an accident, you're liable for the person you're hurting, of course. So you have to have insurance. You cannot just fly a drone in public and hope for the best. You can, but don't say I said it. I didn't say it. I'm not responsible.
We do have lawyers and insurance on our sites. But yes, you've got to think about that. Pre-viz is one thing, but one of the pieces in the reel that we showed was [INAUDIBLE] vision, which is the Nashville airport, which we're rebuilding-- we're upgrading. And it took us about three weeks to get permission to just shoot two cameras. And they gave us very specific areas. So at that point, knowing that fact, we had to go back to pre-viz and say, OK, if we're locked to just this, what's the best? So pre-viz was very important, so that when the shooter went on site, he was like, I know exactly what I'm supposed to look at.
This is what you are going to get. And that's what you get back. So you get the right framing and stuff. And suddenly it looks like-- it looked like we shot on all sides of the camera, because-- we almost got in trouble-- because we took Google Earth, and we took the other side that way. We rebuilt a few things. We did some projection trickery. And then people are like, we told you not to shoot on that side. We're like, no, no, no. It's all fake. But yeah. Next.
AUDIENCE: Do you mind explaining how you got the shadows to fall-- on the second video that you showed-- how you got the shadows to fall from the recreated building onto the real building?
LUDO MICHAUD: Yes. That means-- so in 3D we recreated the real building.
AUDIENCE: The one that's there?
LUDO MICHAUD: The one that's there. We didn't recreate it to the letter, the windows and stuff. We just did the boxes. We had some basic information. Using the footage we were able to match things. But basically we just drew a box, and the 3D object projected shadows on that box. Then, in the compositing, we just used the shadow matte. And then we color correct the current footage to match with that.
AUDIENCE: Because I was going to say that's different from the first video you showed, with all the shadows.
LUDO MICHAUD: Yeah. They're all by themselves on that one, yes. Yes. There will be more complex and easier ones also.
ERIC CRAFT: Well, and that was kind of a trick, because on the Melissa ISD, while we rendered it with grass, the only thing we used from that was a matte, because the work that he did in Nuke of replacing the ground was a more accurate representation. So then we just used that as a mask, with the matte shadows, to actually put the shadows on the background plate.
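The arithmetic of that matte shadow trick is simple: the rendered shadow matte only darkens the real plate, so the painted ground keeps its own texture but picks up the CG building's shadow. The file names and the 0.6 density below are placeholders, and this only shows the math behind what they do in Nuke.

```python
# Apply a rendered shadow matte to the background plate in the comp.
import cv2
import numpy as np

plate = cv2.imread("back_plate.jpg").astype(np.float32) / 255.0
shadow = cv2.imread("shadow_matte.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

density = 0.6                                        # how dark the shadow reads (assumed)
comp = plate * (1.0 - shadow[..., None] * density)   # white in the matte = full shadow
cv2.imwrite("plate_with_shadow.jpg", np.clip(comp * 255.0, 0.0, 255.0).astype(np.uint8))
```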
LUDO MICHAUD: Yeah, right here, somewhere here. There you go. So yeah. So your shadow is there. But yes, in the case of a real building, if we don't have the real building, we just do the volume of it, if you will, depending on the accuracy. If we have a camera that goes flying by, well, that little corner will be a little more detailed. But again, the detail is just enough to say that, if there's an indentation on your wall, just put that. Don't make all the intricacies of whatever design you've got in it. You don't need that. It'll work. The trick will work. And that still works for movies as well. Anything else, guys? I think we have time for one. Let's just say one more. No? Well, I hope you enjoyed, and thank you.
[APPLAUSE]