Cinematic Compositing is a new feature in Jetset that enables direct preview of the live cine camera video, tracked, keyed, rendered, and composited completely inside Jetset.
Today we are going to go through our updated calibration process and our new cinematic compositing feature. This lets us composite the cine camera feed in real time at 24 or 25 frames per second, matched and composited in the iPhone. You can see a one-to-one preview of what you are shooting with your cine camera, composited and tracked live in the iPhone, so you have a complete virtual production process in Jetset. So let's get started.
As you can see, we have already connected our iPhone to our cine camera. We have a live HDMI feed coming out of the cine camera into our Accsoon SeeMo, then going into Jetset. We also have a cooling fan on this, which is very important. And we have our Tentacle Sync, which is going to provide matching timecode to both our cine camera and our Jetset device.
You can see on the camera that the timecode is at 12:06:44. If we tap on our iPhone panel, we can see that it shows the name of the Tentacle Sync we're connected to, which is Johana. We can see here that our timecode is coming in at a 24 frames per second frame rate, and that matches the recording rate of our cine camera in this case. You want to make sure that your timecode frame rate exactly matches the recording frame rate of your cine camera. Okay, so we can tap and close our external connections panel.
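To make the frame-rate requirement concrete, here is a minimal sketch (in Python, not Jetset's internal code) of how an SMPTE timecode stamp converts to a frame count. The same stamp interpreted at two different rates yields different frame indices, which is why the Tentacle Sync rate has to match the camera's recording rate exactly.

```python
# Minimal sketch (not Jetset's internals): SMPTE "HH:MM:SS:FF" timecode
# to an absolute frame count. A stamp only identifies a unique frame if
# both devices agree on the frame rate.

def timecode_to_frames(tc: str, fps: int) -> int:
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

print(timecode_to_frames("12:06:44:00", 24))  # 1046496
print(timecode_to_frames("12:06:44:00", 25))  # 1090100
# Same stamp, different frame indices: any frame-accurate sync done at
# the wrong rate lands on the wrong frames.
```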
The first thing we're going to want to do in Jetset, if you haven't done this before, is make a new project. We're just going to click on the gear icon, go to Project, and create a new project. If you already have a project, don't worry about this step. If you have not created a project before, just go ahead and click New Project, click Edit, and then we can change the name of our project. We'll call this CineCal. All right, and we're going to edit and change our prefix. The prefix is just a six-letter string that is added to the beginning of all the files generated by Jetset. There we go. So now we've created our project, and we click Save. Now we can see at the bottom of our Jetset view that the CineCal project is currently loaded. That means all of our calibrations will go into the CineCal project.
So let's go ahead and start a new calibration. We're going to go to Recording, go to Cine Calibration, and click New, and it brings up our usual calibration process. We can see in the upper left-hand corner we have our Jetset video, and in the lower left-hand corner is the cine video with a Ready sign on it. That means we actually have live video coming in, and we can click Start.
One of the things we'll see in the upper right-hand corner is that we now have an adjustable delay for the incoming cine video, and this is so that we can correctly align the time offsets between the cine video and our iPhone video. As we pan back and forth, you can see that both images are pretty well aligned in time. Usually the delay is about two frames. That'll keep us pretty close in time synchronization, which will make it easier to do the calibration. Okay, we can click OK on the two-frame delay. You can adjust it back and forth to get your delays correctly aligned, depending on your equipment.
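In Jetset you set this delay by eye while panning. Purely as an illustration of what the control compensates for, here's a hypothetical sketch of estimating such an offset automatically by cross-correlating the mean brightness of the two feeds over time. This is not Jetset's method, and all names here are made up.

```python
# Hypothetical offset estimator (NOT Jetset's method): cross-correlate
# mean-brightness-over-time signals from the two feeds while panning.
import numpy as np

def estimate_delay(iphone_luma: np.ndarray, cine_luma: np.ndarray,
                   max_delay: int = 10) -> int:
    """Each input is mean frame brightness over time. Returns the shift
    in frames of the cine feed that best lines up with the iPhone feed."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_delay, max_delay + 1):
        if s >= 0:
            a, b = iphone_luma[s:], cine_luma[:len(cine_luma) - s]
        else:
            a, b = iphone_luma[:s], cine_luma[-s:]
        n = min(len(a), len(b))
        score = np.corrcoef(a[:n], b[:n])[0, 1]  # similarity at shift s
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```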
All right, next we're going to pick up the camera and find a spot that has a lot of feature matches. You don't want to do your calibrations on a green screen, because in real time, what you're seeing is all the little red squares doing natural feature matches on the screen, and a green screen has almost no natural features. So we're actually going to move over to a spot that has lots of natural features in it. In this case, something like the back part of a stage works really well, because this is where all the random equipment is. Here we have a set of storyboards on the wall that'll work great. What natural features are based on is a series of corners and the information around those corners, and as you can see, our natural features are detecting pretty well on the corners of all the images on the note board.
Looking through here, we can see we have quite a few feature matches, and importantly, they're showing up through the entire range of the scene. You want to have feature matches both in the center and on the edges, because we're going to be using this to do a full lens calibration, including lens distortion. Most of the lens distortion actually happens at the corners of the image, so you want to make sure you have features at the corners of your image as well as the center. This is another reason why we generally don't want to do lens calibrations on a green screen: again, there's no data in the corners. But almost any random back area with a lot of detail, carpets, all these things, can work really well.
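As an illustration of what those red squares represent, here's a small sketch using OpenCV's corner detector together with a thirds-grid coverage check. The detector choice and threshold values are assumptions for illustration; Jetset's actual feature detector isn't specified here.

```python
# Illustrative only (Jetset's detector is not specified): corner-style
# features via OpenCV, plus a check that they cover the whole frame,
# since lens distortion is strongest out at the image corners.
import cv2
import numpy as np

img = cv2.imread("calibration_frame.png", cv2.IMREAD_GRAYSCALE)
corners = cv2.goodFeaturesToTrack(img, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)
corners = corners.reshape(-1, 2)

# Count features per cell of a 3x3 grid. A green screen leaves the grid
# nearly empty; a centered subject leaves the corner cells empty.
h, w = img.shape
grid = np.zeros((3, 3), dtype=int)
for x, y in corners:
    row = min(int(3 * y / h), 2)
    col = min(int(3 * x / w), 2)
    grid[row, col] += 1
print(grid)  # want nonzero counts everywhere, especially corner cells
```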
To start out, we're going to click Test Frame. What that does is detect all the corresponding features between the iPhone camera and the cine camera, and it shows how many features there are. That's quite a few features, and it's showing a good detection. It'll draw lines between the top and the bottom of the matching features in a particular area, and that looks great: 159 features is plenty, so we can click Keep Frame. What we're doing here is picking an area where we can be at a similar distance from the board as we'd actually be from an actor when shooting, because that way our focus distances are fairly similar. So we can capture another frame, and we can range around. Once we've captured a frame, we're going to move maybe a foot to the right, hold the camera still, and do another frame capture. After each one of the captures, we can see how many good matches we're getting between the iPhone camera and the cine camera (the up-down matches), as well as the matches between the first cine frame and the most recent cine frame (those are the matches on the right). What we want is to move in almost a semicircle around the objects we're capturing. We're doing a little version of photogrammetry here, getting as many different points of view of the same set of objects as we can.
You want to move in a semicircle around the object, not stand in place. For lens calibration, what you want is parallax between many shots of the same object, as opposed to putting the lens on a tripod and panning it; that will actually break the mathematics we use for lens calibration. So what we're doing here is moving around and getting lots of different calibration points from multiple different angles of basically the same objects in the scene, and each time we're holding the camera stationary, triggering a test frame, making sure we have plenty of matches, and then clicking Keep Frame.
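For a sense of what a test frame is counting, here's a hedged sketch of cross-matching features between two captures with OpenCV. The detector, matcher, and distance threshold are all illustrative assumptions, not Jetset's pipeline. The underlying point stands regardless: the solver triangulates matched points, so it needs parallax between captures, and a pure pan from a tripod gives it none.

```python
# Illustrative cross-matching between two captures (assumed approach,
# not Jetset's pipeline). The printed count is analogous to the
# feature-match number shown after a test frame.
import cv2

frame_a = cv2.imread("iphone_frame.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("cine_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp_a, des_a = orb.detectAndCompute(frame_a, None)
kp_b, des_b = orb.detectAndCompute(frame_b, None)

# Cross-checked brute-force matching keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_a, des_b)
good = [m for m in matches if m.distance < 40]  # illustrative threshold
print(len(good), "feature matches")
```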
Once we have 10 or 12 captures, we can click Save and enter the calibration name. You're going to want to name this something descriptive, usually with a bit of the date and also the lens information. In this case we're calibrating a 24 millimeter lens, so we have the calibration, the date, and the lens focal length. There we go. We can click OK, and once we have that, we can click Exit.
Now what we see is that Jetset internally has done some 2D processing and made a first-order pixel map of the area the cine lens covers; the area that 24 millimeter lens covers is shown as a red reticule inside the Jetset view. Again, this is an approximation. It's not exact yet, but it's a good sanity check, because the default iPhone lens is somewhere around 18 or 19 millimeters wide, whereas we're calibrating a 24 millimeter lens on the cine camera, and you can see that the reticule is taking up less space than the iPhone camera view. So this is a good way to make sure things are about what you'd expect. The reticule is red because we have not yet solved this in 3D; we don't yet have a full lens calibration, just an approximation. So we're going to switch over to Autoshot now.
Okay, so we've moved over to Autoshot, and we're in the Calibration tab. It's automatically detected Jetset on the local network, and we can see that our project name is CineCal. We see that the currently loaded calibration is the Jetset calibration from 05-06-2025 with the 24 millimeter lens.
That is the calibration we just completed. Now we're going to give it one more piece of information before we do the calibration, and that is the sensor width of the camera. In this case, it's a Blackmagic Pocket Cinema Camera 6K, so we know that our sensor width is 23.1 millimeters. When we solve for our focal length, that will give us a more accurate focal length.
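The reason the sensor width matters is that a calibration solve of this kind works in pixels; converting a focal length in pixels to millimeters, or computing the field of view, requires the physical sensor width. A quick sketch of that arithmetic (the 6144-pixel width is an assumed value for this camera's 6K mode):

```python
# Converting between pixel and millimeter focal lengths, and computing
# field of view, using the sensor width. The 23.1 mm figure comes from
# the transcript; the 6144 px horizontal resolution is an assumption.
import math

SENSOR_WIDTH_MM = 23.1   # Blackmagic Pocket Cinema Camera 6K
IMAGE_WIDTH_PX = 6144    # assumed 6K horizontal pixel count

def focal_px_to_mm(focal_px: float) -> float:
    return focal_px * SENSOR_WIDTH_MM / IMAGE_WIDTH_PX

def horizontal_fov_deg(focal_mm: float) -> float:
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_mm)))

print(horizontal_fov_deg(24.5))  # ~50.5 degrees for the solved lens
```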
Okay, we can go ahead and click Calibrate. What it's going to do is link to Jetset on the network, pull over those calibration images we just captured, do the math, find all those natural feature points that showed up as red squares during our calibration process, and match them, both between the different cine images we captured and between the different iPhone images we captured. By doing this combination, we're able to derive the offset from our iPhone camera lens to our cine camera lens, and more specifically to the entrance pupil of our cine camera lens, which is the part we need to know for 3D graphics, as well as the actual focal length and a first level of radial distortion in the cine camera lens. So it's completed calibrating, and it has already pushed the file to Jetset.
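To give a sense of what a "first level of radial distortion" means, here's a sketch of the standard single-coefficient radial model; whether Jetset solves exactly this form is an assumption.

```python
# Standard one-coefficient radial distortion model (Brown-Conrady style);
# whether Jetset uses exactly this form is an assumption.
import numpy as np

def distort(xy: np.ndarray, k1: float) -> np.ndarray:
    """Apply radial distortion to normalized image coordinates
    (origin at the principal point, unit focal length)."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)  # squared radius
    return xy * (1.0 + k1 * r2)

# Points near the center barely move; points near the image corners move
# the most, which is why the calibration needs features in the corners.
pts = np.array([[0.05, 0.05], [0.70, 0.50]])
print(distort(pts, k1=-0.1))
```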
You can see that the focal length we solved for is 24.50 millimeters, which is fairly close to the 24 millimeters printed on the side of the lens. This is a nice way to verify that your sensor width is reasonably correct: your solved focal length should more or less match. It's not going to be exact, because what's printed on the side is an approximation; no one wants to buy a 24.50 millimeter lens, they want to buy a 24. But this is the actual true focal length that lens is calibrating to. Now we can move back to Jetset.
Back in Jetset, we can see that our aiming reticule has turned yellow, which means we have a correct 3D solve, and Jetset is now dimming the area outside of it. So this tells us we have actually solved our lens calibration in 3D. This is great, but we're still only seeing the reticule: an approximation of what the cine camera would view, overlaid onto the Jetset UI.
But now we can take this one step further and click our cinematic compositing button, and we immediately see the actual cine camera footage composited live in Jetset. This is a great way to verify that your calibration is working correctly, because now you're seeing the actual cine feed sent live into Jetset, composited, matched, and tracked, with a green screen key pulled on it, so you can see your footage lined up in your 3D world. This is a one-to-one match of what the cine camera is seeing to what is being composited in Jetset.
So when we actually record, we will now record what we call a Superdaily, which is a 24 frame per second real-time composite, timecode-matched to your camera original.
Now that we have our calibration, we're going to reset our tracking, find the floor, and make sure we get a good solid point cloud. Let's pick the camera up and move around the area just a little bit, capturing our environment from a few different angles. This is just to get a reliable point cloud into the system so we have solid tracking. Once we've done that, we can tap and define our origin.
Okay, so now we've got our origin set. Let's go up to our main menu, click Scan, and then start our scan. It's going to map the scanning area very quickly, and we don't need the whole area around it; we really just want the area that's going to be in the shot. So we can move a little bit left and right and get a quick capture of the surrounding environment, and there we go. Then we can come back and click Stop. The goal of the Jetset scans is to have just enough scan coverage that it covers the shot area. We can see that the scan is still aligned pretty well, and our origin is aligned pretty well, so that'll give us a good aligned scan for our shot. Now we can click Hide.
All right, now we can move on to keying. We're going to go into our main menu and then to our Key tab. There we go. We're already set on the green screen level, so let's go ahead and click the reset button for our key. Then we can tap the keyer swatch, the green swatch in the middle of the screen, and pick a reasonable color of green to operate the key at. As you click and drag on the screen, it'll update the key in real time. If we have, in this case, something that's strongly reflecting the green light, it's going to be partially transparent, so we're going to want to boost the opacity. We're going to bring up our black and white points; there we go. That hardens up the matte. Okay, now we can toggle off the matte display.
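As a rough mental model of what those controls do (an assumed model for illustration; Jetset's actual keyer is not documented here), a matte can be pulled as distance from the picked green, with black and white points remapping that distance to harden it:

```python
# Assumed keyer model for illustration only (not Jetset's actual keyer):
# alpha from color distance to the picked green, then black/white points
# remap the soft matte into a harder one.
import numpy as np

def pull_key(rgb: np.ndarray, key_color: np.ndarray,
             black_point: float, white_point: float) -> np.ndarray:
    """rgb: HxWx3 floats in [0,1]. Returns an HxW alpha matte in [0,1],
    where 0 is fully keyed out (pure key green) and 1 is fully opaque."""
    dist = np.linalg.norm(rgb - key_color, axis=-1)
    alpha = dist / max(dist.max(), 1e-6)  # raw soft matte
    # Raising the black point and lowering the white point pushes
    # semi-transparent green spill toward fully transparent or opaque.
    alpha = (alpha - black_point) / max(white_point - black_point, 1e-6)
    return np.clip(alpha, 0.0, 1.0)
```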
Now that we have set up our origin and done our scan, we want to set up our 3D garbage mattes. These will let us pan off the physical green screen and go seamlessly into our digital world, and because we know where we are in the 3D environment, we can do this pretty easily. We're going to go up to our main menu and pick our Key menu. We have a green screen key working, but if we pan off to the left or the right of our green screen, you immediately see that we've just got the sound stage there, and we want a more seamless experience. So we're going to go over to our 3D Mattes menu on the right and click Start. It's going to detect the horizontal and vertical planes in the scene, and it's also going to put in a proxy quad, so you can see where it would drop the quad if you hit the plus sign. We're going to start off with the back wall and just hit the plus sign to drop a quad on it. There we go; you can see it's created a little quad, and that quad is now stuck to the back wall. We'll repeat the process for our side walls and our floor. There's a side wall, and there's another side wall; then we come down to our floor and drop a quad in those locations. Great. Now we can click Stop, because we don't want to add any more.
Now we're at a point where we can drag the quads around and edit them. In this particular case, we can see that our quads are not going all the way to the ends of the green screen area, so we can drag them into place to get a more seamless match between the different quads. We just drag those silver balls around, and as you can see, they stay stuck to the original vertical plane they were placed on. So we drag the corners to match, so that we have a seamless match between our green screen and our 3D mattes. It takes a little while, but this setup is saved for later. We just drag those into place for all the different walls of your green screen. There we go. Let's come down and get our other corner matched in; there's a corner. We drag our silver balls to line up those different areas, with a little bit of overlap so we don't have gaps, and bring it down to the floor. There we go. Last, we can bring the floor quads into their correct spot. This is nice because it lets you add a green screen 3D matte even after your stage is full of stuff: you can just drag the quads around the various objects. I'm just going to walk over here and drag those into line, and there's our last floor quad. That'll do nicely. Great, now we can click OK. And now, as we pan off the screen, it seamlessly transitions to a fully digital CG representation of the scene, so we can do large-scale camera moves and automatically map back and forth between the green screen and the 3D garbage mattes.
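The reason the dragged corners stay stuck to their wall is that each corner can be constrained to the quad's original plane. Here's a minimal sketch of that constraint (assumed math for illustration, not Jetset's source):

```python
# Minimal sketch (assumed, not Jetset's source): constraining a dragged
# quad corner to the plane its quad was originally placed on.
import numpy as np

def project_to_plane(p: np.ndarray, plane_point: np.ndarray,
                     plane_normal: np.ndarray) -> np.ndarray:
    """Project point p onto the plane through plane_point with normal
    plane_normal by removing the out-of-plane component."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - np.dot(p - plane_point, n) * n

wall_point = np.array([0.0, 0.0, 3.0])   # any point on the back wall
wall_normal = np.array([0.0, 0.0, 1.0])  # wall facing the camera
dragged = np.array([1.2, 0.8, 2.7])      # raw drag position in 3D
print(project_to_plane(dragged, wall_point, wall_normal))  # [1.2 0.8 3.0]
```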