Motion Tracking: a recent personal project combining iPhone 6 footage with Cinema 4D’s motion tracking system, composited in After Effects.
On a beautiful June afternoon I visited Scotney Castle in Kent, a property managed by the National Trust. It is the perfect environment to rest and unwind, surrounded by rolling landscapes, sensational views and a beautifully mature Victorian garden. At its heart lies a 14th-century moated castle ruin, which I thought was still in pretty good condition for what is, for all intents and purposes, a glorified folly!
This visit followed a networking event on the pier in Hastings, where I had been talking to a film producer about the possibilities of motion tracking and how easy it is to overlay virtual objects and elements onto real footage. The conversation had been playing on my mind for a while, and Scotney Castle was the perfect location to capture a few film sequences and prepare an example of the technique.
Back in my studio, I transferred the iPhone 6 footage onto my system and selected the shots that I wanted to work with and that worked well together. I already had in mind that I wanted to show several compositing techniques in the one film. The obvious requirement in the sunny conditions was shadow projection and light matching from the real footage into the virtual environment. The other techniques I wanted to cover were the reflection across the water, and the well in the last cut was always going to be perfect for a simple particle system.
Whilst on location at Scotney Castle I also made sure I took a few panoramic photos for use as environment maps in the textures for the overlaid text. This helps blend the text into the real footage and gives a more believable finish; as far as floating text in a landscape can be believable!
Motion tracking in Cinema 4D
For the majority of sequences the motion tracking procedure in Cinema 4D is a painless and hands-off affair. The process starts with a 2D track, whereby the system searches for points that it can track accurately over prolonged stretches of the footage. Once the 2D track has run its course you progress on to the full 3D solve, during which the system establishes the distance and position of each tracked point, creating a null object at each one. A successful track will show points in a range of colours representing how accurately each particular null has been tracked. The key colour to look for is green: green equals a solid, reliable track, and the more green the merrier!

For some sequences a good track can be tricky, even when the footage looks like it should track well. If this happens there are other options available to create your own track points. My first port of call, if an auto track doesn’t work straight off the bat, is to pick a series of frames throughout the sequence on which I manually create track points, then return to the beginning and run a new track. This usually fixes the problem; if not, I will increase the number of tracks and adjust the minimum spacing between them. By default this is set to 19; I typically increase it to around 50, which I find gives a nice distribution of track points across the entire frame and sequence.
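To make the colour coding concrete, here is a toy Python sketch of grading track points by reprojection error. The thresholds and error values are invented for illustration; they are not Cinema 4D’s actual internals, just the general idea of "lower error, greener null".

```python
# Toy illustration of how a 3D solve might grade its track points.
# Assumption: quality is judged by reprojection error in pixels;
# these thresholds are made up for illustration only.

def grade_track(reprojection_error_px: float) -> str:
    """Map a null's reprojection error to a traffic-light colour."""
    if reprojection_error_px < 0.5:
        return "green"   # solid track - keep
    if reprojection_error_px < 2.0:
        return "yellow"  # usable but suspect
    return "red"         # poor track - consider deleting and re-tracking

# invented per-null errors from a hypothetical solve
errors = [0.2, 0.4, 1.1, 3.7]
print([grade_track(e) for e in errors])  # ['green', 'green', 'yellow', 'red']
```

The "more green the merrier" rule of thumb then simply means most nulls sit under the tightest threshold.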
Motion track point cloud
Once the motion track is complete and I’m satisfied the virtual camera is locked to the real footage I will then set about matching my virtual scene with the real footage.
The first step for me is to orientate the virtual scene to align with the real footage, which can often be the trickiest part of camera matching. The process involves selecting three nulls and assigning them the directional plane they are lying on; it seems simple enough, but trust me, sometimes the orientation it produces bears no relation to how you think it will look. It can often take several attempts to find a set of three nulls that align as you expect. Typically I look for points on the floor and set their alignment to Y. Occasionally this proves impossible, so finding alternative sets of nulls becomes essential; at that point, selecting nulls on walls or other objects in the scene that provide a decent source of reference is required. You just need to remember to set their orientation to the correct axis!
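The geometry behind the three-null alignment can be sketched in a few lines of plain Python: build the plane normal from three points and measure how far it tilts from the world up (Y) axis. Cinema 4D does this for you when you assign the plane; the point values below are invented to stand in for three floor nulls.

```python
# Minimal sketch of the "3 nulls define a plane" idea.
# The three points are invented, roughly on a ground plane (y nearly constant).
import math

def normal_from_three_points(a, b, c):
    """Unit normal of the plane through points a, b, c (cross product)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

a, b, c = (0.0, 0.01, 0.0), (1.0, 0.0, 0.2), (0.3, 0.02, 1.1)
n = normal_from_three_points(a, b, c)

# angle between the plane normal and the world Y axis: near zero means
# these three nulls really do describe a floor
tilt = math.degrees(math.acos(abs(n[1])))
print(f"tilt from floor: {tilt:.1f} degrees")
```

When a candidate trio gives a large tilt, that is the code-level version of the orientation "bearing no relation to how you think it will look", and you go hunting for another set of nulls.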
Once orientated correctly I then set the origin of the shot, choosing a null that will be the 0,0,0 location for my virtual scene. This would typically be a null on the floor, fairly close to the part of the footage into which I want to place 3D assets, or at least somewhere near the centre of my footage’s focus.
The final stage is to scale the virtual world to match, or at least be representative of, the footage I’ve tracked. Sometimes accurate measurements are not possible, so best guesses are good enough to give a good result, and getting the scale right will make your life a lot easier when it comes to constructing scene elements.
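Taken together, the origin and scale steps amount to a translate-then-scale of the solved point cloud. A minimal sketch, with made-up points and a made-up real-world measurement:

```python
# Recentre the point cloud on a chosen origin null, then scale it so
# solver units line up with real-world units. All values are invented.

def recentre_and_scale(points, origin, scale):
    """Translate every point so `origin` becomes 0,0,0, then scale."""
    return [[(p[i] - origin[i]) * scale for i in range(3)] for p in points]

points = [[2.0, 0.0, 3.0], [4.0, 0.0, 3.0], [2.0, 1.0, 5.0]]
origin = points[0]            # the null chosen as the 0,0,0 location

# Suppose the first two nulls are 2 solver units apart in the solve,
# but measured (or best-guessed) as 200 cm apart in the real world:
solved_dist = 2.0
real_dist = 200.0
scale = real_dist / solved_dist

print(recentre_and_scale(points, origin, scale))
# the chosen null lands exactly at the world origin
```

Even a best-guess `real_dist` pays off later: scene elements can then be modelled at sensible sizes instead of arbitrary solver units.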
With those three additional procedures locked down I move on to the next stage: adding objects and assets…
In this Scotney Castle example I concentrated on overlaying and integrating 3D text, which meant that most of the scene build was fairly straightforward. I purposefully wanted to do the water reflection sequence as it gave me a nice opportunity to show the interaction between the 3D overlay and the real footage. The water material was tweaked until the right balance between bump size and complexity matched the real ripples seen in the footage, while keeping the reflection recognisable and fairly readable. I then subdivided a simple plane object to give enough quads that I could select and remove the quads around the lilies, so as to avoid the reflection crossing vegetation, which wouldn’t have looked very convincing.
The more interesting scene construction was the building of the well in the last scene. I had always had in my head a fantasy-based particle system emerging from the well, and I was pleased with the results for the limited time I spent constructing the XPresso setup to control the different sets of particles. For it to look like the particles were emitting from the well, I first created a low-poly cage matching the shape and size of the actual well, perfectly aligned in position with the real one. Once this primitive stand-in was built I scrolled through the timeline to check it stayed aligned with the real footage. The motion tracking system generates a material that acts as a frontal projected texture; this is applied to the geometry of the well, along with a compositing tag. In the compositing tag I make sure the Compositing Background checkbox is active, which avoids any lighting issues falling across the geometry and serves to create a mask for the emitting particles.
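As a rough illustration of what a frontal projection does, here is a simplified pinhole-camera sketch with invented values (not Cinema 4D’s implementation): each vertex of the stand-in geometry is projected through the tracked camera, and the texture samples whatever the footage shows at that screen position, so the proxy well "wears" the real footage.

```python
# Toy frontal (camera) projection: map a camera-space 3D point to
# normalised UV coordinates in the footage frame. Pinhole model and
# all values here are invented for illustration.

def frontal_uv(point, focal=1.0, width=1920, height=1080):
    """Project a camera-space point to 0..1 footage UVs."""
    x, y, z = point
    sx = focal * x / z              # perspective divide
    sy = focal * y / z
    aspect = width / height
    u = 0.5 + sx / (2 * aspect)     # map screen x to 0..1 across the frame
    v = 0.5 - sy / 2                # screen y, flipped so v grows downward
    return u, v

# a point straight down the lens axis samples the centre of the frame
print(frontal_uv((0.0, 0.0, 5.0)))  # (0.5, 0.5)
```

Because the proxy is textured with the footage itself, it disappears into the plate while still occluding particles that pass behind the well’s rim, which is exactly the masking effect described above.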
Once all the scenes have their text in place, with in and out sequences, and the additional elements are set up, I render them all out, making sure to assign the correct buffer channels, shadow maps and ambient occlusion channels to the multipass renderer. With all scenes rendered as PNG sequences I import them into After Effects and build up the various layers until I’m happy with the look and feel of the movie. For this Scotney Castle sequence I also tweaked and adjusted the original footage to give it more vibrance and saturation.
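Per pixel, the multipass layering in the comp boils down to something like the following sketch; the blend maths, pass values and shadow density are simplified, invented numbers for illustration, not After Effects’ exact behaviour.

```python
# Toy per-pixel combine of multipass layers: the AO pass multiplies the
# beauty pass, and the shadow pass darkens it by a chosen density.
# All values are invented for illustration.

def comp_pixel(beauty, ao, shadow, shadow_density=0.6):
    """Combine one RGB pixel's beauty, AO and shadow pass values (0..1)."""
    out = [c * ao for c in beauty]                       # AO: multiply blend
    out = [c * (1 - shadow_density * shadow) for c in out]  # shadow darkens
    return [round(c, 3) for c in out]

# one pixel: mid-grey beauty, partial occlusion, half shadow coverage
print(comp_pixel([0.5, 0.5, 0.5], ao=0.8, shadow=0.5))
```

Rendering the passes separately is what makes this stage flexible: the shadow density and AO strength can be dialled in against the graded footage without re-rendering the 3D scenes.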