Figure 0: Original greenscreen footage
Concept & Artistic Influences
The idea behind the concept was to show a courtship between a couple, set in the Victorian era. I initially wanted to place the couple outdoors in a garden and have them walk into an archway of leaves, as if it were their meeting point for a stroll.
Figure 1: Original background image
Due to a large mismatch in perspective with my footage, I ended up compositing them into an indoor environment instead, overlooking the interior of a palace from a balcony.
Figure 2: Final background image
As in my previous module, I decided to use Beauty and the Beast (2017) as my main artistic influence. A couple in a palace meeting for a courtship: it sounded exactly like the famous ballroom scene from Beauty and the Beast.
Figure 3: Ballroom scene from Beauty and the Beast
For the colour grading, I also looked at romance films to see what style of lighting they use to convey love between characters. These kinds of shots are generally bright and warm, and the lights seem to glow. Softness was key.
Figure 4: Film inspirations
Background: Adding Movement
To create the illusion that the background was moving footage rather than a still image, I added animated flames on the candles, as well as the light they cast on the walls. To do this, I first had to remove the original light bulbs of the fake candles in the image (Figure 8) by keying them and eroding the background inwards, and then add back in my own animated flames.
Figure 5: Final clean-up of background
To animate the flames, I used an animated Noise node to distort the lights that I had cut out. To make the noise animate randomly in a flame-like way, I used the random wave expression in the “z” parameter of Noise:
random((frame)/30)*(5)

Meaning: random((frame+offset)/waveLength) * (maxVal-minVal) + minVal

Therefore: move randomly between 0 and 5, with the wave driven by the current frame number over a wavelength of 30 frames.
Figure 6: Random Wave expression (Carson, “Nuke Wave Expressions”)
The maximum value of the “z” parameter was 5 and the minimum was 0, so I used those values for my range. I used different particle sizes and wavelength speeds of Noise for the flames in the foreground and the flames in the background, because flames further away would appear to move less than those closer to the camera. I also added a Transform to animate the noise moving upwards, imitating the motion of flames (Figure 7).
Figure 7: Animated noise
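As a rough illustration, this is how the Noise setup above might be scripted in Nuke Python; the knob script names (such as the one behind the “z” parameter) and the values are placeholders rather than the exact ones from my script:

```python
# Animated Noise driven by the random wave expression, plus a Transform
# drifting it upwards. Knob script names and values are assumptions.
import nuke

noise = nuke.createNode("Noise")
noise["size"].setValue(120)          # larger "particles" for the foreground flames

# random((frame + offset) / waveLength) * (maxVal - minVal) + minVal
# -> values between 0 and 5, driven by the frame number over a wavelength of 30
noise["zoffset"].setExpression("random(frame/30) * 5")   # the knob labelled "z" in the UI

# slow upward drift to imitate rising flames
rise = nuke.createNode("Transform")
rise["translate"].setExpression("frame * 2", 1)          # expression on the y component only
```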
I then used this as a distortion map to warp the lights, making them flicker like flames.
Figure 8: Animated flames
I also added a fake camera move by placing a Transform at the very end of the comp, making the camera drift slowly upwards.
Defocus & Atmospherics
In order to get an accurate defocus, I created a depth map using Constants in different shades of grey, masked for each plane in depth. I found a useful tutorial explaining how to use ZDefocus to get nice bokeh effects, which I wanted for the candlelights (Raasch, 2018). Instead of the built-in filter shape in ZDefocus, I used a Flare as the filter input to get bladed bokeh.
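As a sketch of how such a depth map might be assembled (the grey values, class names, and knob names are placeholders, not the exact ones from my script):

```python
# Grey Constants stacked up with Roto masks and copied into depth.Z for
# ZDefocus to read. Values and file paths are placeholders.
import nuke

plate = nuke.nodes.Read(file="background_still.exr")    # placeholder path

far = nuke.nodes.Constant()                              # furthest plane (back of the hall)
far["color"].setValue([0.1, 0.1, 0.1, 1.0])
mid = nuke.nodes.Constant()                              # mid-ground plane
mid["color"].setValue([0.5, 0.5, 0.5, 1.0])
near = nuke.nodes.Constant()                             # closest plane (balcony)
near["color"].setValue([1.0, 1.0, 1.0, 1.0])

mid_mask = nuke.nodes.Roto()                             # roto isolating the mid-ground
near_mask = nuke.nodes.Roto()                            # roto isolating the foreground

m1 = nuke.nodes.Merge2(operation="over", inputs=[far, mid, mid_mask])
m2 = nuke.nodes.Merge2(operation="over", inputs=[m1, near, near_mask])

# copy the grey map into a depth channel on the plate, then defocus from it;
# a Flare can be piped into ZDefocus's filter input for bladed bokeh
with_depth = nuke.nodes.Copy(inputs=[plate, m2])
with_depth["from0"].setValue("rgba.red")
with_depth["to0"].setValue("depth.Z")
defocus = nuke.nodes.ZDefocus2(inputs=[with_depth])
```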
To add more depth, I added VolumeRays with slow-moving light rays, multiplied by a large pass of Noise to imitate dust. I used Hugo’s Desk’s tutorial on adding volumetric lights as a guide (Guerra, 2018).
Figure 9: Volume Rays tutorial by Hugo Guerra (2018)
Figure 10: Final background comp
All of the added movement was very subtle, so as not to distract from the main focus, but enough to break up the stillness of the background image.
Core and Soft Edge Mattes
To begin creating the alpha channel, I first denoised my footage. I then keyed a core matte using Primatte (Figure 11) and a soft edge matte with Keylight (Figure 12).
Figure 11: Core matte
Figure 12: Soft edge matte
I had to clean inside and outside the masks separately for the actors, James and Paola, because they had different levels of detail that required different values. The edge detail also required different values between the beginning of the sequence, when they are close to the camera, and the end, when they walk away, because of a slight lighting change. The IBK Gizmo worked better for the beginning section, while Keylight was better for the end. Because they walk into the shot from offscreen, I had a few frames of pure greenscreen, which I ran through a FrameBlend and used as the colour plate input for the IBK Gizmo and several other keys.
Figure 13: Cleaned matte
Fine Detail Mattes
The core and edge mattes did not capture the motion blur and fine detail of the fan that Paola holds. Luckily it is a black fan, so I was able to get a good luma key from it, but because there were tracking markers on the greenscreen, some markers remained in the matte. I extracted a clean greenscreen by dividing the luminance of the footage by the luminance of the frame-blended empty greenscreen, then multiplying the result by the difference between the empty greenscreen and a Constant of the same shade of green as the screen (Figure 14).
Figure 14: Original and cleaned greenscreen
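A rough reconstruction of that screen-flattening step as a small node tree might look like this (the file paths, the green Constant value, and the Rec.709 luminance weights are placeholders):

```python
# Luminance ratio of footage to empty greenscreen, multiplied by the
# difference between the empty greenscreen and a flat green Constant.
import nuke

footage = nuke.nodes.Read(file="greenscreen_denoised.####.exr")    # placeholder
clean_plate = nuke.nodes.Read(file="empty_greenscreen_blend.exr")  # frame-blended empty screen (placeholder)

# per-pixel luminance(footage) / luminance(empty greenscreen)
luma = "(0.2126*{0}r + 0.7152*{0}g + 0.0722*{0}b)"
ratio = nuke.nodes.MergeExpression(inputs=[clean_plate, footage])  # B = clean plate, A = footage
for i in range(3):
    ratio["expr{}".format(i)].setValue("{} / {}".format(luma.format("A"), luma.format("B")))

# flat Constant in the same shade of green as the screen (guessed value)
screen_green = nuke.nodes.Constant()
screen_green["color"].setValue([0.12, 0.55, 0.15, 1.0])

# (empty greenscreen - constant), multiplied by the luminance ratio
diff = nuke.nodes.Merge2(operation="difference", inputs=[screen_green, clean_plate])
flat_screen = nuke.nodes.Merge2(operation="multiply", inputs=[diff, ratio])
```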
Using the new footage with the clean greenscreen, extracting a luma matte of the fan gave a much better result (Figure 15).
Figure 15: Matte with fan key improved
For the feather on Paola’s hat, I converted the footage to HSV colourspace, because the saturation channel (now sitting in the green channel) gave the most separation between the feather and the background. After grading and inverting it, I shuffled it into the alpha and Keymixed it with the other mattes (Figure 16).
Figure 16: Feather matte improved
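A minimal sketch of that feather matte chain, with placeholder grade values and file paths:

```python
# Saturation channel (green after the HSV conversion) graded, inverted, and
# shuffled into the alpha, then Keymixed with the other mattes.
import nuke

footage = nuke.nodes.Read(file="greenscreen_denoised.####.exr")   # placeholder
core_matte = nuke.nodes.Read(file="combined_mattes.####.exr")     # mattes built so far (placeholder)

hsv = nuke.nodes.Colorspace(inputs=[footage])
hsv["colorspace_out"].setValue("HSV")            # r = hue, g = saturation, b = value

sat = nuke.nodes.Shuffle(inputs=[hsv])
sat["alpha"].setValue("green")                   # saturation channel into the alpha

graded = nuke.nodes.Grade(inputs=[sat], channels="alpha")
graded["white"].setValue(1.4)                    # placeholder gain to separate the feather
feather = nuke.nodes.Invert(inputs=[graded], channels="alpha")

# blend the feather matte with the other mattes through a roto of that area
feather_roto = nuke.nodes.Roto()
combined = nuke.nodes.Keymix(inputs=[core_matte, feather, feather_roto])
```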
Figure 17: Final matte
Keying: Foreground and Background Treatment
To despill the footage, I used Keylight with the green channel set to 1, along with a HueCorrect node with green suppression. After showing my work in progress to my tutor and to a professional in the industry, I was told that the skin tones looked too flattened. So I divided the Keylight despill result by the original greenscreen footage to get the intensity difference. I increased the gain on the red and blue channels and lowered the green to make it more pinkish, then multiplied this back onto the despilled footage. This put the colour variation of the original back on top, with a pink tint instead of green.
Figure 18: Original, despilled, and pinks added back
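Sketched as nodes, that spill-restore step might look something like this (the gain values are guesses, and a Read stands in for the actual Keylight despill output):

```python
# Ratio of the despilled plate to the original, tinted towards pink and
# multiplied back on. Values and paths are placeholders.
import nuke

original = nuke.nodes.Read(file="greenscreen_denoised.####.exr")   # placeholder
despilled = nuke.nodes.Read(file="despill_result.####.exr")        # stand-in for the Keylight despill output

# intensity difference: despilled / original (Merge divide is A/B, so A = despilled)
ratio = nuke.nodes.Merge2(operation="divide", inputs=[original, despilled])

# raise red and blue gain, lower green, to push the variation towards pink
tint = nuke.nodes.Grade(inputs=[ratio])
tint["white"].setValue([1.15, 0.85, 1.15, 1.0])

# multiply the tinted variation back onto the despilled footage
restored = nuke.nodes.Merge2(operation="multiply", inputs=[despilled, tint])
```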
I found the most challenging part of this project to be the edge treatment. I tried multiple techniques, but ended up using edge mattes from EdgeDetect and Matrix to grade through, making the edges darker and more orange. I am still not happy with the result, because I want the edges to interact with the luminosity of the background. So I took the difference between the despilled footage and the original, then multiplied the result by a blurred version of the background image. This helped, but it still gave me edges that were too bright in the dark areas of the background. In the time given, however, I was unable to reach a result I was happy with.
I found, a little too late, some very in-depth keying tutorials by CreativeLyons that may help me review my script when I come back to this shot (YouTube, 2018). Hopefully they will point the way to a better despill and edge treatment.
Figure 19: Keying tutorials by CreativeLyons (2018)
Feather & Fan
To regain details that were lost in the key and despill, I locally graded the feather on Paola’s hat using the same method used to extract its alpha, switching to HSV colourspace. I made it brighter so the light appears to shine through it, as it is not a solid object. I also locally graded the fan through its alpha to make it darker, since the fan is black.
Lastly, for the background treatment, I used the additive keyer method to add back details of the rest of the foreground, similar to the fan and feather. I shuffled the difference between the original greenscreen footage and the greenscreen plate into individual RGB channels to get a black-and-white version of each channel, which I then blended together. I weighted the red channel the heaviest because it had the most detail. This was multiplied by a desaturated version of the background, so that the image reacts to the background luminosity. Finally, the result was added on top of the background, underneath the keyed foreground, to bring back highlight details.
Figure 20: Background with additive keyer (faint outline of foreground)
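A loose sketch of that additive-keyer pass, with placeholder channel weights and file paths:

```python
# Difference from the empty screen, collapsed to a weighted single channel,
# multiplied by a desaturated background, and plussed under the keyed FG.
import nuke

plate = nuke.nodes.Read(file="greenscreen_denoised.####.exr")     # placeholder
clean_green = nuke.nodes.Read(file="empty_greenscreen_blend.exr") # placeholder
background = nuke.nodes.Read(file="palace_background.exr")        # placeholder

# difference between the plate and the empty screen keeps edge/highlight detail
diff = nuke.nodes.Merge2(operation="difference", inputs=[clean_green, plate])

# weighted blend of the channels, with red weighted heaviest
detail = nuke.nodes.Expression(inputs=[diff])
for i in range(3):
    detail["expr{}".format(i)].setValue("0.6*r + 0.25*g + 0.15*b")

# make the detail react to the background luminosity
desat = nuke.nodes.Saturation(inputs=[background])
desat["saturation"].setValue(0)
lit = nuke.nodes.Merge2(operation="multiply", inputs=[desat, detail])

# add on top of the background; the keyed foreground is merged over this later
additive = nuke.nodes.Merge2(operation="plus", inputs=[background, lit])
```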
Final Colour Grading
For the final look, I graded the composited footage globally to give it a golden look, almost like golden hour. I started by matching it to my original artistic inspirations, then graded it to be much brighter.
Figure 23: Final grade
To add grain, I kept the grain-free background plate and added grain to it to match the foreground. I added the grain through the keyed alpha channel, so that the foreground would not receive two passes of grain, and also through the luminance of the background, so that only the darker areas received grain. In addition, I added back the high frequencies of the background with a Laplacian filter, so that the edges were more pronounced and grainier. Finally, I desaturated the grain to almost black-and-white, so that it only affects the luminance of the footage and not the colour, and added it on top as the last step.
Figure 24: Final comp
To demonstrate various skills in projections, I wanted to do a location replacement on a scene. My original footage was shot at Southbank in London, but I wanted to change the setting to Barcelona.
Figure 25: Original footage
The things that needed to be done were:
- removing the couple in the background
- replacing the scene from the cars onwards, meaning a sky replacement and adding some kind of new object to integrate the foreground with the new background
My initial idea was to replace the posts with a railing, which overlooked the city of Barcelona.
Figure 26: Initial concept
After receiving feedback from my instructor and the same mentor who reviewed my keying shot, I was encouraged to really push the shot and make it look completely different from, and better than, the original; otherwise there was no point. I researched many different look-out locations around the world with views over a skyline, and found one in Barcelona that I liked a lot: Park Guell, where instead of a railing there is an interesting wavy bench looking out over the city.
Figure 28: Park Guell
I decided to try adding this bench into my scene and have it look out over the same skyline. But first, I had to remove the unwanted elements of the scene.
This footage was shot by one of my peers, so luckily I had the information for the camera and lens used. I undistorted my footage with LensDistort_L using its line analysis, and input the camera and lens information into my camera tracker.
Figure 29: Camera track
I set the ground points as my ground plane and followed the line of the tiles to set my z-axis. This way, my 3D world would be aligned roughly with my scene, making translations easier.
I recreated my scene in 3D using ModelBuilder with my render camera. I needed models for the building, including its entrance and columns, the ground, and a piece of the curb.
Figure 30: 3D models of environment
I also unwrapped my own UVs because the defaults weren’t as useful.
Figure 31: UVs
Projecting clean patches
I created all my clean patches by clone painting with RotoPaint, sometimes separating the high and low frequencies with a Laplacian filter to make my paintwork more seamless.
Ground, entrance, and curb
I needed to project clean ground where the couple’s feet were, as well as where there was a streetlamp. The streetlamp would not have made sense to keep in the scene, because I was going to replace the road entirely with new ground. I projected my clean patches onto the ground geometry made in ModelBuilder. I also projected a clean patch of the entrance onto a card aligned and snapped into place against the building geometry. Finally, I projected a frame-held clean frame of the curb onto a cube.
For the clean-up on the wall, because it is all flat planes with a slight corner, I decided to unwrap the UVs and edit the texture to make the corner seamless. After RotoPainting a clean patch, I applied it as a material to the building geometry.
Figure 32: Edited UV texture
I used the same method for the back wall of the building: UV unwrapping, RotoPainting the image, then applying it as a material to a card snapped to the building.
Figure 33: Original and final clean-up
Adding bench to scene
Creating the texture for projection
It was very difficult to find an image of the bench suitable for projections because of its curved surfaces. The real bench is also quite large, so I needed an image taken from far enough away to include most of it, but with high enough resolution to project without losing quality. I ended up opting for this image, which I duplicated and mirrored so that I had double the length to work with.
Figure 34: Image of bench used for projections
Figure 34b: Image of bench mirrored
Creating the model
Using the same image (Figure 34), I built a model of the bench in ModelBuilder by viewing through the render camera. This results in warped geometry, because the camera used for the image and the camera for my footage are not identical, and because I had already altered the perspective of the image by mirroring it. That was fine, because I would project the still image through a frame-held camera onto the model and render it out with the render camera. Because I used the same render camera, the model moves the same way as the rest of the models.
Figure 35: 3D model of bench
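A minimal version of that projection setup might be scripted like this (class names such as Camera2, Project3D2, and ReadGeo2 vary between Nuke versions, and all paths and frame numbers are placeholders):

```python
# Still image projected through a frame-held copy of the render camera onto
# the bench geometry, rendered with the moving render camera.
import nuke

still = nuke.nodes.Read(file="bench_mirrored.jpg")        # mirrored bench image (placeholder)

render_cam = nuke.nodes.Camera2()                          # solved camera from the CameraTracker
proj_cam = nuke.nodes.FrameHold(inputs=[render_cam])       # freeze the camera on the projection frame
proj_cam["first_frame"].setValue(1001)

proj = nuke.nodes.Project3D2(inputs=[still, proj_cam])     # input 0 = image, input 1 = camera

bench = nuke.nodes.ReadGeo2(file="bench_model.obj")        # bench geometry from ModelBuilder (placeholder)
bench_mat = nuke.nodes.ApplyMaterial(inputs=[bench, proj])

render = nuke.nodes.ScanlineRender()
render.setInput(1, bench_mat)                              # obj/scene input
render.setInput(2, render_cam)                             # moving render camera
```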
This, of course, resulted in some stretching in the projection. To fix it, I did a second projection with the same image, but from a slightly altered angle so that there was less stretching, and merged this with the original, main projection.
To seamlessly integrate the two projections, which were projected onto separate but identical models, I had to do some tweaking in both 2D and 3D. I used a single Axis as the global controller for both projections, so the models move together when I position them in the scene.
Figure 36: Projection node graph
The two projections did not match up exactly, so with a combination of SplineWarp and RotoPainting, I altered the image for both projections so that they merged together seamlessly.
Figure 37: 3D model with projected image
Creating the ground texture
I projected the ground from the original image after extending it with RotoPaint. I then used a planar UVProject from directly above to unwrap the UVs of the ground, giving me a flat texture to edit. I multiplied a tile-pattern texture on top of my original ground. The pattern image was very low resolution, so I used a Tile2 to extend it. I now realize that I transformed the pattern after the tiling, which was the wrong order for concatenation and gave me a very blurred image to multiply with. However, when I used a proper high-resolution pattern image, Moiré patterns (odd mesh-like stripes in the image) started appearing because of the excessive tiling. The low-quality image gave me a pattern much closer to my original foreground, so I kept it as it was, even though I lost a lot of image quality.
Diffused and contact shadows
To figure out where to place the shadows cast by the bench and the building on my new ground, I brought the respective models into the ScanlineRender used for the UV unwrapping from above. When I fed a white Constant into the models as their image, it showed up as white in the UV unwrap. I used this as a reference for creating Roto shapes to grade through.
Figure 38: Unwrapped UVs of ground (with reference and with shadow)
With such a large texture image (over 8K), I had to use VectorBlur to add motion blur to the ground, because the ScanlineRender was much too slow and lost image quality when I added sampling. I found that ScanlineRender does not run on the GPU (Foundry Community, 2018), whereas with a good GPU in my PC, VectorBlur was much faster and more controllable.
Background replacement (adding skyline)
I used this image to project in the background, furthest away from the camera.
Figure 39: Skyline for projection
I separated the front buildings from the rest of the background in order to create parallax by projecting onto cards in different positions in Z space. Additionally, I added footage of birds flying to add some movement into the sky.
Figure 40: Final projection of skyline
Integrating elements with foreground
Once all the projections were finished, I added lens distortion back and graded them to match the foreground footage. I also wanted some defocus in the background, but I knew I could not add too much, because the original footage has only a little defocus, being quite a wide shot. I used the depth passes from the ScanlineRender as the depth input for ZDefocus. Finally, I added grain as the last step.
Adding foreground elements back into scene
I wanted to add the posts back in to keep the sense of parallax with the ground, and also to make the change of tiling more plausible, as if it marked a real border. I stabilized the footage by projecting the components onto their respective geometry and unwrapping the UVs. I applied the rotoscoped and premultiplied footage as a material back onto the geometry, and added a VectorBlur on the alpha to make sure it had motion blur.
Figure 41: UV unwrap with roto
For this shot, I wanted to create a nostalgic feel. As a reference for the grade, I used one of my favourite films, Her (2013), which does this very well.
Figure 42: Her (2013)
Viewing this image and my shot through the Waveform and Vectorscope, I could see that to create this look I needed to raise the reds, and the greens a little, to make it orange. I first did a technical grade to even out the colours and make sure no whites were being clipped before grading. I used a luminance key to create more contrast between the sky and the foreground, making it look more like evening. I then generated a still frame of Noise and merged it over my footage with a screen operation, to unevenly flatten the blacks and create a more film-like look. Finally, I added a vignette as a finishing touch.
Figure 43: Before and after of colour grading
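As a sketch, the grade might be built up like this (all values are guesses rather than the ones from my script):

```python
# Orange-leaning grade, plus a still frame of Noise screened over the comp
# to flatten the blacks. Values and paths are placeholders.
import nuke

comp = nuke.nodes.Read(file="projection_comp.####.exr")    # placeholder path

# push towards orange: more red gain, a touch more green, slightly less blue
look = nuke.nodes.Grade(inputs=[comp])
look["white"].setValue([1.2, 1.05, 0.95, 1.0])

# a still frame of Noise, screened over the footage to flatten the blacks unevenly
noise = nuke.nodes.Noise()
filmic = nuke.nodes.Merge2(operation="screen", inputs=[look, noise])

# a soft-edged Roto graded darker over the corners would then add the vignette
```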
Figure 44: Final comp
Beauty and the Beast. (2017). Directed by B. Condon. USA: Mandeville Films, Walt Disney Pictures.
Foundry Community. (2018). ScanlineRenderer speed. [online] Available at: https://community.foundry.com/discuss/topic/124400 [Accessed 16 Jul. 2018].
Guerra, H. (2018). How to make Volume Rays using The Foundry Nuke – Compositing Workshop. [online] YouTube. Available at: https://www.youtube.com/watch?v=thXx_LNlPSU [Accessed 16 Jul. 2018].
Her. (2013). Directed by S. Jonze. USA: Annapurna Pictures.
Raasch, J. (2018). Defocusing an image in Nuke. [online] Joe Raasch. Available at: http://www.joeraasch.com/defocusing-an-image-in-nuke/ [Accessed 16 Jul. 2018].
YouTube. (2018). Advanced Keying Breakdown. [online] Available at: https://www.youtube.com/playlist?list=PLt2Nu4KGXJ2iXe7s-ydCQ9u1tTzzApmJX [Accessed 16 Jul. 2018].