Module 03: Keying & Projections

Keying

 

Figure 0: Original greenscreen footage

 

Concept & Artistic Influences

The idea behind the concept was to show a courtship between a couple, set in the Victorian era. I initially wanted to place the couple in a garden outdoors and have them walk into an archway of leaves, as if this is their meeting point for a stroll in the garden.


Figure 1 – Original background image

Due to a large mismatch in perspective with my footage, in the end I composited them into an indoor environment, overlooking the interior of a palace from a balcony.


Figure 2: Final background image

Like my previous module, I decided to use Beauty and the Beast (2017) as my main artistic influence. A couple in a palace, meeting up for a courtship–it sounded exactly like the famous ballroom scene from Beauty and the Beast.


Figure 3: Ballroom scene from Beauty and the Beast

For the colour grading, I also looked at romance films to see what style of lighting they use to convey love between characters. These kinds of shots are generally bright and warm, and the lights seem to glow. Softness was the key.

 

 


Figure 4: Film inspirations

Background: Adding Movement

Flame Flicker

To create the illusion that the background was moving footage and not a still image, I added animated flames on the candles, as well as the light they cast on the walls. In order to do this, I first had to remove the original light bulbs from the fake candles in the image (Figure 8) by keying them and eroding the background inwards, and then add back in my own animated flames.

 

 


Figure 5: Final clean-up of background

To animate the flames, I used animated Noise to distort the lights that I cut out. To make the noise animate in a random, flame-like manner, I used the random wave expression in the “z” parameter of Noise:

random((frame)/30)*(5)

Generic form:

random((frame+offset)/waveLength) * (maxVal-minVal) + minVal

In other words: move randomly between 0 and 5, with the seed driven by the current frame number divided by 30.
Figure 6: Random Wave expression (Carson, “Nuke Wave Expressions”)

The maximum value in the “z” parameter was 5 and the minimum was 0, so I used those values for my range. I used different Noise sizes and wave speeds for the flames in the foreground and the flames in the background, because flames that are further away would appear to move less than those closer to the camera. I also added a Transform to animate the noise moving upwards, imitating the motion of flames (Figure 7).
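
To double-check how the parameters map onto the expression I used, here is a small Python analogue of the generic formula. This is only a sketch: it assumes Nuke’s random(seed) returns a repeatable pseudo-random value between 0 and 1 for a given seed, and the defaults mirror the values I used.

import random

def random_wave(frame, wave_length=30.0, offset=0.0, min_val=0.0, max_val=5.0):
    # Python analogue of: random((frame+offset)/waveLength) * (maxVal-minVal) + minVal
    # Defaults mirror the values I used on the Noise "z" parameter (0 to 5, wavelength 30).
    seed = (frame + offset) / wave_length
    rng = random.Random(seed)          # deterministic for a given seed
    return rng.random() * (max_val - min_val) + min_val

# e.g. random_wave(120) gives a repeatable value between 0 and 5 for frame 120
# (up to the difference between Python's and Nuke's random generators).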


Figure 7: Animated noise

I then used this as a distortion map to warp the lights, making them flicker like flames.

Figure 8: Animated flames

I also created a fake camera move with a Transform at the very end of the comp, making the camera slowly drift upwards.

Defocus & Atmospherics

In order to get an accurate defocus, I created a depth map using different shades of grey Constants masked for each plane in depth. I found a useful tutorial which explained how to use ZDefocus to get nice bokeh effects, which I wanted for the candlelights (Raasch, 2018). Instead of the built-in filter shape in ZDefocus, I used a Flare as the filter input to get bladed bokehs.
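
Conceptually, the depth map is just the plane mattes stacked into one grey image, nearest plane last. A hedged sketch of that stacking (the array names and shapes are mine, not a Nuke script):

import numpy as np

def build_depth_map(plane_masks, greys):
    # plane_masks: list of (H, W) alphas ordered far to near;
    # greys: the constant grey value assigned to each plane.
    depth = np.zeros_like(plane_masks[0])
    for mask, grey in zip(plane_masks, greys):
        depth = mask * grey + (1.0 - mask) * depth   # nearer planes overwrite farther ones
    return depth   # fed to ZDefocus as the depth channel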

To add more depth, I added VolumeRays with slow-moving light rays, multiplied by a large pass of Noise to imitate dust. I used Hugo’s Desk’s tutorial on adding volumetric lights as a guide (Guerra, 2018).

Figure 9: Volume Rays tutorial by Hugo Guerra (2018)


Figure 10: Final background comp

All of the movement added was very subtle, to not distract from the main focus, but enough to break up the stillness of the background image.

Keying: Alpha

Core and Soft Edge Mattes

To begin creating the alpha channel, I denoised my footage first. I then keyed a core matte using Primatte (Figure 11), and a soft edge matte with Keylight (Figure 12).

Figure 11: Core matte


Figure 12: Soft edge matte

I had to clean inside and outside the masks separately for the actors, James and Paola, because they had different levels of detail that required different values. In addition, the edge detail at the beginning of the sequence, when they are close to the camera, and at the end, when they walk away, required different values because of the slight lighting change. The IBK Gizmo worked better for the beginning section, while Keylight was better for the end. Because they walk into the shot from offscreen, I had a few frames of pure green screen that I used with a FrameBlend as my colour plate input for the IBK Gizmo and many other keys.


Figure 13: Cleaned matte

Fine Detail Mattes

Paola’s fan

The core and edge mattes did not capture the motion blur and fine details of the fan that Paola holds. Luckily, it is a black fan, so I was able to pull a good luma key from it; however, because there were tracking markers on the greenscreen, some of the markers remained. To deal with this, I extracted a clean greenscreen by dividing the luminance of the footage by the luminance of the frame-blended empty greenscreen, then multiplying the result by the difference between the empty greenscreen and a Constant of the same shade of green as the greenscreen (Figure 14).

Original greenscreen

Cleaned greenscreen
Figure 14: Original and cleaned greenscreen

Using the new footage with the clean greenscreen, extracting a luma matte of the fan gave a much better result (Figure 15).

Starting matte

Keymix of matte and fan luma key
Figure 15: Matte with fan key improved

Paola’s feather

For the feather on Paola’s hat, I converted the footage to HSV colorspace, because the Saturation channel (which was now in the green channel) gave the most separation between the feather and the background. After grading and inverting it, I shuffled it out to the alpha and Keymixed it with the other mattes (Figure 16).

Original

With saturation key of feather
Figure 16: Feather matte improved


Figure 17: Final matte

Keying: Foreground and Background Treatment

Despill

To despill the footage, I used Keylight with the green channel set to 1, along with a HueCorrect with green suppression. After showing my work in progress to my tutor and to a professional in the industry, I was told that the skin tones looked too flattened. To fix this, I divided the Keylight despill result by the original greenscreen footage to get the intensity difference, increased the gain on the red and blue channels and lowered the green channel’s to push it towards pink, then multiplied this back onto the despilled footage. This restored the variation of colour from the original on top of the despill, but with a pink tint instead of green.
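
In NumPy terms, the operation I described works roughly like the sketch below; the gain values are illustrative, not the ones from my Grade node.

import numpy as np

def restore_skin_variation(original, despilled, gains=(1.1, 0.9, 1.05), eps=1e-6):
    # original, despilled: float RGB arrays of shape (H, W, 3).
    ratio = despilled / (original + eps)        # per-channel intensity difference
    ratio = ratio * np.array(gains)             # raise red/blue, lower green -> pinkish
    return despilled * ratio                    # multiply the tinted variation back on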

 


Figure 18: Original, despilled, and pinks added back

Edge Treatment

I found the edge treatment to be the most challenging part of this project. I tried multiple techniques, but ended up using edge mattes from EdgeDetect and Matrix to grade through and make the edges darker and more orange. I am still not happy with the result, because I want the edges to interact with the luminosity of the background. So I took the difference between the despilled footage and the original, then multiplied the result by a blurred copy of the background image. This helped, but it still gave me edges that were too bright in the dark areas of the background. In the given time, however, I was unable to come to a result I was happy with.

I found, a little too late, some very in-depth keying tutorials by CreativeLyons which may help me review my script when I come back to this shot (YouTube, 2018). Hopefully these will point towards a better despill and edge treatment.

Figure 19: Keying tutorials by CreativeLyons (2018)

Regaining Details

Feather & Fan

In order to gain back details that were lost in the key and despill, I locally graded the feather from Paola’s hat using the same method as the one used to extract the alpha, which was to switch to HSV colorspace. I made it brighter to look like the light was shining through it, as it is not a solid object. I also locally graded the fan with its alpha to make it darker, because the fan is black.

Additive Keying

Lastly, for the background treatment, I used the additive keyer method to add back details of the rest of the foreground, similar to the fan and feather. I shuffled the difference between the original greenscreen footage and the greenscreen plate out into individual RGB channels to get a black-and-white version of each channel, which were then blended with Blend Channels. I weighted the red channel the heaviest because it had the most detail. This was then multiplied by a desaturated version of the background, so that the image reacts with the background luminosity. Finally, the result was added on top of the background, under the keyed foreground, to add back highlight details.
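
A rough sketch of that additive-keyer step; the channel weights and variable names are my own assumptions, not values from the comp.

import numpy as np

def additive_key_detail(footage, gs_plate, background, weights=(0.6, 0.25, 0.15)):
    # footage, gs_plate, background: float RGB arrays of shape (H, W, 3).
    diff = np.abs(footage - gs_plate)                  # what the foreground changed
    detail = (diff * np.array(weights)).sum(axis=-1)   # weighted blend, red heaviest
    bg_luma = background.mean(axis=-1, keepdims=True)  # desaturated background
    add_layer = detail[..., None] * bg_luma            # react with the bg luminosity
    return background + add_layer                      # plus-merged under the keyed FG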


Figure 20: Background with additive keyer (faint outline of foreground)

Final Colour Grading

For the final look, I graded the composited footage globally to have a golden look, almost like the golden hour. I started by matching it to my original artistic inspirations, then graded it to be much brighter.


Figure 23: Final grade

Matching Grain

To add grain, I kept the denoised background plate and added grain to it to match the foreground. I added the grain through the keyed alpha channel, so that the foreground would not receive two passes of grain, and also through the luminance of the background, so that only the darker areas had grain. In addition, I added the high frequencies of the background, extracted with the Laplacian filter, so that the edges were more pronounced and had more grain. Finally, I desaturated the grain to almost black-and-white, so that it only affects the luminance of the footage and not the colour, and added it on top as the last step.
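
As a sketch, the grain masking I described amounts to something like the following; the input shapes and the way the Laplacian edge pass is folded in are assumptions on my part.

import numpy as np

def match_grain(background, grain_rgb, fg_alpha, bg_edges, strength=1.0):
    # background, grain_rgb: (H, W, 3); fg_alpha, bg_edges: (H, W, 1) masks.
    grain = grain_rgb.mean(axis=-1, keepdims=True)     # desaturated, luminance-only grain
    bg_luma = background.mean(axis=-1, keepdims=True)
    mask = (1.0 - fg_alpha) * (1.0 - bg_luma)          # background only, darker areas more
    mask = mask + np.abs(bg_edges)                     # extra grain on edges (Laplacian pass)
    return background + grain * mask * strength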

Figure 24: Final comp

 


 

Projections

Concept

To demonstrate various skills in projections, I wanted to do a location replacement of a scene. My original footage was shot in Southbank of London, but I wanted to change the shot to be set in Barcelona.

Figure 25: original footage

The things that needed to be done were:

  • removing the couple in the background
  • replacing the scene from the cars onwards, meaning a sky replacement and adding some kind of new objects to integrate the foreground with the new background

My initial idea was to replace the posts with a railing, which overlooked the city of Barcelona.


Figure 26: Initial concept

After receiving feedback from my instructor and the same mentor who reviewed my keying shot, I was encouraged to really push the shot and make it look completely different; to make it look better than the original, otherwise there was no point. I researched many different kinds of look-out locations around the world that view over a skyline, and found one in Barcelona that I liked a lot: Park Guell, where, instead of a railing, there is an interesting wavy bench that looks out over the city.


Figure 28: Park Guell

I decided to try adding this bench into my scene and have it look over the same skyline. Before that, however, I first had to remove the unwanted elements of the scene.

3D Clean-Up

Camera Tracking

This footage was shot by one of my peers, so luckily I had the information for the camera and lens used. I undistorted my footage with LensDistort_L using the line analysis, and input this information into my camera tracker.

Figure 29: Camera track

I set the ground points as my ground plane, and followed the line of the tiles to set my z-axis. This way, my 3D world would be aligned roughly the same as my scene, making translations easier.

ModelBuilder

I recreated my scene in 3D using the ModelBuilder with my render camera. I required a model for the building, including its entrance and columns, the ground, and a piece of the curb.


Figure 30: 3D models of environment

I also unwrapped my own UVs because the defaults weren’t as useful.


Figure 31: UVs

Projecting clean patches

I created all my clean patches by clone painting with RotoPaint, sometimes separating the high and low frequencies with the Laplacian filter to make my paintwork more seamless.

Ground, entrance, and curb

I needed to project the ground where the couple’s feet were, as well as where there was a streetlamp. The streetlamp would not have made sense to keep in the scene, because I was going to replace the road entirely with a new ground. I projected my clean patches onto the ground geometry made in ModelBuilder. I also projected a clean patch of the entrance onto a card that was aligned and snapped into place against the building geometry. Finally, I projected a frame-held clean frame of the curb onto a cube.

Walls

For the cleanup on the wall, because it’s all flat planes and there is a slight corner, I decided to unwrap the UVs and edit the texture to make the corner seamless. After Rotopainting a clean patch, I applied this as a material to the building geometry.


Figure 32: Edited UV texture

I used the same method for the back wall of the building, UV unwrapping and RotoPainting the image, then applying it as a material to a card snapped to the building.

 


Figure 33: Original and final clean-up

Adding bench to scene

Creating the texture for projection

It was very difficult to find an image of the bench that was suitable for projections because of its curved surfaces. The real bench is also quite large, so I needed an image that showed the bench from far enough away to include most of it, but with high enough resolution to project without losing quality. I ended up opting for this image, which I duplicated and mirrored so that I had twice as much to work with.


Figure 34: Image of bench used for projections


Figure 34: Image of bench mirrored

Creating the model

Using the same image (Figure 34), I built a model of the bench in ModelBuilder, viewing through the render camera. This resulted in warped geometry, because the camera used for the image and the camera used for my footage are not identical, and because I had already altered the perspective of the image by mirroring it. That was fine, because I would project the still image from a frame-held camera and render it out with the render camera. Since I used the same render camera, the model moves the same way as the rest of the models.


Figure 35: 3D model of bench

This, of course, resulted in some stretching in the projection. To fix it, I decided to do a second projection with the same image, but from a slightly altered angle so that there was less stretching, and merge this with the original, main projection.

In order to seamlessly integrate the two projections, which were projected on separate but identical models, I had to do some tweaking in both 2D and 3D. I used an axis for both projections as the global controller, which would move the models together when I position it in the scene.


Figure 36: Projection node graph

The two projections did not match up exactly, so with a combination of SplineWarp and RotoPainting, I altered the image for both projections so that they merged together seamlessly.


Figure 37: 3D model with projected image

Ground replacement

Creating texture for projection

I projected the ground from the original image after extending it with RotoPaint. I then used a planar UVProject from directly above to unwrap the UVs of the ground and get a flat texture to edit. I multiplied a texture of a tile pattern on top of my original ground. The pattern image was very low resolution, so I used a Tile2 to extend it. I now realize that I transformed the pattern after the tiling, which was the wrong order for concatenation and gave me a very blurred image to multiply with. However, when I tried a proper high-resolution pattern image, moiré patterns (odd mesh-like stripes in the image) started appearing because of the heavy tiling. The low-quality image gave me a pattern much closer in look to my original foreground, so I kept it, even though I lost a lot of image quality.

Diffused and contact shadows

In order to figure out where to place the shadows cast by the bench and the building onto my new ground, I brought the respective models into the ScanlineRender used for the top-down UV unwrap. When I fed a white Constant into the models as their image, they showed up as white on the UV unwrap. I used this as a point of reference for creating Roto shapes to grade through.

 

Figure 38: Unwrapped UVs of ground (with reference and with shadow)

Motion blur

With such a large texture image (over 8K), I had to use VectorBlur to add motion blur to the ground, because the ScanlineRender was much too slow and lost image quality when I added sampling. From my research, ScanlineRender does not run on the GPU (Foundry Community, 2018). With a good GPU in my PC, VectorBlur was much faster and more customizable.

Background replacement (adding skyline)

Matte painting

I used this image to project in the background, furthest away from the camera.


Figure 39: Skyline for projection

I separated the front buildings from the rest of the background in order to create parallax by projecting onto cards in different positions in Z space. Additionally, I added footage of birds flying to add some movement into the sky.


Figure 40: Final projection of skyline

Integrating elements with foreground

Once all the projections were finished, I added lens distortion and graded the result to match the foreground footage. I also wanted to add some defocus in the background, but I knew I could not add too much, because the original footage has only a little defocus, since it is quite a wide shot. I used the depth passes that came from ScanlineRender as the depth input for ZDefocus. Finally, I added grain as the last step.

Adding foreground elements back into scene

Rotoscoping

I wanted to add the posts back in to keep the sense of parallax with the ground, and also to make the change of tiling more plausible, as if it were a real border. I stabilized the footage by projecting the components onto their respective geometries and unwrapping the UVs. I applied the rotoscoped and premultiplied footage as a material back onto the geometry, and added a VectorBlur to the alpha to make sure it had motion blur.


Figure 41: UV unwrap with roto

Colour Grading

For this shot, I wanted to create a nostalgic feel. I used one of my favourite films, Her, which does this very well, as a point of reference to imitate the film grading.


Figure 42: Her (2013)

By viewing this image and my shot through the Waveform and Vector Scope, I knew that to create this look I needed to raise the reds and, a little, the greens to make it orange. I first did a technical grade to even out the colours, as well as to make sure no whites were being clipped before grading. I used a luminance key to create more contrast between the sky and the foreground, to make it look more like evening time. I then generated a still frame of Noise to Merge(screen) over my footage, to unevenly flatten the blacks and create a more film-like look. Finally, I added a vignette as a final touch.
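
For reference, Merge(screen) combines two images as a + b - a*b, which is why screening a noise frame over the footage lifts the blacks without pushing the highlights past white. A one-line sketch:

def screen(a, b):
    # Merge(screen): brightens without exceeding 1 for inputs in the [0, 1] range.
    return a + b - a * b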

 


Figure 43: Before and after of colour grading

 

Figure 44: Final comp

 


Bibliography

 

Beauty and the Beast. (2017). Directed by B. Condon. USA: Mandeville Films, Walt Disney Pictures.

Foundry Community. (2018). ScanlineRenderer speed. [online] Available at: https://community.foundry.com/discuss/topic/124400 [Accessed 16 Jul. 2018].

Guerra, H. (2018). How to make Volume Rays using The Foundry Nuke – Compositing Workshop. [online] YouTube. Available at: https://www.youtube.com/watch?v=thXx_LNlPSU [Accessed 16 Jul. 2018].

Her. (2013). Directed by S. Jonze. Annapurna Pictures.

Raasch, J. (2018). Defocusing an image in Nuke. [online] Joe Raasch. Available at: http://www.joeraasch.com/defocusing-an-image-in-nuke/ [Accessed 16 Jul. 2018].

YouTube. (2018). Advanced Keying Breakdown. [online] Available at: https://www.youtube.com/playlist?list=PLt2Nu4KGXJ2iXe7s-ydCQ9u1tTzzApmJX [Accessed 16 Jul. 2018].

 

 

 

 


Module 02: Clean-Up & CG Integration

Concept

My idea was to create a shot that looked like it came from a period film with a model of a violin that I wanted to use.


Figure 1 – Model by Andrea Lacedelli.

My inspirations were French period films, such as Hugo (2011) and Beauty and the Beast (both the 1991 and 2017 release).


Figure 2 – Hugo (2011) and Beauty and the Beast (2017)

Stock Footage

I searched for footage that also required cleanup so that I could showcase multiple skills in one shot. In order for the task to be challenging enough, I required a shot with camera movement, elements to take out or replace, and a good overall composition that left space for the integration of the violin.

The footage that I decided on was one from Pond5, which was shot in an antique shop in Paris.

Figure 3 – Original footage. Courtesy of Pond5.com.

I wanted to take out the lamp on the foreground table, as it is the only electrical object in the scene, to go along with the concept of a period film. (My designated period based on this footage was, loosely, late 18th to early 20th century France.) The footage was long enough that I had plenty of clean frames to use for my cleanup. I would then put the violin in its place to maintain good composition.

Camera Track

In order to camera track this shot, I first had to mask out any reflections that would interfere with the tracker. This included the mirror on the back wall, as well as the photo frames on screen left.

I utilized 2D tracking for these sections, which I then applied as a Matchmove to a Roto shape that will be used as a mask for the camera track.

Focal Length

In addition, because I purchased this footage off a stock footage website, I did not have the specifications for the lens or camera used. I used Nuke’s default settings to camera track with the variables set to unknown, and the result seemed promising at first. When I rotated a test plane to align with the edges of the table, however, there was some sliding.


Figure 4 – Frame 1
Figure 5 – Frame 125

This led me to believe that the focal length was not accurate, because the plane appeared too close to the camera. In the settings of the camera track, I could see that Nuke calculated the lens to be a little over 80mm, which would be close to a telephoto lens. It seemed unlikely that the videographer of this clip would use a telephoto lens to shoot inside a crowded vintage shop. Telephoto lenses are usually used to capture a small area from a distance with a shallow depth of field (Nikon, 2012). This shot is almost entirely in focus, and it has minimal lens distortion, so it is also unlikely they used a wide-angle lens. They most likely needed a lens that could be close to the objects but would still capture the whole interior. I found a video on the blog NoFilmSchool that explained “lens compression”, or as they call it in the video, “perspective compression” (Renée, 2018).

Figure 6 – NoFilmSchool – “The Lens Compression Myth: What’s Really Happening to Your Images When You Switch Focal Length”

By following this logic, I estimated the lens to be mid-focal length, around 30mm to 50mm. I tested the Camera Track with the focal length set to Approximate Constant at 50mm first, then 30mm.
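
As a rough sanity check on those numbers, the horizontal field of view each focal length gives (assuming a full-frame, 36 mm wide sensor, which is an assumption on my part) shows why 80mm would be an unusually narrow view for a cramped interior:

import math

def horizontal_fov(focal_mm, sensor_width_mm=36.0):
    # Horizontal angle of view for a simple rectilinear lens model.
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for f in (30, 50, 80):
    print(f, round(horizontal_fov(f), 1))   # roughly 62, 40, and 25 degrees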


Figure 7 – Camera track set to approximately 50mm focal length.


Figure 8 – Camera track set to approximately 30mm focal length.

In Figure 7, the camera focal length was set to approximately 50mm. It gave a much better result, but the plane was still not in the same perspective as the table. The track set to 30mm (Figure 8) was the most accurate, with the plane in proper perspective on the table.

I then exported the camera track as an FBX to import into Maya, and used the same track for my 3D Cleanup.

 

Cleanup

I wanted to replace the lamp with my violin model on the marble table in the foreground, which meant I had to recreate the background that was behind the lamp. In addition, there is a mirror on the back wall in which you can see the reflection of the cameraman, which also had to be removed.


Figure 9 – Objects to remove

2D Cleanup: Mirror Reflections

The mirror reflections are on one flat surface, so 2D trackers were enough for removing them. Two different tracks were needed: one for the reflection inside the mirror, and one for the mirror frame. This way, I could create a clean patch of the reflection, which I would then stencil out with a mask of the frame.


Figure 10 – Cleaning reflections in mirror

Another challenge was to keep the smears and dirt on the mirror, which moved in a different way from the reflection itself because they are essentially on the same plane as the wall. Some important things to note were:

  • the reflections in the mirror moved slower than the smears on the mirror, because the objects being reflected are technically further away from the camera than the mirror
  • the reflection changes in perspective over time, while the smears remain constant and not in motion (aside from the camera movement)
  • the smears are translucent, meaning the reflections must be seen moving through them

I took a Frame Hold at frame 1 and frame 2 and used a Merge(difference) between the two frames. Because the reflections move much more slowly, they stay in roughly the same position while the smears on the mirror move faster. The difference, therefore, gives me mostly the smears, because they are the main thing that moved. I then used the Laplacian filter to extract only the texture of the smears, and used this as a mask for grading my clean patches. By lowering the white point, essentially making the image whiter, I created a similar effect of the reflection looking washed out by white smears. Using the same trackers and Roto masks as above, I applied the smears on top of my clean patches.
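
A hedged sketch of that smear extraction, using scipy’s Laplace filter as a stand-in for Nuke’s Laplacian; the function and argument names are mine.

import numpy as np
from scipy.ndimage import laplace   # stand-in for Nuke's Laplacian node

def smear_mask(frame1, frame2):
    # frame1, frame2: float RGB arrays (H, W, 3) from the two frame holds.
    diff = np.abs(frame1 - frame2).mean(axis=-1)   # Merge(difference), collapsed to luma
    return np.abs(laplace(diff))                   # keep only the smear texture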


Figure 11 – Laplacian filtering: Frame 1, Frame 2, difference

 

3D Cleanup: Lamp Removal

Foreground Table

For recreating the foreground marble table, I used the same plane that I used as a reference for the violin placement, resized to fit the table, so that the patch would be consistent with the violin. The shadows and reflection of the lamp had to be removed as well. This patch was then projected onto a card in the scene.

Table

In a similar fashion, I painted a clean patch of the table and projected it onto a card. I generated a Point Cloud to use as a reference for positioning my cards.


Figure 12 – Point Cloud

Projecting the top of the table and the front leg onto the same card worked out fine. The back leg, however, had to be projected onto a separate geometry to maintain the parallax, as it was on screen for a longer duration than the rest of the table. I projected it onto a cylinder to retain the proper perspective change and positioned it relative to the card from above.

Chair

For the chair, I created a clean patch for the whole chair and tried to project it onto different cards to maintain parallax. The perspective of the chair was very difficult to match up when sections of the chair were separated into different cards, so I resorted to using only one card for the whole object, then used Spline Warp to correct any misalignment.

Right Wall

The right wall contained many objects, some of which were never fully on screen, so a clean patch could not simply be extracted for them. The statue, for example, is only ever half visible in the original, un-trimmed and un-cropped footage. I therefore had to recreate the other half of the statue with a mix of cloning and painting by hand.


Figure 13 – Paint process of statue recreation

Mirror & Back Wall

I separated the projections for the mirror from the back wall because I needed to retain the reflections that I cleaned up earlier and project the whole sequence, not a frame hold. I soon realized that the lamp passes over the mirror, which meant I had to somehow recreate that portion of the mirror along with its moving reflections. Because it happens for such a short time span, I made the decision to put a frame hold right before the lamp crosses the mirror. The back wall was then projected onto a card, and the ScanlineRender of the mirror was Merged(over) back on top.

Final Result

CG Compositing

AOVS

 

 

 

Figure 14 – AOVs of violin

For my CG passes, I did my own rendering from Maya 2017 using Arnold renderer. For the lighting, I used a basic area light, along with an IBL that wrapped around a frame from the background footage. It was not an HDR image so the light values were not accurate, but I mainly used it to have the environment show in the reflections on the violin. I also created a plane for the violin to sit on with an Arnold shader called “shadowCatcher”, which is meant for rendering out cast shadows.

I had to render two different sets of passes: one with the room reflected on the violin, and one without. The reason for this is that the reflection of the IBL was baked into the specular passes, and I could not find a way to separate the reflection into its own AOV. Because I wanted to be able to manipulate the reflections on their own, the only solution was to render out the specular with and without the reflections enabled from the IBL, then take the renders into Nuke and extract the difference, which theoretically should be the raw reflections.
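
In other words, per pixel (the names below are mine, not the actual AOV names):

def extract_reflections(spec_with_ibl, spec_without_ibl):
    # Difference of the two specular renders approximates the raw IBL reflections.
    return spec_with_ibl - spec_without_ibl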


I had a similar issue with the shadowCatcher AOVs. I did not render out two different sets of AOVs for the shadowCatcher, however, simply to save render time. The shadowCatcher material allows you to render out with a transparent diffuse, so that the final beauty can be multiplied onto the footage. Therefore the beauty was only made up of specular, which included reflections, and shadows. Confusingly enough, it did come with a reflection pass, but it appeared to be the exact same as the beauty.

Figure 15 – AOVs of shadowCatcher

In order to separate the reflections, I subtracted the specular passes (where the reflections were in) from the beauty to be left with only the shadows. This way, I could grade the reflections and shadows individually, as well as Merge(plus) the reflections onto the footage and Merge(multiply) only the shadows.

The alpha of the shadowCatcher beauty was the alpha of the shadow, which was not very useful for me. I therefore had to create my own alpha matte to Unpremultiply all of the AOVs. I used the reflection pass’ alpha as a starter matte because it was the closest to a complete alpha, then graded and roto-ed out the shadows to get the full alpha matte.

To make matters more complicated, the geometry of the shadow plane did not match the table. Because of this, when the beauty was multiplied with the background, the passes (including my new alpha) did not extend to the edge of the table. This meant that I had to somehow extend the edges of the beauty, but still maintain the position of the contact points of the shadows and reflections with the violin. Since I already had a tracked camera, I stabilized the beauty pass, extended only the edges with GridWarp, then blurred the edges with Erode and a Roto to create a softer edge for the alpha matte.


Figure 16 – Node tree of camera stabilization, edit, and re-projection

 

Figure 17 – Grid Warp

While testing out the mathematics behind the passes, I made a neat discovery with the diffuse albedo pass. This pass is essentially the flat textures without any light or shadow. When I divided the sum of diffuse direct and indirect (i.e., diffuse with light) by the diffuse albedo (i.e., diffuse without light), I was left with only the light sources cast on the model.

It appears to be blown out, but when I lowered the gamma in the Viewer, I saw that it still had a full range of information in it. I realized I could edit the isolated light by itself, then multiply it back onto the textures (diffuse albedo) to change how the light affects the texture. Of course, this can only change the colour and intensity and not the direction, but it was a useful discovery nonetheless.
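
As a sketch (pass names here are illustrative), the relight trick is:

import numpy as np

def isolate_light(diffuse_direct, diffuse_indirect, diffuse_albedo, eps=1e-6):
    # Dividing the lit diffuse by the unlit albedo leaves only the light on the model.
    return (diffuse_direct + diffuse_indirect) / (diffuse_albedo + eps)

# After grading the isolated light, multiply it back over the albedo:
# relit_diffuse = graded_light * diffuse_albedo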

For both the violin and the plane, I also rendered out an ambient occlusion pass separately. I used this to add a tiny bit more shadow to the scene to help with contact shadows.

Grading

After gathering all the passes that I needed, I began my grading. The unaltered beauty of the violin of course did not match the environment perfectly, even if I used my footage as an IBL.

The background footage is quite faded, and also quite compressed as it was originally an MP4 video file.

For the light intensity, I tinted it a little less red because it was too warm, then made it slightly brighter and raised the gamma to get lighter shadow tones. I also had to isolate the bridge of the violin because it reflected too much light for a piece of wood. I used a combination of Keylight and Roto to isolate it and grade it darker.

I then desaturated the diffuse_albedo because the colour of the violin was too bold compared to the background. Similarly, the SSS passes, which contained only the violin stand, were also off-colour. They had a noise issue, which I will cover later, that caused some strange bright, saturated pixels. I lowered the contrast and turned the saturation all the way down to zero, then tinted it golden to make the material look more regal.

After separating the reflections from the specular as explained earlier, I graded the specular to be much softer. I also desaturated the yellows in the reflections because the tuners were much too saturated.

The reflections did not include the reflections of the table, so I decided to create some bounce light by using the shadow matte. I combined the matte with some ramps to isolate the parts of the violin I wanted to reflect the table, as well as subtracted the stand so that my bounce light would not affect it.

Sampling & Denoising

Another issue I had with the passes was the sampling. The main passes, such as diffuse and specular, had less noise because their sampling was set higher at render time. The quality of the others was sacrificed to cut down render time. The worst were the subsurface scattering (SSS) passes.

Due to their screen size, I could not denoise them, because the sample area was much too small. I instead used Degrain(Simple) to manually degrain each channel. The result lost a lot of detail and shadow contrast, so I used a Merge(average) to smooth out the noise while keeping the detail, then adjusted the contrast afterwards to make the shadows bolder.

I also had to denoise the shadow passes and ambient occlusion passes, although these were less prevalent.

The final result was then premultiplied by the alpha.

 


Figure 18 – Grading progression

Adding Chromatic Aberration and Artefacts

The original footage is quite compressed, so there are many artefacts present in it. I tried to mimic them by adding some chromatic aberration and compression artefacts to the violin render.

I found this tutorial online for applying chromatic aberration:

Figure 18 – Chromatic aberration tutorial

I used the GodRays method, and only applied it to the red channel in both directions, because the original footage has a reddish tint at the edges. This gave me a red and aqua colour aberration.

In addition, I used a Sharpen node followed by a Blur node to purposely lose quality in the CG render. On the back wall, the pixels where the mirror meets the wall have a yellowish tint. Therefore, I applied the Sharpen only to red and blue, essentially darkening only those channels and leaving green brighter. The Blur is just one pixel, to break up the sharp edges and make the edge integrate more seamlessly.

Finally, although the footage does not have that much grain (I think it is hidden by the more prevalent artefacts), I added a little bit of grain to add some more imperfections to the render.

Figure 19 – Final Render

Conclusion

From cleanup to CG compositing, this shot required a lot of different skills, with many more issues and challenges than anticipated. The cleanup was more straightforward and involved less problem-solving than the CG compositing, but it was nonetheless a lot of work. It was very interesting to work with AOVs and explore all the possible uses of them when combined and calculated in different ways. I had the most issues with the shadowCatcher AOVs, simply because I did not understand them very well, which also stemmed from my lack of knowledge of the material I used to render with. It was nice to have the control of rendering my own passes, but I certainly felt that I would need to do a lot more research to render out the passes correctly. (Or find a CG artist to do it for me.) I am happy that I was able to find workarounds, but it did use up time that could have been spent elsewhere.

For next steps, I would like to add a look to this shot to make it look closer to what my original inspirations were. There were other AOVs that I simply did not have the time to play around with, such as the depth passes, which I would like to explore more.

 


Bibliography

 

Beffrey, P. and Spitzak, B. (2018). Nuke. London: The Foundry.

Maya. (2018). Autodesk, Inc.

Nikonusa.com. (2018). Focal Length | Understanding Camera Zoom & Lens Focal Length | Nikon from Nikon. [online] Available at: https://www.nikonusa.com/en/learn-and-explore/a/tips-and-techniques/understanding-focal-length.html [Accessed 1 Jun. 2018].

Pluralsight Creative (2018). NUKE Top Tip: Add Realistic Chromatic Aberration to Your CG Composites. Available at: https://www.youtube.com/watch?v=kvHIl5mI8O4 [Accessed 1 Jun. 2018].

Renée, V. (2018). The Lens Compression Myth: What’s Really Happening to Your Images When You Switch Focal Length. [online] No Film School. Available at: https://nofilmschool.com/2018/05/lens-compression-myth-whats-really-happening-your-images-when-you-switch-focal-length [Accessed 1 Jun. 2018].

 


 

Module 01: Rotoscoping

Introduction

Prior to this assignment and this course, I had no experience rotoscoping in Silhouette or Nuke. (The applications themselves were new to me.) Coming from a background consisting mostly of 2D animation, the term “rotoscoping” to me simply meant tracing the outline of a subject frame by frame, rather than animating by hand. The birth of rotoscoping, after all, happened with animation, when Fleischer Studios created cartoons with rotoscoped characters (Bratt 2018, p.1). I can see the relation between the different uses of the term, and found it rather interesting how animation techniques can be applied to rotoscoping mattes (more on this later). It was certainly a learning curve, but this assignment turned out to be quite straightforward, albeit time-consuming. There is nothing more to expect from a task as arduous as rotoscoping!

Shot Selection

The shot I selected is originally a 15-second shot which I trimmed down to about 80 frames.

Figure 01 – Original footage

I chose this shot because I felt it had adequate variety in motion, with slow versus fast movement and rotation, and it included both organic and inorganic objects. My task was to rotoscope the woman, as well as the virtual reality headset as a separate matte (including the inside edges). Having the headset as a separate matte also gives me more opportunities to manipulate the footage if I choose to composite it with other elements in the future. Seeing how it is a virtual reality headset, it could easily be made into a mock-up commercial by compositing in a virtual UI with motion graphics and the like.

Research

Artistic Influences

Speaking of commercials, when I looked into current virtual reality headset commercials, I found several examples with similar ideas. Some had lower production quality than others, which could be improved if a rotoscoped matte were provided. All of them involved some form of rotoscoping or keying in order to replace the background around the actors.

Figure 02 – VR Commercial example

Figure 02 (VR Headset Commercial, 2017) showcases a little bit of rotoscoping at 1:06, when the actress’ arm and a portion of her body goes over the VR interface. It also shows a lack of polishing at 1:50 in the transition of the headset fading away from the actress’ face. The cut is most obvious in the hair. A shot like this could benefit from having a separate matte of the headset, which could have been recreated in 3D, tracked and composited onto the actress to make the transition seamless.

Figure 03 – Windows Mixed Reality Commercial

Figure 03 (Introducing Windows Mixed Reality, 2017) demonstrates an idea similar to mine, by having the actual headset reveal what the viewer is watching, although they do not show the revealed environment in the same perspective as the viewer’s (which in my case, I would like it to be).

Figure 04 – Oculus Rift Commercial

Figure 04 (Oculus Rift|Step into Rift, 2017) shows a higher-end production commercial, evident by its more sophisticated effects. There are several shots where the actor was composited into a different environment, which would require some rotoing and keying. A project like this could also benefit from having a separate headset matte by simply giving the compositor more control over colour grading, lighting, and alterations.

My main inspiration, however, is from Iron Man. The interfaces in Iron Man are highly detailed and the user interacts with them. I particularly like the scene in Figure 05 (Iron Man 2 Amazing Interfaces & Holograms (Pt. 2 of 3), 2011), especially when the interface fully envelops Tony Stark at 3:07.

Figure 05 – Iron Man virtual UI scene

This is similar to what I envision for my footage. Since the actress in my shot is also doing hand-swiping motions, it gives great potential for compositing in interactive UIs like those in Iron Man.

Other Resources

My main resource for this project was Rotoscoping: Techniques & Tools for the Aspiring Artist (2018) by Benjamin Bratt. From this book I learned the concepts of rotoing and what common mistakes to avoid, such as using large, complicated shapes instead of breaking up a section into smaller, simpler shapes (2018, p.61). I also liked to compare Bratt’s techniques to those of animation, specifically ones covered in Richard Williams’ The Animator’s Survival Kit (2001).

One useful parallel I drew between the two is in the ways to approach keyframing. Of the two main keyframing techniques Bratt outlines, bifurcation and motion-based, I knew I preferred to work with motion-based keyframing, which is similar to Williams’ “pose-to-pose” animation method. Pose-to-pose animation means planning the extreme keys, adding in the breakdowns (transitional poses between extreme poses), and then finally the in-betweens (Williams 2001, p.62). Both authors describe the advantages of this method similarly: the movement is smoother and more predictable because the timing has a structure, which mimics real-time motion. Bratt also mentions that motion-based keyframing is a lot more efficient than bifurcation because it requires fewer keyframes and produces more natural movement (Bratt 2018, p. 91).


Figure 06 – Animator’s Survival Kit, Richard Williams, pg. 91

In Figure 06 (Williams, 2001, p. 91), Williams discusses keeping arcs in the in-between frames to create the most natural movement in animation. While this is about traditional animation, it can also be applied to rotoscoping. When keyframing extremes, if the motion paths of the points do not follow an arc, they are interpolated as straight, linear movements, which generally never happen in real-life situations. Even robotic objects are unable to move completely in a straight line due to environmental factors like gravity, friction, wind, weight, and momentum; the list can go on.

Another vital piece of information I found in Bratt’s book is that in Silhouette, the spline shape points are interpolated individually as vertices, and not as a whole shape, as in other software like After Effects (Bratt, 2018, p. 132). This affects the in-betweens that Silhouette calculates between set keyframes, and consequently the motion blur as well.


Figure 07 – Vertex/Point-based Interpolation


Figure 08 – Object-based Interpolation

Figures 07 and 08 show the difference between point interpolation and object interpolation. Silhouette calculates in-betweens using the former method. (I created these examples in Nuke for demonstration purposes.) I mostly animated my shapes “pose-to-pose”, so this information was useful to know. When there are multiple keyframes, and especially when there is rotation, the points’ motion paths interpolated between user-defined keyframes can distort the shape. In order to make the splines retain their general shape, additional keyframes must be added in between to create a smooth arc between keys.

Rotoscoping

Woman

I started my shot by rotoscoping the woman, the organic element of the shot. She generally moves quite slowly in the section I chose, with the only motion blur occurring when she raises her arm. Her hair, of course, was the most challenging bit. Her hair is not naturally straight, so while the ends are generally smooth, near the roots it has many small bumps that shift in perspective when she turns her head. Her face remains static, but it goes through the same perspective change.

Face

To my great advantage, in Chapter 14, Bratt covers how to rotoscope head rotations. The example he shows in his book has very similar movement to mine, so I imitated his shapes and got a good result because of it.


Figure 09 – Benjamin Bratt’s shapes for head turn. (2018, p. 244)


Figure 10 – My shapes for head turn

I used some tracking on the cheeks to aid me with the perspective change, and to catch the subtle wobbling movements of the head.


Figure 11 – Trackers for Cheek

At first I tried to use a 4-point tracker (using trackers Cheek1 to 4, shown in green in Figure 11) and apply it to my shape as a corner pin, but it resulted in too much warping. I then attempted to use only two of the trackers (Cheek1 and Cheek4) with the rotation matrix disabled, but then the tracking jittered. I finally averaged the top two trackers, Cheek1 and Cheek4, which resulted in a much smoother track with no jitter. I did the same with the bottom two trackers, Cheek2 and Cheek3, and so I had two new trackers, shown in yellow in Figure 11. I then applied these averaged trackers with rotation enabled to catch the wobbling of the jaw.
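
Averaging two trackers is simple enough to show as a sketch; per frame, the new tracker is just the midpoint of the pair (the function name and data layout are mine):

def average_tracks(track_a, track_b):
    # track_a, track_b: lists of (x, y) positions, one per frame.
    return [((ax + bx) / 2.0, (ay + by) / 2.0)
            for (ax, ay), (bx, by) in zip(track_a, track_b)]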

Hands, Sleeves, & Shirt

Rotoscoping the hand was a lot less difficult than I expected, most likely because her hand stays quite rigid during my chosen segment of the shot. I created shapes based on the joints of the fingers, as well as the different parts of the palm.


Figure 12 – Hand shapes

For the sleeves and shoulders, apart from a few wrinkles, these were not complicated shapes to rotoscope. I did, however, run into some trouble after importing the Silhouette shapes into Nuke. The motion blur from Silhouette was not transferring properly into Nuke and was causing some distortions in the alpha.


Figure 13 – Motion blur glitch in shoulder

I experimented with the motion blur settings and found that it was the right edge of the shape causing the distortion. When I set the shutter offset to “end” instead of “center”, it removed the distortion, but the motion blur then ate into the shape.


Figure 14 – Shutter Offset set to “end”

I instead had to keyframe the motion blur to turn off for the last few frames. I decided this was acceptable because she no longer moves enough to cause motion blur for the end segment of this clip.

Hair

The following are Bratt’s rules for rotoscoping hair (2018, p. 231).

  • break down hair shapes to “base” and “standout” shapes first;

This means rotoscoping the more solid parts of the hair first, as well as any loose strands that are clearly visible in the original footage.

  • if a hair disappears and reappears in the footage, it is best to keep it in the roto matte to have accurate motion blur;

It is also important to note that, to the human eye, shapes flickering in and out of existence are much easier to spot than a slight hue change within a shape.

  • in addition to keeping the shapes in the matte, it is more important to keep the positions of shapes accurate to produce the correct angle and amount of motion blur

Hair strands will require sub-object manipulation given their flimsy, malleable structure. The difficult part is to keep the points’ position as consistent as possible on the hair strand.

  • lastly, use blur sparingly!

In all honesty, I did not follow this rule, particularly because my footage is quite blurry to begin with. It has a shallow depth of field, so while the face and headset are in focus, the shoulders, the hand, and certain bits of the hair are not. I applied a blur of 0.5-1 pixels to all of these shapes to keep the alpha edge uniform but still respect the defocus of the original footage.

Beginning with the top of the head, I used a lot of mocha tracking for that portion of the hair. After some successes and failures, I came to the conclusion that mocha trackers work very well on highly textured surfaces. It worked wonders on the hair, for example, but did not track as accurately on skin. All of the shapes I created for the hair had tracking applied to them. I also extracted tracking data from a hair shape that I manually animated, by creating corner pin trackers from the object (Figure 15).


Figure 15 – Menu for extracting tracking data

For the individual hair strands, I used open spline shapes with a stroke applied to them. I discovered I could individually alter the width at each point and break up the uniformity of the stroke (Figure 16). This was a great help in creating more natural-looking strokes that mimicked the way hair strands behave.


Figure 16 – Sub-object stroke width manipulation

Importing open spline shapes into Nuke proved to be very problematic. Below is what my footage looked like in Nuke:


Figure 17 – Alpha matte in Nuke after importing SFX shapes

It turns out that when the points of the spline shape are too close together, the tangents become tangled, and that is what causes the matte to glitch as in Figure 17. All that needed to be done was to nudge the points further apart, and the tangents became straight again. It also helped to change the smoothing of the points in Nuke to “cusp-de-smooth”, which set the tangents perpendicular to the stroke.

 


Figure 18 – Before and after nudging points on open spline

Headset

Rotoscoping the headset was a different challenge, of course, being an inorganic object. I preferred to use bezier shapes over B-splines because it was much easier to control straight lines and tight corners. They did leave more room for error, however, in the form of sliding points. The following diagram from the Silhouette manual shows the differences between the shape types:


Figure 19 – Types of shapes in Sihouette

As shown in Figure 19, bezier shapes have more individual controls, but the tangents also have to be animated, which can add another level of sliding and jitter. B-Splines are smoother but require more points for sharp corners and are difficult to handle at these extreme angles. X-Splines are a third type of shape which I did not try; they can combine all types of points: cardinal, corner, and B-spline (Silhouette, p. 303). They could certainly be useful in the future.

Visor

I decided to rotoscope the visor (flat part of the eye mask) on its own in case I wanted to composite something inside the headset. Keeping the edges consistent and “glued” to the object was the most difficult part of rotoscoping this element. Tracking did not help, but instead made it hard to keep the shape still when inaccurate tracking data was applied to the object. Therefore, I manually animated the shape after stabilizing the footage.

Headset parts: side pieces, bands, straps, & wires

The side pieces of the headset worked better with a two-point track, similar to the way I rotoscoped the face. Sometimes the lighting was too poor and made it difficult to decipher how the individual parts of the visor were connected. I did some image searching and realized that the model in this footage is the Oculus Rift (with the logo of the headset removed, interestingly enough). Once I found several reference photos, it became much easier to keep the anatomy of the object consistent.

 

Figure 20 – Oculus Rift reference photos

The earpiece tracked perfectly with mocha tracker, after which only two manual keyframes were necessary. I assume this is also because the ear piece has small holes which create a highly contrasted texture.

The band that wraps around the head was more difficult because it is basically cylindrical, following the circumference of a human head’s shape. The sides also get hidden as the woman turns her head, and because I was creating a separate matte for the headset, I could not rely on the matte of the woman to hide the inner portions of the shapes. I used the same two-point trackers as the side bands even though I knew the results would not have accurate foreshortening, because they are on the same plane in 3D space and I wanted to capture the subtle rotations of the head. It also helps to have the same trackers applied to pieces that are attached together so that they move uniformly.

For the tip of the strap at the top of the head, I thought that extracting the tracking data from the visor shape and applying it would be sufficient. I soon realized, however, that although they face the same direction, the foreshortening of the visor was on the x-axis (horizontal), while the foreshortening of the strap was on the z-axis (front to back). It therefore required me to extract tracking information from one of the side bands to get the right foreshortening matrix. The downside to using manually animated objects as trackers, though, is that the trackers can end up with bumpy motion paths, because the animation may not have been smooth in the first place. To smooth the motion path, I deleted every other keyframe in the layer’s matrix in the sections that needed it. This left more of the interpolation to the computer, producing a smoother path between keys.

Conclusion

All in all, it was a learning experience, of course. After I gained some momentum, I established my general workflow for creating shapes and animating them. First I start by tracking the shape if possible, as that speeds up the process immensely when successful. It was clear that tracking is best for slow movements, because the motion blur in fast movements makes the trackers go askew. I then animate the shape pose-to-pose, or by motion-based keyframing, then run through the same section again to add in-betweens. If I used tracking, I would sometimes have to animate against it to compensate for tracking discrepancies. Finally, I would go frame by frame to check edge consistency, and animate in sub-object mode if necessary.

For my next rotoscoping shot, I would like to explore more of Silhouette’s tools. I felt I relied only on the basic tools, so I could perhaps work more efficiently with the others, such as X-Splines and IK. I’d also like to try footage with faster action, as this one was quite slow.

My next goal for this shot is to improve the edge at the top of the head. The hair here was quite fuzzy, so I know using smaller shapes would work in my favour. There are also quite a few more hair strands to be added, which would add more detail to the matte. I look forward to finishing this matte and being able to use it for compositing this shot.

Final Footage

Figure 21 – Playlist of all passes: Original, Overlay, Grey, and Matte

Bibliography

Beffrey, P. and Spitzak, B. (2018). Nuke. London: The Foundry.

BrashVideo (2017). VR Headset Commercial. Available at: https://youtu.be/CLduxNedAOs [Accessed 18 Apr. 2018].

Bratt, B. (2018). Rotoscoping: Techniques & Tools for the Aspiring Artist. Boca Raton: CRC Press.

Kivolowitz, P., Miller, P., Moyer, P. and Paolini, M. (2018). Silhouette FX. Silhouette FX.

Oculus Rift (2017). Oculus Rift|Step into Rift. Available at: https://youtu.be/5q6BcQq_yhw [Accessed 18 Apr. 2018].

Prop-Art Studios (2011). Iron Man 2 Amazing Interfaces & Holograms (Pt. 2 of 3). Available at: https://youtu.be/P5k-4-OEuTk [Accessed 18 Apr. 2018].

Silhouette 5 User Guide. (2018). [ebook] Available at: https://www.silhouettefx.com/ [Accessed 19 Apr. 2018].

Williams, R. (2009). The Animator’s Survival Kit. New York: Faber & Faber.

Windows (2017). Windows Mixed Reality. Available at: https://youtu.be/0AWhsBNU1jU [Accessed 18 Apr. 2018].
