Kay John Yim is busy with his daily work as an architect, but driven by his passion for CG art, he has taught himself more than 30 CG software packages and plug-ins in just 2-3 years and created many fantastic CG works in his spare time. His artworks are rich in detail - magnificent, delicate, and full of romantic imagination.
John's recent works © Kay John Yim
Kay John Yim
Chartered Architect & CGI Artist
John grew up in Hong Kong and graduated from the University of Bath (UK) with a Bachelor of Science in Architectural Studies, and was an exchange student in Architecture at Delft University of Technology (Netherlands). After graduation, he continued his architectural studies at the Architectural Association School of Architecture. He is currently an architect at Spink Partners, a well-known British architectural design firm.
Kay John Yim’s personal site: https://johnyim.com/
ArtStation: https://www.artstation.com/johnyim
This making-of tutorial for "Ballerina" was written by Kay John Yim for Fox Renderfarm, a leading cloud rendering service provider and GPU & CPU render farm:
Project "Ballerina" is a 30-second full CG animation, my first personal project to feature an animated photorealistic CG character staged within a grand Baroque rotunda lounge.
Ballerina © Kay John Yim
The animation is a representation of my inner struggles in all artistic pursuits, both metaphorically and literally.
Ballet, an art form widely known for its stringent standards of beauty and high susceptibility to public and self-criticism, is a metaphor for my daily professional and artistic practice. As an architect by day, I work on architectural visualizations, where every detail is scrutinized by my colleagues, senior architects and clients. As an artist by night, I work on personal CG projects, on which I do hundreds or even thousands of iterations to get the perfect compositions and color schemes. No matter how proficient I become in my professional and artistic skills, the inner struggle never fades away.
Ballerina © Kay John Yim
The project is also literally a technical struggle - every step of the CG character creation process was alien to me. When I started working on the project, I struggled to find a comprehensive guide for creating photorealistic character animation - almost every article or tutorial I came across was either too specialized or too impractical for an indie CG artist to follow.
Through months of trial and error, I have since learned a lot about efficient character animation and rendering. This article is an intermediate guide for any indie artists like myself who want to take their CG art to the next level. As much as I would love to tailor the guide to everyone, it is practically impossible to cover the nuts and bolts of every piece of software I use, so I have included links to tutorials or resources wherever possible for beginners to follow along.
Ballerina © Kay John Yim
The guide is divided into 4 main parts:
- The Architecture
- The Character
- The Animation
- Rendering
The software I used includes:
- Rhino
- Moment of Inspiration 4 (MOI)
- Cinema4D (C4D)
- Redshift (RS)
- Character Creator 3 (CC3)
- iClone
- ZBrush & ZWrap
- XNormal
- Marvelous Designer 11 (MD)
- Houdini
1. THE ARCHITECTURE
My primary software for architectural modeling is Rhino.
There are many different ways to approach architectural modeling. Having used dozens of CAD and DCC applications as an architect, I find Rhino to be arguably the best architectural modeling software for its accuracy and versatility. Rhino's main advantage over more popular DCCs like Cinema4D (C4D) or Houdini is its ability to handle very detailed curves in large quantities.
As an architect, every model I build starts with a curve - usually a wall, cornice or skirting section - swept along another curve taken from a plan. Rhino's command list might seem overwhelming at first, but I almost exclusively used a dozen of them to turn curves into 3D geometry (see the sketch after this list):
- Rebuild
- Trim
- Blend
- Sweep
- Extrude
- Sweep 2 Rails
- Flow Along Surface
- Surface from Network of Curves
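To make the curve-to-geometry idea concrete, here is a minimal rhinoscriptsyntax (Python) sketch that sweeps a cornice profile along a plan curve - the selection prompts are placeholders, and this is just one way of scripting what the Sweep command does interactively:

```python
import rhinoscriptsyntax as rs

# Pick a plan curve (the rail) and a cornice/skirting profile (the cross-section)
rail = rs.GetObject("Select plan curve to sweep along", rs.filter.curve)
profile = rs.GetObject("Select cornice profile curve", rs.filter.curve)

if rail and profile:
    # Equivalent to running Rhino's Sweep1 command interactively
    swept = rs.AddSweep1(rail, [profile])
    if swept:
        print("Created {} swept surface(s)".format(len(swept)))
```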
The key to architectural modeling is to use reference wherever possible. I always have PureRef open in the bottom-right corner of my screen to make sure I model in correct proportions and scale; the reference usually includes actual photos and architectural drawings.
For this particular project I used the Amalienburg hunting lodge in Munich as my primary reference for the architecture.
PureRef board for the project
I downloaded as many high-res references as possible, which included photos of different camera angles, different lighting and weather conditions. This gave me a wide range of details to work with, as well as a general idea of the space relative to human scale.
While the architecture consisted of 3 parts - the rotunda, the hallway and the end wall - they were essentially the same module. Hence I initially modeled one wall module consisting of a mirror and a window, then duplicated it and bent it along a circle to get the walls of the rotunda.
Rhino modeling always begins with curves
wall module duplicated and bent along a curve
The module was reused for both the hallway and the end wall to save time and (rendering) memory.
Having built up a library of architectural profiles and ornaments over the past year, I was able to reuse and recycle many of them for the modeling of the architecture.
Ornament modeling can be a daunting task, but with a couple of ornaments modeled I simply duplicated and rearranged them geometrically to get unique shapes.
Rhino ornament placement
The ceiling ornament, for instance, was basically a single ornament that covered 1/8 of the dome surface, radially duplicated 8 times to cover the entire ceiling. The same technique also applied to the modeling of the chandelier.
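To give an idea of how little manual work the radial duplication takes, below is a hedged rhinoscriptsyntax sketch that copies a 1/8 dome ornament around a vertical axis - the centre point is an assumption; in practice I would snap to the actual dome centre:

```python
import rhinoscriptsyntax as rs

ornament = rs.GetObject("Select the 1/8 ceiling ornament")
centre = [0, 0, 0]  # assumed dome centre; use the real rotunda axis in practice

# 7 rotated copies + the original = 8 segments covering the full dome
for i in range(1, 8):
    rs.RotateObject(ornament, centre, 45.0 * i, axis=[0, 0, 1], copy=True)
```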
All the objects within Rhino were then assigned to different layers by material; this made material assignment a lot easier later on in C4D.
assigning objects to layers by material
Notes:
The best way to get familiar with Rhino navigation is to model small-scale objects. Simply Rhino has a great beginner's series in modeling a teapot in Rhino:
I have posted a few WIP montages on my YouTube channel; while not meant to be tutorials, they should give an overview of my modeling process: https://www.youtube.com/c/jyjohnyim
A detailed Rhino tutorial for modeling ornaments:
For anyone in a pinch, there are pre-built ornaments for purchase on 3D model stores like Textures.com; some ornament manufacturers have free models available for download on Sketchfab and 3dsky.
Exporting from Rhino to C4D
After 4 days of architectural modeling, the Rhino model eventually consisted of 50% NURBS and 50% mesh. I used NURBS mostly for the primary architectural elements (walls, cornices, skirtings) and mesh for the ornaments.
Rhino is primarily a NURBS (Non-Uniform Rational B-Spline) modeler; and although NURBS models are very accurate in representing curve and surface data, most render engines and DCCs do not support NURBS.
For this reason I exported the NURBS and meshes to .3dm and .FBX respectively, and used Moment of Inspiration (MOI) to convert the NURBS model to a mesh.
MOI has the best NURBS-to-quad-mesh conversion (better than Rhino or any other DCC) - it always gives a clean mesh that can then be easily edited or UV-mapped for rendering.
exporting from MOI
Importing into C4D
Importing the FBX file into C4D was relatively straightforward, but there were a couple of things I paid attention to, notably the import settings, the model orientation and file unit, listed below in order of operation:
- open up a new project in C4D (project unit in cm);
- merge FBX;
- check "Geometry" and "Material" in the merge panel;
- rotate the imported geometry's pitch (P) by -90 degrees to convert Rhino's Z-up orientation to C4D's Y-up;
- use script "AT Group All Materials" to automatically organize Rhino materials into different groups.
importing FBX exported from MOI
importing FBX exported directly from Rhino
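For anyone who prefers scripting the import, the C4D Python sketch below reproduces the same steps - merging the FBX with geometry and materials, then rotating the merged root by -90 degrees pitch. The file path and the assumption that the merged group is the topmost object in the Object Manager are placeholders:

```python
import c4d
from c4d import utils

doc = c4d.documents.GetActiveDocument()

# Merge the FBX exported from MOI/Rhino, keeping geometry and materials
fbx_path = r"C:\projects\ballerina\rotunda.fbx"  # placeholder path
c4d.documents.MergeDocument(
    doc, fbx_path,
    c4d.SCENEFILTER_OBJECTS | c4d.SCENEFILTER_MATERIALS)

# Rhino is Z-up, C4D is Y-up: rotate the merged root by -90 degrees pitch (P)
root = doc.GetFirstObject()          # assumes the merged group lands at the top
if root is not None:
    rot = root.GetRelRot()           # rotation vector stores (H, P, B)
    rot.y = utils.DegToRad(-90.0)    # .y is the P (pitch) component
    root.SetRelRot(rot)

c4d.EventAdd()
```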
I modeled half of the architecture in Rhino and then mirrored it as an instance in C4D, since everything is symmetrical.
C4D instance & mirroring
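The mirroring itself can also be scripted; below is a minimal sketch of one way to do it - an Instance object linked to the half-architecture group and flipped on X - though the group name is assumed and this is not necessarily the exact setup I used:

```python
import c4d

doc = c4d.documents.GetActiveDocument()

half = doc.SearchObject("Architecture_Half")  # assumed name of the half-architecture group

if half is not None:
    # Instancing (rather than copying) keeps memory usage low for rendering;
    # the negative X scale mirrors the instance across the symmetry plane
    inst = c4d.BaseObject(c4d.Oinstance)
    inst.SetName("Architecture_Mirrored")
    inst[c4d.INSTANCEOBJECT_LINK] = half
    inst.SetRelScale(c4d.Vector(-1.0, 1.0, 1.0))
    doc.InsertObject(inst)

c4d.EventAdd()
```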
The floor (Versailles parquet tiles) was modeled using the photo-texturing method most widely touted by CG artist Ian Hubert. I applied a Versailles parquet tile photo as a texture on a plane, then sliced up the plane with the Knife tool to get reflection roughness variations along the tile grouts. This allowed me to add subtle color and dirt variations with a Curvature node in Redshift.
The floor tile was then placed under a Cloner to be duplicated and spread over the entire floor.
Cloning floor tiles
Notes:
C4D and Rhino use different Y and Z orientations, hence an FBX exported directly from Rhino has to be rotated in C4D.
Download link for "AT Group all materials" script: http://www.architwister.com/portfolio/c4d-script-group-materials/
Ian Hubert's Youtube Channel has a lot of useful and efficient CG techniques, photo-texturing being one of the most popular:
https://www.youtube.com/c/mrdodobird/videos
Architectural Shading (Cinema4D + Redshift)
Since I had grouped all the meshes by material in advance, assigning materials was as simple as dragging and dropping materials onto the groups, mapped as cubic or tri-planar projections.
I used Textures.com, Greyscalegorilla's EMC material pack and Quixel Megascans as base materials for all my shaders.
For ACES to work correctly within Redshift, every texture has to be manually assigned to the correct color space in the RS Texture Node; generally diffuse/albedo maps belong to "sRGB", and the rest (roughness, displacement, normal maps) belong to "Raw".
My architectural shaders were mostly a 50/50 mix of photo texture and "dirt" texture to give an extra hint of realism.
RS Shader Graph of the wall material
2. THE CHARACTER
The base character was created in Character Creator 3 (CC3) with the Ultimate Morphs and SkinGen plugins - both of which are very artist-friendly, with self-explanatory parameters.
Ultimate Morphs provided precise slider controls over every bone and muscle size of the character, while SkinGen offered a wide range of presets for skin color, skin texture detail and makeup.
I also used CC3's Hair Builder to apply a game-ready hair mesh to my character.
CC3 morphing & Hair Builder
Face Texturing
The face was one of the most important parts of the CG character and required extra attention. The best workflow I found for adding photorealistic detail was the "Killer workflow" using Texturing XYZ's VFACE model and ZWrap.
VFACE is a collection of state-of-the-art photogrammetry human head models produced by Texturing XYZ; every VFACE comes with 16K photoscanned skin textures, displacement and utility maps. ZWrap is a ZBrush plugin that automatically fits a pre-existing topology to a custom model.
The "Killer workflow" essentially matches the VFACE mesh shape to the CC3 head model; once the two mesh shapes were matched up, I was able to bake all the VFACE details down to the CC3 head model.
My adaptation of the "Killer workflow" can be broken down as follows:
- export T-posed character from CC3 to C4D;
- delete all polygons except the head of the CC3 character;
- export both CC3 head model and VFACE model to ZBrush;
- use the Move/Smooth brushes to maneuver the VFACE model to fit as closely as possible to the CC3 head model;
- launch ZWrap, then click and match as many points as possible, notably around the nose, eyes, mouth and ears;
- let ZWrap process the matched-up points;
- ZWrap should then output a VFACE model that matches the CC3 head model perfectly;
- feed both models into XNormal and bake the VFACE textures to the CC3 head model.
matching points of VFACE (left) & CC3 HEADS (right) in ZWRAP
Notes:
Full "Killer Workflow" Tutorial on Textureing.XYZ's official Youtube channel:
I recommend save the matching points in ZWRAP before processing.
I also recommend baking all the VFACE maps individually in XNormal as they are very high-res and could crash XNormal when baked in batch.
Skin Shading (Cinema4D + Redshift)
Once I had the XYZ texture maps ready, I then exported the rest of the character texture maps from CC3.
After that, I imported the character into C4D, and converted all the materials to Redshift materials.
At the time of writing, Redshift unfortunately did not yet support random walk SSS (a very realistic and physically accurate subsurface scattering model found in other renderers like Arnold), so skin required a lot more tweaking to render convincingly.
The 3 levels of subsurface scattering were driven by a single diffuse material with different "Color Correct" settings.
RS Shader Graph of "Leg" material
The head shader was a mix of both the CC3 textures and the VFACE textures; the VFACE multichannel displacement was blended with the "microskin" CC3 displacement map.
RS Shader Graph of "Head" material
Character look-dev
close-up render of the character
A "Redshift Object" tag was applied to the character to enable displacement - only then would the VFACE displacements show up in the render.
Note:
Skin shading is one of the most advanced aspects of rendering. Linked below is one of the most helpful tutorials for Redshift skin shading:
Hair Shading
Having experimented with grooming using C4D Ornatrix, Maya Xgen and Houdini, I decided that using the baked hair mesh from CC3 for project "Ballerina" was leaps and bounds more efficient down the line.
I used a Redshift "glass" material with the CC3 hair texture maps fed into the "reflection" and "refraction" color slots, as hair (in real life) reacts to light like tiny glass tubes.
Note:
For anyone interested in taking the CC3 hair to the next level of realism, CGcircuit has a great vellum tutorial dedicated to hair generation and simulation.
early test of CC3 mesh hair to hair geometry conversion in Houdini
3. THE ANIMATION
Character Animation (iClone)
I then exported the CC3 Character to iClone for animation.
I considered a couple of ways to approach realistic character animation, including:
- using off-the-shelf mocap data (Mixamo, Reallusion Actorcore);
- commissioning a mocap studio to do bespoke mocap animation;
- using a mocap suit (e.g. Rokoko or Xsens) for custom mocap animation;
- old-school keyframing.
Having experimented with various off-the-shelf mocap data, I found Mixamo mocaps to be way too generic - most of them looked very robotic; Reallusion ActorCore had some very realistic motions, but I could not find exactly what I needed for the project.
With no budget and (my) very specific character motion requirements, options 2 and 3 were out of the picture. This left me with old-school keyframing.
First I screen-captured videos of ballet performances and laid them out frame by frame in PureRef. I then overlaid the PureRef reference (in half opacity) over iClone, and adjusted every character joint to match my reference using “Edit Motion Layer”.
Pose 1
Pose 2
final character animation
The animated characters were then exported to Alembic files.
NOTE:
While my final project concept depicted ballerinas in slow motion, my original idea was actually to keyframe a 20-second ballet dance, which I very quickly realized was a bad idea for a number of reasons:
- in slow motion a lot of frames can be interpolated, but real-time motion involves a lot of unique frames and hence requires a lot more tweaking;
- subsequently more unique frames meant more rendering problems (flickering, tessellation issues etc.).
early test render of my original idea
Considering this was my first character animation project, I came to the conclusion that a slow-motion-style sequence was the better choice - 2 unique poses with 160 frames of motion each.
Garment Simulation
Cloth simulation was by far the most challenging part of the project.
The two major cloth simulation/solvers that I considered were Marvelous Designer (MD) and Houdini Vellum.
While Houdini Vellum was much more versatile and more reliable than Marvelous Designer, I personally found it to be way too slow and therefore impractical without a farm (one frame of cloth simulation could take up to 3 minutes in Houdini Vellum vs. 30 seconds in Marvelous Designer on a Threadripper PRO 3955WX with 128 GB of RAM).
Cloth simulation in MD, while generally a lot quicker to set up than Houdini Vellum, was not as straightforward as I imagined.
Simulated garments in MD always came with some form of glitch - cloth jittering, piercing through the character, or just complete dislocations. Below are some of the settings I tweaked to minimize glitches:
- using "Tack" to attach parts of the garment to the character;
- increasing cloth "Density" and "Air Damping" to prevent garment from moving too fast and subsequently move out of place;
- simulate parts of the garment in isolation - though not physically accurate, allowed me to iterate and debug a lot quicker.
I also reduced "Gravity" in addition to the above tweaks to achieve a slow-motion look.
MD Simulation Settings
MD simulation
Note:
Due to the license agreement of a sewing pattern I used, I am not able to share screenshots of my garment creation process. However the official Marvelous Designer Youtube channel has a lot of garment modeling livestreams which I find to be the most helpful resource for learning MD:
Alternatively, there are a lot of readily available 3D garments online (notably on Marvelous Designer's official site and the ArtStation Marketplace), which I have used as a basis for a lot of my projects.
MD is extremely prone to crashing, and there is a bug in both MD10 and MD11 that prevents saving simulated garments 90% of the time, so always export the simulated garment as an Alembic file rather than relying on MD to save the simulation.
Simulation Clean-up
After dozens of simulations, I imported the MD-exported Alembic files into Houdini, where I did a lot of manual cleanup (see the sketch below), including:
- manually fixing collided cloth and character with "Soft Transform";
- reducing simulation glitches with "Attribute Blur";
- blending together preferable simulations from different alembic files with "Time Blend".
cleaning up simulated cloth in Houdini with "Soft Transform"
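As a rough sketch of that node chain - assuming Houdini's Python API and a placeholder Alembic path - the cleanup network can be wired up like this, with the Soft Transform's soft radius and translation still dialed in by hand per problem area:

```python
import hou

# Build a cleanup network inside a fresh Geometry object
geo = hou.node("/obj").createNode("geo", "cloth_cleanup")

abc = geo.createNode("alembic", "md_sim")
abc.parm("fileName").set("$HIP/sim/ballerina_dress.abc")  # placeholder file name

# Attribute Blur: smooth out point jitter from the MD simulation
blur = geo.createNode("attribblur", "reduce_jitter")
blur.setFirstInput(abc)
blur.parm("iterations").set(5)

# Soft Transform: manually nudge intersecting cloth away from the body
fix = geo.createNode("softxform", "fix_intersections")
fix.setFirstInput(blur)

# Time Blend: interpolate / blend between cached frames
tb = geo.createNode("timeblend", "blend_frames")
tb.setFirstInput(fix)

tb.setDisplayFlag(True)
tb.setRenderFlag(True)
geo.layoutChildren()
```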
There are two tutorials that explain the Houdini cloth cleanup process in great detail, which I watched on a loop while working on the project:
Cloth Production in Houdini: https://www.cgcircuit.com/tutorial/houdini-cloth-in-production
Houdini Vellum Creature Setup: https://www.cgcircuit.com/tutorial/houdini-vellum-creature-setup
The cleaned-up cloth simulation was then exported as Alembic to C4D.
Alternative to Garment Simulation
For anyone frustrated by the impractical Houdini Vellum cloth simulation times and MD glitches, an alternative would be to literally attach the garment to the character's skin in CC3 - a technique most commonly found in game production.
attaching garment to character in CC3
While this is a great time-saving alternative, garments created in CC3 lack realistic cloth movement and wrinkles; I recommend using this method only for objects tightly attached to the character (shoes), or as a last resort for garments if MD cloth simulation keeps failing.
Note:
Linked below is Reallusion's official guide for creating game-ready garments: https://manual.reallusion.com/Character_Creator_3/ENU/3/Content/Character_Creator_3/3/08_Cloth/Creating_Custom_Clothes_OBJ.htm
Garment Baking and Shading
Once I was done with cloth simulation in MD and clean-up in Houdini, I imported the Alembic file into C4D.
MD Alembic files always show up in C4D as a single Alembic object without any selection sets, which makes material assignment impossible.
This is where C4D baking came into play - a process I used to convert the Alembic file into a C4D object with PLA (Point Level Animation):
- drag the alembic object into C4D timeline;
- go to "Functions";
- "Bake Objects";
- check "PLA";
- then bake.
Going through the steps above, I got a baked-down C4D object on which I could easily select polygons and assign multiple materials using selection sets.
I then exported an OBJ file from MD with materials, imported it into C4D, and dragged the selection sets directly onto the baked-down garment object. This eliminated the need to manually reassign materials in C4D.
I used a blend of linen texture maps (from Quixel Megascans Bridge) and the Redshift Car Shader to emulate the sequin fabric (think "blink") found in a lot of professional ballet tutu dresses.
close-up render of the fabric material
Note: YouTuber Travis Davis has a tutorial demonstrating the exact procedure:
WARNING: do not use AO or Curvature nodes for the simulated garment materials (or any animated object), as they could potentially produce glitches in final renders.
4. RENDERING
Lighting & Environment
Although I tried to keep my lighting as minimal as possible, project "Ballerina" inevitably required a lot of tinkering due to the nighttime setting.
The nighttime HDRI did not provide sufficient ambient light for the interior space, and the chandelier bulbs were way too dim to act as the primary light source. Ultimately I placed an invisible spot light under the center chandelier and used a fake light that affected only the architectural ornaments. The fake light provided an extra level of bounce light that gave just the right amount of illumination without ruining the moody atmosphere.
I also added a "Redshift Environment", controlled along the Z axis and multiplied with "Maxon Noise", to give more depth to the scene.
Exterior-wise, I scattered 2 variations of dogwood trees with the C4D "Matrix" object in the surrounding area. They were lit from the ground up to give the scene extra depth.
In summary, the lighting of the scene included:
- Dome light (nighttime HDRI) x 1
- chandelier (mesh lights) x 3
- Spot Light (center) x 1
- exterior Area Lights x 4
- fake Area Light positioned under the chandelier (affecting the architectural ornaments only) x 1
RS lights
Notes:
Redshift has a very good tutorial on Youtube on controlling the Redshift Environment:
The trees were generated with SpeedTree.
Lighting takes a lot of consistent practice to master; apart from my daily CG practice, I spent a lot of time watching b-rolls/breakdowns of movies - for instance I took a lot of inspiration from Roger Deakins's lighting and cinematography, as well as Wes Anderson's frame composition and color combinations.
Camera Movements
All my camera movements were very subtle. These included dolly, camera-roll and panning shots, all driven with Greyscalegorilla's C4D plugin Signal.
I personally prefer using Signal for its non-destructive nature, but old-school key-framing would work just fine for similar camera movements.
Signal Graph
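For anyone without Signal, the same kind of subtle move can be keyframed with a few lines of C4D Python; the camera name, frame range and 40 cm of travel below are assumptions for illustration, not the project's actual values:

```python
import c4d

doc = c4d.documents.GetActiveDocument()
cam = doc.SearchObject("RenderCam")  # assumed camera name
fps = doc.GetFps()

if cam is not None:
    # Keyframe the camera's local Z position (a simple dolly move)
    desc_id = c4d.DescID(
        c4d.DescLevel(c4d.ID_BASEOBJECT_REL_POSITION, c4d.DTYPE_VECTOR, 0),
        c4d.DescLevel(c4d.VECTOR_Z, c4d.DTYPE_REAL, 0))

    track = cam.FindCTrack(desc_id)
    if track is None:
        track = c4d.CTrack(cam, desc_id)
        cam.InsertTrackSorted(track)
    curve = track.GetCurve()

    def add_key(frame, value):
        key = curve.AddKey(c4d.BaseTime(frame, fps))["key"]
        key.SetValue(curve, value)

    add_key(0, 0.0)     # start of the shot
    add_key(160, 40.0)  # 40 cm push-in over 160 frames (assumed values)

    c4d.EventAdd()
```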
Draft Renders
Once I had the character animations, cloth simulations and camera movements ready, I began to do low-res test renders to make sure that I would not get any surprises during the final renders, this included:
- flipbook (OpenGL) renders to ensure the timing of the animations was optimal;
- low-res low-sample full sequence renders to ensure there were no glitches;
- full-res (2K) high-sample still renders with AOVs (diffuse, reflection, refraction, volume) to check what contributed to any prevalent noise;
- submitting test renders to Fox Renderfarm to ensure the final renders matched my local renders.
This process lasted over 2 months, with iteration after iteration of renders and corrections.
close-up shot I
close-up shot II
final shot
Final Renders & Denoising
I used a relatively high-sample render setting for the final renders, as interior scenes in Redshift were generally prone to noise.
RS final render settings
I also had motion blur and bokeh turned on for the final renders - in general, motion blur and bokeh look better (more physically accurate) rendered in-camera than added in compositing.
Half of the final 2K sequence was rendered on a local workstation, while the rest was rendered on Fox Renderfarm, totalling about 6840 hours of render time on dual RTX 3090 machines.
I used Neat Video to denoise the final shot, whereas the closeup shots were denoised using Altus Single (in Redshift).
Note:
Always turn “Random Noise Pattern” off under Redshift “Unified Sampling” when using “Altus Single” for denoising.
Redshift Rendering GI Trick
Redshift's GI Irradiance Cache calculation can be quite costly; my final renders, for instance, averaged 5 minutes of GI Irradiance Cache calculation per frame.
V-Ray has an option in its IR/LC settings named "use camera path", designed specifically for shots where the camera moves through a still scene. With "use camera path" enabled, V-Ray calculates only one frame of GI cache for the entire sequence.
There is a Redshift Forum post written by Andrian that explains how he was able to replicate the same function in Redshift.
Borrowing a page from V-Ray, I used the following motion blur settings to calculate the first frame of the Irradiance Cache:
RS rendering GI trick motion blur setting
That single Irradiance Cache was then used to render the entire sequence. Two shots of the project were rendered using one single GI cache, resulting in roughly 10% faster render times overall.
NOTE:
The GI trick only applies to shots with very little motion; when applied to the 2 closeup shots of project "Ballerina" for example, I got light patches and ghosting on the character skin.
Conclusion
Having spent months working on the project, I have gained an appreciation for traditional character animators - I never realized the amount of effort involved in crafting character animations, and the subtlety of detail required to bring convincing CG characters to life.
Though I would not consider myself a character artist, I personally think character animation is really powerful in making CG environments relatable, and it will therefore remain an essential part of my personal CG pursuits moving forward.