Mason Smigel

This post will outline the process I used to build a post-rig deformation system for better shaping of characters.

The goal of this system is to provide greater control over the shape and silhouette of a character after the main pose. I first saw a similar system, called the "Contour Cage," in a paper written at DreamWorks, and I wanted to try to create something similar within Maya.



GOALS

Deformations like this could be achieved fairly easily by creating a local rig and piping the deformation into the render mesh via a blendshape. However, a solution like that is rather expensive and unable to take advantage of Maya's parallel evaluation.

With that in mind, I had a couple of main goals when developing this system:

1. The setup itself should be fast and fully optimized for parallel evaluation

2. To speed up creation and maintain editability, the only input should be a poly mesh. We can build the controls and skinning from this!

3. The system should be flexible enough to add onto most rigs



Overview

To give a rough overview of how the system works:

  1. First, model an input mesh. We can skin this mesh and use it to drive a series of controls and bind joints.

  2. The bind joints are skinned to a duplicate of the mesh. Using the bindPreMatrix attribute of the skinCluster, the skin deforms based on the offset between the bpm (bindPreMatrix) joint and the bind joint.

  3. Finally, this skinCluster can be stacked on top of the existing skinCluster to produce the final deformation.


Data flow overview


The Input Mesh

The control points for the rig are built from an input mesh called the cage mesh. This is a manually modeled mesh that fits the shape of the character, with resolution concentrated around the areas you want to control. Of course, there is a trade-off between the number of vertices and the speed of the rig later on, so it's helpful to keep this mesh very low-resolution. I found that using n-gons can be a great trick for adding a bit more resolution where you need it (similar to the profile curves used at Pixar).


Deform cage geometry for Daigoro


Later on, we will use the cage to create and drive the control points for the rig, but to make the system more intuitive it would be helpful to have them follow along with the rig. For this, we can skin the deformCage to the bind joints of the main rig. An important note here: each vertex should maintain a maximum of two influences. This will be important later when we drive the controls from the cage.
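Here's a minimal sketch of how that bind might be scripted in Maya Python (the mesh and joint names are hypothetical placeholders):

```python
from maya import cmds

CAGE = "deformCage_geo"                        # the low-poly cage mesh (assumed name)
BIND_JOINTS = cmds.ls("bind_*", type="joint")  # main rig bind joints (assumed naming)

# Skin the cage to the main rig, capping every vertex at two influences
# so each control point can later be driven by a simple two-target blend.
cage_skin = cmds.skinCluster(
    BIND_JOINTS, CAGE,
    maximumInfluences=2,
    obeyMaxInfluences=True,
    toSelectedBones=True,
)[0]
```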




The Control Cage

The control cage will be used to drive the final deformation. Each control point has a small hierarchy consisting of an offset and a control, as well as two joints: a bind and a bpm (bindPreMatrix). These will come into play later!

Control Hierarchy for each control point


We can then pretty easily create a series of control points based on the vertex positions and orient them to the vertex normals.
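As a rough sketch, the per-vertex hierarchy and placement might look something like this (naming is hypothetical, and a production version would handle orientation edge cases more carefully):

```python
from maya import cmds

def create_control_point(cage, vtx_id):
    """Build the offset/control/bind/bpm hierarchy for one cage vertex."""
    vtx = "{}.vtx[{}]".format(cage, vtx_id)
    pos = cmds.xform(vtx, query=True, worldSpace=True, translation=True)

    # polyNormalPerVertex returns one normal per connected face; average them.
    normals = cmds.polyNormalPerVertex(vtx, query=True, xyz=True)
    count = len(normals) // 3
    normal = [sum(normals[i::3]) / count for i in range(3)]

    offset = cmds.createNode("transform", name="cagePoint_{}_offset".format(vtx_id))
    ctl = cmds.circle(name="cagePoint_{}_ctl".format(vtx_id), constructionHistory=False)[0]
    bind = cmds.createNode("joint", name="cagePoint_{}_bind".format(vtx_id))
    bpm = cmds.createNode("joint", name="cagePoint_{}_bpm".format(vtx_id))

    # The bind joint lives under the control; the bpm lives under the offset
    # only, so it ignores any hand-animation on the control.
    cmds.parent(ctl, offset)
    cmds.parent(bind, ctl)
    cmds.parent(bpm, offset)

    # Place the offset at the vertex and aim its Z axis down the normal.
    cmds.xform(offset, worldSpace=True, translation=pos)
    rot = cmds.angleBetween(euler=True, v1=(0, 0, 1), v2=normal)
    cmds.xform(offset, worldSpace=True, rotation=rot)
    return offset, ctl, bind, bpm
```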


Now comes the fun part! Since we only skinned each vertex to a maximum of two joints, we can now get the influences of each vertex and create a blended matrix constraint between those two influences, weighted by the values from our skinCluster.


Each control point is weighted based on the skin cluster weights


Notice that the skinCluster's influence joints are connected as inputs to the blendMatrix node, and the skin weight is used as the blend between the influences. Now when the rig is deformed the controls "stick" to the vertices, but they have a DG graph completely independent of the geometry!
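Here's a sketch of that connection, continuing the hypothetical naming from the snippets above. One practical note: because the offset already sits at the vertex, a real implementation would bake the local offset between the control point and the blended matrix (e.g. through a multMatrix) before connecting the offsetParentMatrix; that step is omitted here for brevity.

```python
from maya import cmds

def connect_to_cage(cage_skin, cage, vtx_id, offset):
    """Drive a control point from its vertex's two skin influences."""
    vtx = "{}.vtx[{}]".format(cage, vtx_id)

    # Query the influences and weights; the cage was skinned max-2.
    infs = cmds.skinPercent(cage_skin, vtx, query=True, transform=None)
    weights = cmds.skinPercent(cage_skin, vtx, query=True, value=True)
    pairs = [(j, w) for j, w in zip(infs, weights) if w > 1e-5]
    jnt_a, _ = pairs[0]
    jnt_b, w_b = pairs[1] if len(pairs) > 1 else pairs[0]

    # blendMatrix: inputMatrix is influence A, target[0] is influence B,
    # and the target weight is influence B's skin weight.
    blend = cmds.createNode("blendMatrix", name="cagePoint_{}_blend".format(vtx_id))
    cmds.connectAttr(jnt_a + ".worldMatrix[0]", blend + ".inputMatrix")
    cmds.connectAttr(jnt_b + ".worldMatrix[0]", blend + ".target[0].targetMatrix")
    cmds.setAttr(blend + ".target[0].weight", w_b)

    # Drive the offset through its offsetParentMatrix so its own
    # transform channels stay clean for animation.
    cmds.connectAttr(blend + ".outputMatrix", offset + ".offsetParentMatrix")
```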


While it may look like these controls are connected to the verts, they're only driven by their offset parent matrix.



One final thing I added to the controls was a way to display the connectivity of the vertices; without this, the controls tend to get lost, and it can be confusing which part of the body they control.

Even Daigoro doesn't know what these controls do!


For this, we can create new NURBS curves whose CVs are connected to the control points of the rig. Each CV's controlPoints[*].position is driven by the matching rig control, resulting in something that looks a bit like a lattice.
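A minimal sketch of one connectivity curve (again with hypothetical naming); each CV is driven through a decomposeMatrix so the curve redraws as the controls move:

```python
from maya import cmds

def build_connectivity_curve(controls):
    """Create a degree-1 NURBS curve whose CVs follow the given controls."""
    points = [cmds.xform(c, query=True, worldSpace=True, translation=True)
              for c in controls]
    curve = cmds.curve(degree=1, point=points)
    shape = cmds.listRelatives(curve, shapes=True)[0]
    cmds.setAttr(shape + ".template", 1)  # display only, never selectable

    for i, ctl in enumerate(controls):
        # decomposeMatrix pulls the world translation out of each control.
        dcm = cmds.createNode("decomposeMatrix")
        cmds.connectAttr(ctl + ".worldMatrix[0]", dcm + ".inputMatrix")
        cmds.connectAttr(dcm + ".outputTranslate",
                         "{}.controlPoints[{}]".format(shape, i))
    return curve
```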


Complete connectivity display for Daigoro



Skinning and Connecting

Finally, we need to connect our control rig points to the actual mesh. There are really two major parts to this step: skinning the deformation cage, and connecting that deformation to the existing skinning.


Luckily, we modeled a super nice low-poly proxy earlier that we can also use for skinning the deformation cage. By smoothing it a couple of times and skinning it to the control cage joints, we can get some really nice, smooth skinning.

Output high-res skinning proxy

We can now copy the skinning from the high-res proxy geometry we created to the final mesh. Here the bpm joints we created earlier become super important. If we look back at the hierarchy, they are parented under the offset but not the control. That means they follow along with the matrix connection to the bind joints, but not when we move the control.


If we connect the worldInverseMatrix of the bpm joints to the bindPreMatrix attribute of the skinCluster (following the order in which the influences were connected), we can essentially offset the 'bind pose' of the joints.


Example bpm matrix connections. For a real skinCluster you'd have tons of inputs, so it's easier to script this.
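As a sketch, the scripted version boils down to a single loop (here bind_to_bpm is a hypothetical dict mapping each bind joint to its bpm partner):

```python
from maya import cmds

def connect_bind_pre_matrices(skin, bind_to_bpm):
    """Connect each bpm joint's worldInverseMatrix to the skinCluster's
    bindPreMatrix plug at the matching influence index."""
    influences = cmds.skinCluster(skin, query=True, influence=True)
    for i, jnt in enumerate(influences):
        bpm = bind_to_bpm.get(jnt)
        if not bpm:
            continue
        # Caveat: if influences were ever removed, the logical matrix[]
        # indices may not be contiguous; a production script should query
        # the real plug index instead of trusting the list order.
        cmds.connectAttr(bpm + ".worldInverseMatrix[0]",
                         "{}.bindPreMatrix[{}]".format(skin, i),
                         force=True)
```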


Connecting the bpm joints makes the deformation relative to the offset between the start (bpm) and end (bind) joints. I like to think of this as converting the deformation from being based on a point to being based on a vector.


By repeating this setup on the skinned mesh we get a result that looks like this:

The deform cage skinCluster is calculated only from relative offsets, so it looks 'disconnected' from the main rig.


While this may look a bit awkward on its own, once this skinCluster is stacked on top of the existing skinning it works really nicely with the existing deformations. (For more information about stacking skin clusters, check out Charles Wardlaw's article and course on Rigging Dojo.)



If you want to use this in your own projects or just dig into the code, this is all built into my larger rigging system, Rigamajig2, which you can check out here!

Updated: Jan 17, 2023

This post will outline some of the challenges and techniques I used when rigging Daigoro for the upcoming SCAD Senior Film Goro Goro.



Facial rig test animated by Ryo Sawada


Getting Started


Early in production, I built a proxy rig using simple geometry so we could start testing his proportions in motion and get the animators' feedback on the rig setup. Because I used rigamajig, we were able to copy the exact same joint setup over to the final model and copy the skin weights as a solid starting point.



Proxy rig animated by Mick Bransfield


While most of the body was transferable, the face was not. Because of his simple proportions and the wide range of motion we wanted to hit, I started off by doing some concept sculpts to test the ranges of the model and give back early notes.



The facial rig


The facial setup consists of a mixture of joints and blendshapes, in order to make the best use of our time while maintaining control over the rig. He was an interesting character: because of the simple design, some elements, such as the brows, were very straightforward, but other areas, like the lips, were much more challenging.


I always like to start with the mouth corners because I feel they really help express who a character is, so I started by sculpting those shapes. I found that using a wire deformer to create the blendshapes helped create a more natural and appealing curve while still wrapping around the boxy shape of his head.



I also wrote a new tool to split blendshapes based on skinCluster weights so I could more easily paint and edit masks for the lip controllers.


This video shows the tool being used to split the left and right shapes; however, it can be used for any number of influences.
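The actual tool lives in rigamajig2, but the core idea is simple enough to sketch: move each split-target vertex by the original delta scaled by one influence's skin weight. (Names here are hypothetical, and a production version would edit the blendshape deltas directly rather than looping vertex by vertex.)

```python
from maya import cmds

def split_shape(base, target, skin, influence):
    """Create a split target whose deltas are masked by one influence."""
    split = cmds.duplicate(target, name="{}_{}".format(target, influence))[0]
    for i in range(cmds.polyEvaluate(base, vertex=True)):
        w = cmds.skinPercent(skin, "{}.vtx[{}]".format(base, i),
                             query=True, transform=influence)
        bp = cmds.xform("{}.vtx[{}]".format(base, i), q=True, ws=True, t=True)
        tp = cmds.xform("{}.vtx[{}]".format(target, i), q=True, ws=True, t=True)
        # Move the split vertex to base + weight * (target - base).
        new_pos = [b + w * (t - b) for b, t in zip(bp, tp)]
        cmds.xform("{}.vtx[{}]".format(split, i), ws=True, t=new_pos)
    return split
```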


For the eyes, I worked with the animators to find the right balance between automation and control. In the end, we developed a system that uses joints for the overall deformation, with a geometry switch for the closed-eye shape.




Layering Deformations


In order to provide the granularity needed for the rig, I added a system to Rigamajig to handle what I call 'deformation layers'.


Each deformation layer stacks on top of the layer before it and allows the user to layer in specific functionality without impacting other areas of the rig.



Because Daigoro is so cartoony, we really wanted to exaggerate the squash and stretch, as well as add a middle stretch control. For this, I found the best method was to create a single IK spline chain with volume preservation and layer it into the deformation layer after the main skinning.
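As a sketch of the classic spline-IK stretch this layer builds on (assuming a chain that aims down X; rigamajig's version also layers in volume preservation):

```python
from maya import cmds

def add_spline_stretch(start_joint, end_joint, curve):
    """Stretch a spline-IK chain by the ratio of current to rest arc length."""
    handle = cmds.ikHandle(startJoint=start_joint, endEffector=end_joint,
                           solver="ikSplineSolver",
                           curve=curve, createCurve=False)[0]

    # curveInfo measures the driving curve's live arc length.
    info = cmds.createNode("curveInfo")
    shape = cmds.listRelatives(curve, shapes=True)[0]
    cmds.connectAttr(shape + ".worldSpace[0]", info + ".inputCurve")

    rest_length = cmds.getAttr(info + ".arcLength")
    ratio = cmds.createNode("multiplyDivide")
    cmds.setAttr(ratio + ".operation", 2)  # divide: current / rest length
    cmds.connectAttr(info + ".arcLength", ratio + ".input1X")
    cmds.setAttr(ratio + ".input2X", rest_length)

    # Scale each joint along its aim axis; volume preservation would
    # additionally scale Y/Z by something like 1/sqrt(ratio).
    for jnt in cmds.ikHandle(handle, query=True, jointList=True):
        cmds.connectAttr(ratio + ".outputX", jnt + ".scaleX")
    return handle
```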


This allows some deformations, like the jaw, to be contained to one skinCluster while the squash and stretch live in their own skinCluster. To remove the double transforms, the deform layer tool creates a bindPreMatrix connection for each input in the skinCluster so only relative deformation is applied to the final rig.


Model Variations

The story requires two model variations with different rigging needs. To reuse as much as possible, I worked on a method to reorder the vertices so that a single "headSplit_geo" shape can be used as a live blendshape target into all three character models; a single facial rig can then be used across the models without impacting the different body rigs.



The "headSplit" model blendshapes directly into all three character models without affecting the body geo


Pose Space Deformations (PSDs)


As a final step to polish up the body rig, I used rigamajig's pose reader system to drive a series of corrective pose space blendshapes to help maintain a clean and clear silhouette. This was super important for Daigoro specifically because of his chubby proportions.
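rigamajig's pose readers are their own node-based system, but the measurement behind most cone-style readers is just a dot product between a joint's current axis and a target pose axis. As an illustration only (a live setup would build this out of nodes rather than a script):

```python
from maya import cmds

def pose_weight(joint, pose_axis=(1.0, 0.0, 0.0)):
    """Return a 0-1 weight for how closely the joint's X axis matches a pose."""
    m = cmds.getAttr(joint + ".worldMatrix[0]")
    x_axis = m[0:3]  # the first row of the world matrix is the joint's X axis
    dot = sum(a * b for a, b in zip(x_axis, pose_axis))
    return max(0.0, min(1.0, dot))  # clamp; remap/smooth this in practice

# Hypothetical usage: drive a corrective shape from the pose weight.
# cmds.setAttr("body_bsp.armUp_corrective", pose_weight("shoulder_bind"))
```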


This video shows the impact the PSDs make on the overall silhouette of the character.

And most importantly the butt variation.





Updated: Nov 14, 2022

Overview

My goal with this project was to learn the basics of crowd simulation for an upcoming project at SCAD animation studio. The film will include the simulation of lots of re-animated skeletons, so zombies seemed like a fun choice.



The crowd simulation consists of two components: agents and the simulation.

Agents are packed representations of the character, instanced in the simulation. Crowd simulation requires specific properties to be added to the objects used in the simulation; these are stored as primitive attributes. All of this information is stored in an agent definition, which includes:

• The character model (as packed geo)

• Clips (animations the character can perform)

• Transitions (information about how agents transition to other clips)

• Ragdoll colliders and configuration

• Agent properties


The crowd simulation uses the agent definition to inform how each agent should behave when acted upon by forces. Much like other simulations, the crowd solver responds to different external inputs. However, instead of driving only positional information, it controls how the agent behaves. Behaviors are known as states and can be triggered by a variety of inputs, such as time, POP forces, RBDs, proximity, or VEX.


Technical guide: Agents

For this project, the character rig and mocap data were downloaded from Mixamo.


The first step of building the agent was to add clips. Agent clips can be created from motion clips, a packed representation of each frame of an animation (new in KineFX). To build a motion clip, the input animation must be matched in scale to the target skeleton, and connections made between the source skeleton (the one with the animation) and the target skeleton (the one that deforms your character). This outputs a posed skeleton, which can be animated and tweaked with KineFX IK or a rigPose node.


Fig 2 & 3. Motion clips for the walk and standA animations.


The zombie agent has five clips: walk, standA, standB, deadA, and deadB. To add more variation, each of these clips is mirrored and added as a new clip. Mirroring is accomplished using a Rig Mirror Pose node with the position matching set to naming and similarity.



Fig 4. Node network to set up and mirror motion clips.


Another key element of the clips is locomotion. Instead of using static values to drive an agent's position in the simulation, we can extract the Z and X movement from the clip and apply that motion. In most cases, locomotion is extracted from the hips.



Fig 5. Extracted locomotion from the hips of the walk.
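Houdini's agent tools handle this extraction for you, but conceptually it is just splitting the hip translation into horizontal travel plus an in-place clip. A plain-Python illustration of the idea:

```python
def extract_locomotion(hip_positions):
    """Split hip motion into per-frame locomotion deltas (X/Z only)
    and an in-place version of the clip."""
    deltas, in_place = [], []
    start = hip_positions[0]
    for pos in hip_positions:
        # Locomotion: horizontal travel relative to the first frame.
        deltas.append((pos[0] - start[0], 0.0, pos[2] - start[2]))
        # In-place clip: remove that travel, keep the vertical bob.
        in_place.append((start[0], pos[1], start[2]))
    return deltas, in_place
```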


Ragdoll colliders can also be added to the agent definition using an agentCollisionLayer node. Further downstream, the limits of each joint are controlled using an agentConfigureJoints node. The ragdoll is not part of the crowd solver, but the collision objects must be added in the agent.


Fig 6 & 7. Ragdoll colliders and joint limits.


The final step of configuring the agent is to define the transitions between clips. Using the agentTransitionGraph node, you can define which clips blend into which, and on which frames transitions can occur. By default, Houdini tries to generate this graph automatically; however, I had to set it up manually because of the differences between the poses I needed to blend.



Fig 8 & 9. Crowd transition graph and parameters for a transition.



Technical guide: Crowd Simulation

To begin the crowd simulation, you must generate a crowd. This is accomplished with a crowdSource node. The crowdSource node can take an input surface, and, even cooler, you can paint a density attribute to control the distribution of the crowd.


Fig 10. SOP network of the crowd. The output feeds into a DOP net. Fig 11. Paintable density


The agent constraints were generated with a shelf tool, which I used to incorporate the ragdoll.


Inside the DOP net the simulation is pretty straightforward: each agent has three states, Dead, Walk, and Ragdoll. When triggered to walk, they begin to transition to that state, animating any clips between the current and end clips. This means that when we transition from dead to walk, it actually transitions from dead to stand to walk.



Fig 12. DOP network.


Within this simulation, only two triggers are needed for the desired results. First, a time-based trigger wakes up the zombies, causing them to stand and walk toward a target point (this is accomplished through a POP seek node and a POP avoid obstacle). The second trigger causes them to ragdoll when they enter a bounding box around the end of the ground plane; gravity then forces them to fall off the cliff.


The ragdoll is not part of the crowd solver; instead it is a Bullet solver, so Houdini uses a multi-solver to swap between the crowd and Bullet solvers when the ragdoll is triggered. For the ragdoll simulation I needed to add the brick ground; because that geometry is dense, I created a new lightweight collider using VDB conversions.
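A rough hou-Python sketch of that conversion (node paths are hypothetical, and the parameter names are assumptions based on the standard VDB SOPs):

```python
import hou

geo = hou.node("/obj/ground_collision")    # container for the collider (assumed path)
ground = geo.node("brick_ground_in")       # the dense brick geometry (assumed name)

# Rasterize the dense mesh into a VDB at a coarse voxel size...
vdb = geo.createNode("vdbfrompolygons")
vdb.setFirstInput(ground)
vdb.parm("voxelsize").set(0.1)             # coarse enough to stay lightweight

# ...then convert back to a much lighter polygon surface for the collider.
convert = geo.createNode("convertvdb")
convert.setFirstInput(vdb)
convert.parm("conversion").set("poly")     # menu token assumed
convert.setDisplayFlag(True)
convert.setRenderFlag(True)
```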


Fig 13. RBD colliders on zombie agents and static collider


Description of problems encountered and solutions:

Problem: Tons of clipping! The crowd solver has avoidance behavior to keep agents from intersecting, and I was also using a POP steer to keep them from colliding with the pillars. In some cases both conditions could not be satisfied, so agents would clip through either another zombie or a pillar.

Solution: *Not a production solution* I played with the force weights as well as the number of zombies and their spawning locations to try to minimize two agents being super close when moving between pillars.


Problem: Foot sliding!

Solution: A lot of this is handled by the foot-locking options on the crowd solver. Another huge benefit came from using a single foot as the locomotion driver instead of the hips. I think using a more standard clip (non-dragging feet) would help fix this, as would finding a way to specify when the foot is planted.


Problem: FBX animation goes below the ground plane, especially in standing clips.

Solution: Using the rigPose node, I was able to manually layer fixes into the clips to resolve these issues.


Problem: Ragdolls causing flipping, crazy deformation, and super-high-speed movement.

Solution: I added a bit of stiffness to the ragdolls, and that had a huge effect on the crazy flipping. This way the zombies would partially maintain the pose of the clip at first, blending into full ragdoll over time. The crazy over-180-degree twists were handled by adding limits to the agent joints using the agentConfigureJoints node.


Problem: Rendering in Redshift causes the crowdSource icon to show up as geometry.

Solution: Object-merge the output geometry from the crowd into a clean container and hide the container that holds the crowdSource.
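A quick hou-Python sketch of that fix (all paths and names here are hypothetical):

```python
import hou

# Pull the simulated crowd geometry into a clean render container.
container = hou.node("/obj").createNode("geo", "crowd_render")
merge = container.createNode("object_merge", "import_crowd")
merge.parm("objpath1").set("/obj/crowd_sim/OUT_crowd")  # final crowd geo (assumed)
merge.parm("xformtype").set("local")  # keep the geometry's world placement
merge.setDisplayFlag(True)
merge.setRenderFlag(True)

# Hide the original container so the crowdSource icon never renders.
hou.node("/obj/crowd_sim").setDisplayFlag(False)
```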


Problem: Cooking times. It was super slow to adjust things in the agent, playback from the DOP sim was super slow, and loading the scene was slow because of re-cooking the ground.

Solution: Cache out things whenever possible. The agent definition can be cached, along with the environment geo and the DOP simulation.


Fig 14. Cached out files. Used in agent definition, environment geo, and DOP.




Final thoughts:

This was an incredibly fun project! There are definitely things I could have improved on, but I feel prepared to attack another crowd simulation. Moving forward, I would like to use one of my own rigs as the agent rig; I think it will be pretty straightforward to export as an FBX with only the skin and skeleton. I would also like to learn more about the mocap process and how I might incorporate it into other areas of my career. If I were to use this in a production, I would try to clean up the mocap in Maya first to fix issues there.



Fig 15. The bottom of the cliff the zombies fall down. (AKA really good deformations…)


bottom of page