Over the past couple of weeks I’ve realized how beneficial the blogs of others are to me when I’m working on a really tough problem. There aren’t many of us in the virtual reality/virtual human community, and when we solve big implementation problems, the solutions are often buried in hard-to-find academic papers instead of shared for all to benefit from. So when I find a post online that’s even remotely related to what I’m doing, it offers help and hope; often I couldn’t have found the solution without it. This is my attempt to contribute to the work of my colleagues in a really practical way.
The motivation for this series of posts and this whole project is this: I am not an artist. Once I tried 3D modeling. It yielded Beatrice’s skirt in Shakespearean Karaoke.
It’s not the worst-looking thing you’ve ever seen, but then again it’s not the best either. Ever since then, I have simply refused to do modeling unless it was the kind of thing that could be built from primitives. It takes me ten times longer than it takes someone proficient, and the end product is low quality.
Considering I work in virtual reality, this is somewhat of a problem. Of course, there are hundreds of models for purchase, but with a limited research budget you can only ask for so much. For rooms and household objects, you can limp by with the free stuff you find online. But virtual characters? That’s where the modelers make their money, and rightfully so! The human body is difficult to model and rig correctly. And on top of the body and its skeleton, you have to animate it. It’s a lot of work.
Problems with Current Virtual Character Approaches
We’ve tried a lot of variations on our virtual characters, and each option has been painful. In the very beginning, we were using a product called Haptek, which I am surprised still has a website. As far as I can tell, that website hasn’t been updated since I first used it in 2006. In my understanding, the Haptek folks were then hired by Boston Dynamics to work on the facial expression module for DI-Guy. The big problem with these two systems is that they were expensive and closed: it was exceedingly difficult, if not impossible, to create your own characters. In the case of Haptek, that limited you to about two or three characters (virtual human researchers, think of how many papers you’ve seen those characters in!). For DI-Guy, the majority of the characters were military-related and designed for crowd simulation, so if your characters strayed far from that and you were unwilling to pay for new ones, they looked awkward. In our first implementation of the virtual pediatric patient system, we used a customized Mom character but a standard child character, and we got lots of comments about how the child didn’t look right. Additionally, there were fine-grained things that DI-Guy simply didn’t offer, such as hand animation. The same holds true for systems like ICT’s Virtual Human Toolkit: while the provided models look great, it’s hard to get your own content in and out.
So we moved forward and started creating and rigging characters ourselves. (And by “we”, I mean my wonderful labmates Toni Bloodworth Pence and Jeff Bertrand, who have infinitely more patience than I do with modeling.) We started with models from Poser and Daz 3D, worked with them for animations, decimation, and piecing-together through Blender, and imported those models into Unity. This works fine, but it is crazy tedious and takes a lot of skill (that I don’t have). Even so, it is currently used for pretty much every project in our research group.
There are still several problems with this pipeline. First of all, as you may have guessed from the half-dressed alluring women on the sites, most commercial virtual human modelers do not have the same noble research goals as we do. The humans are usually the same demographic: white, young, skinny, adult men and women (or orcs, elves, and other fantasy characters, which also don’t work for us). Second, up until Unity 4 (which I’ll get to in a minute), animations had to be tied to a specific character, so each character either had to be animated individually or had to share an armature with other characters so animations could be carried over. Third, all these models are super high-resolution, so a lot of time has to be spent decimating each model down to where it can run reasonably in realtime. Finally, the pipeline is a huge bottleneck. Jeff is incredibly talented at modeling, and for that reason I think he’s worked on every project in the VE Group ever. Any time we need a new character, we have to go through Jeff. That’s not fair to Jeff. And when I get a job away from Clemson, Jeff won’t be there. I needed to figure out a way I could do this by myself.
Motivation to Change
Several things finally pushed me to figure this out. First, for our SIDNIE system, my project partner Toni and I are each moving in research directions where we need many more virtual characters. We’re both trying to graduate in May 2015, and we realized that if we made the characters ourselves, about half of that time would be spent wrestling with the modeling and rigging. Additionally, for some of our experiments, we want characters that fall outside the typical demographic available for purchase online, particularly obese virtual characters.
Second, the recent release of Unity 4 unveiled a brand-new character animation retargeting system called Mecanim: characters with skeletons similar enough to Mecanim’s internal structure can be automatically mapped, and animations can be reused across characters! Developers around the world rejoiced. (Or at least I did.) Not having to recreate animations for each character would cut the workload a lot.
Still, the pipeline didn’t change much: buy content > edit in Blender > import into Unity. I effectively wanted to skip the first two steps. I am incompetent at Blender, so I don’t want to touch it at all. And instead of buying content, I wanted to create it, but with no artistic skill involved. As part of a larger vision, I wanted anyone to be able to specify what kind of character they wanted (nine-year-old female, average height, slightly overweight, African-American) and for that character to automatically work in Unity. In realtime.
The other critical inspiration for this new pipeline is a great little open-source project I’ve been following for some time now called MakeHuman. They have ALL the art talent that I lack, and they have taken the time to do some really excellent work modeling all the different ways a real person can look. Within their application you can control all of those attributes with sliders and get some nice-looking characters. It is open-source, so the code is freely available; additionally, any exported models fall under the most permissive license possible, so it is perfect for research.
Putting it All Together
With all those pieces in place, my goal was this: write a Unity application that lets someone specify a character demographic, automatically produces that character, imports it into Unity at runtime using Mecanim, and drops it into the application to be used like any other character. In our SIDNIE system, for example, this could mean run-time generation of a new pediatric patient for each interaction session based on a nurse’s specification. It also opens the door to generating virtual characters quickly in many other applications and to increasing realism (every single person in a crowd can look different!).
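To make the “specify a demographic” idea concrete, here is a minimal Python sketch of the kind of mapping involved. Everything in it is hypothetical (the class, field names, and slider ranges are my own illustration, not MakeHuman’s actual API): it just shows how a plain-language description could be translated into the normalized macro-slider values that a MakeHuman-style modeler exposes.

```python
from dataclasses import dataclass

@dataclass
class CharacterSpec:
    """A plain-language character description (hypothetical schema)."""
    age_years: float          # e.g. 9.0 for a nine-year-old
    gender: str               # "female" or "male"
    height_percentile: float  # 0-100 within the age/gender group
    bmi: float                # body mass index
    ethnicity: str            # e.g. "african"; would drive ethnic morphs (omitted below)

def to_sliders(spec: CharacterSpec) -> dict:
    """Map a spec onto normalized 0..1 slider values.

    The ranges are illustrative assumptions: age mapped linearly over
    0-90 years, BMI 15-40 mapped onto the 0..1 weight slider.
    """
    def clamp01(x: float) -> float:
        return max(0.0, min(1.0, x))

    return {
        "age": clamp01(spec.age_years / 90.0),
        "gender": 1.0 if spec.gender == "male" else 0.0,
        "height": clamp01(spec.height_percentile / 100.0),
        "weight": clamp01((spec.bmi - 15.0) / 25.0),
    }

# Example: nine-year-old female, average height, slightly overweight.
sliders = to_sliders(CharacterSpec(9.0, "female", 50.0, 27.0, "african"))
print(sliders)
```

The point of a layer like this is that the person requesting a character (say, a nurse in SIDNIE) never sees sliders at all; they describe the patient in familiar terms, and the mapping produces parameters the character generator can consume.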
This series will cover how I have done just that. My hope is that the virtual human community will find it useful, and that this will at long last give us the power to put virtual character creation into the hands of novices, expanding the use of virtual characters beyond the one-time exposures we can generate using traditional methods in our research lab.