the entrepreneur’s wife

My husband recently started a small business with two of his research partners. They’re doing really great things (check out http://www.recovrllc.com !)

Recently we both read the article “The Psychological Price of Entrepreneurship”. We thought it was spot-on. Our favorite quote from the article is:

“[A startup] is like a man riding a lion. People think, ‘This guy’s brave.’ And he’s thinking, ‘How the heck did I get on a lion, and how do I keep from getting eaten?’”

That’s from the perspective of the entrepreneur. Following the same analogy, what do you think the entrepreneur’s wife is doing?

Well, from my perspective it seems that the entrepreneur’s wife is standing somewhere on the sidelines watching the whole spectacle, desperately clutching a first aid kit and waiting for the moment her husband gets thrown off the lion or gets bitten. She’s trying to shout encouragement over all the noise. And she’s well aware that the circus trainers trying to train this lion are not terribly concerned about her well-being, and she’s sometimes afraid that the lion might come and try to eat her, too.

And if the wife is pursuing a Ph.D., maybe she’s in the ring too, riding a camel or something less dangerous, less glorious, a bit more slow-moving, but no less absurd. And then they’re trying to coordinate the lion and the camel to move in the same direction, at the same speed, without hurting each other. But what destination are they trying to move toward, anyhow?

All that to say: I’m still alive. Our marriage is still strong. The Ph.D. and the projects are moving a little slower than I’d like. But I do intend to write more about the process. I’ve never been a consistent blogger, but I’m going to try to be better.

Art-free Character Generation for Unity3D: Part 2/Integration with MakeHuman: What Didn’t Work

Update: Per request by the MakeHuman admins, I would like to communicate that using the basemesh and .target files in any proprietary software is a violation of the AGPL. This post was not meant to promote any kind of license-breaking. (It didn’t work to begin with!) For more details, please see Mr. Haquier’s comments below.

Original post below.

In my book, documenting what didn’t work is as valuable as documenting what did, so I’d like to describe the many failed iterations of making this whole thing work.

My first idea for making this work was pretty simple. After sifting through the MakeHuman repository, I realized that every possible change in the character’s appearance is dictated by a “target” file, which is basically a list of vertex indices and the offsets needed to make that particular change of appearance happen at “full strength”. That’s easy (although tedious!) math, so I thought I’d do the following:

1) Import the “base” MakeHuman model into my Unity project, using Mecanim to match up the rigging, get all the texture imports right, etc.
2) Use those .target files and do the math myself to deform the mesh around the armature (changing only the mesh, not the rigging) in real time. A sketch of this step follows the list.
3) To save a character, just save the stack of targets applied to it; that file could then be loaded back in easily later.
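To make step 2 concrete, here is a minimal sketch of loading and applying a single target, written in Python for readability (inside Unity this logic would live in a C# script working against the Mesh API). It assumes the format described above, one vertex index plus an x/y/z offset per data line, and the file paths and function names are purely illustrative, not the actual MakeHuman or project code.

```python
def load_target(path):
    """Parse a .target file into (vertex_index, (dx, dy, dz)) pairs.

    Assumes each data line holds a vertex index followed by an x/y/z offset,
    as described above; blank and comment lines are skipped."""
    offsets = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4 or parts[0].startswith("#"):
                continue
            offsets.append((int(parts[0]),
                            (float(parts[1]), float(parts[2]), float(parts[3]))))
    return offsets


def apply_target(vertices, target, strength=1.0):
    """Shift each listed vertex by strength * offset (1.0 = 'full strength')."""
    for idx, (dx, dy, dz) in target:
        x, y, z = vertices[idx]
        vertices[idx] = (x + strength * dx, y + strength * dy, z + strength * dz)


# Step 3 then amounts to remembering the stack of (target_path, strength)
# pairs that produced a character and replaying it on load:
def replay_character(vertices, stack):
    for path, strength in stack:
        apply_target(vertices, load_target(path), strength)
```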

More or less, I was looking to replicate the existing MakeHuman interface, only inside of Unity. There were many challenges with this approach, and ultimately it did not end up working, although I maintain that my work on it might still come in handy later. And I learned a lot about the inner workings of Unity, so that’s valuable.

Problem 1: The Unity Import Pipeline. The way Unity imports meshes is different from the way the meshes are actually stored. This is fairly well-documented, but it can still wreak havoc. If a vertex has multiple normals, the vertex is split into multiple co-located vertices with one normal apiece. The target files, meanwhile, are lists of vertex numbers and offsets. Since a vertex in the file is split into multiple vertices in Unity, if you only move the vertex listed in the file, the rest of the co-located vertices don’t move with it, which is problematic. This can be solved by preprocessing the vertex list alongside the input file, using a lookup table to determine all the co-located vertices, and then, when one vertex in the group moves, moving the rest of the group with it. The other problematic import setting is that Unity, by default, reorders the vertices for optimal performance, which would of course cause strange things to happen when you apply the targets. To keep the original ordering, you have to uncheck “Optimize Mesh” in the import settings.
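Here is a rough sketch of that preprocessing step, again in Python for readability: group the imported vertices by (rounded) position so that a single index from a target moves every co-located copy. The rounding precision and function names are my own choices, and it assumes the target’s index still lands on one of the co-located copies after import; inside Unity this would operate on Mesh.vertices from C#.

```python
from collections import defaultdict

def build_colocation_lookup(vertices, precision=6):
    """Map each vertex index to the list of all indices sharing its position.

    Unity splits a vertex that has several normals into co-located copies, so a
    single index from a .target file corresponds to a whole group here."""
    by_position = defaultdict(list)
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x, precision), round(y, precision), round(z, precision))
        by_position[key].append(i)
    lookup = {}
    for group in by_position.values():
        for i in group:
            lookup[i] = group
    return lookup


def apply_target_grouped(vertices, target, lookup, strength=1.0):
    """Like apply_target above, but moves every co-located copy of each vertex."""
    for idx, (dx, dy, dz) in target:
        for j in lookup.get(idx, [idx]):
            x, y, z = vertices[j]
            vertices[j] = (x + strength * dx, y + strength * dy, z + strength * dz)
```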

Problem 2: Changing Relationships Between Targets. The relationships between all the targets are difficult to derive and constantly changing. At first I just copied and pasted the entire “targets” folder from the MakeHuman project into my codebase and tried to use the folder hierarchy to impose some kind of organization. The folders are fairly descriptive (armslegs, cheek, chin, etc.), the filenames help (for example, l for left, r for right), and there are icons for most of the targets. However, to divide it all up into manageable chunks that make sense (as in the existing MakeHuman interface), you have to impose some kind of extra order. In the “0_modeling_0_modifiers” plugin, you can find that the authors have hard-coded a hierarchy and the relationships between all the different targets. To further complicate things, some targets have a dual (increase/decrease, in/out) while others do not (roundness of face, for example).
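To illustrate the dual issue, one way to expose a dual as a single slider is to map the slider’s sign onto the increase or decrease target. This is just a sketch of the idea building on the functions above, not the actual MakeHuman modifier code, and the incr/decr naming is illustrative.

```python
def apply_dual_modifier(vertices, lookup, value, incr_target, decr_target=None):
    """Map one slider value in [-1, 1] onto a dual target pair.

    Positive values blend toward the 'increase' target, negative values toward
    the 'decrease' target; a modifier with no dual only uses the positive half."""
    if value >= 0:
        apply_target_grouped(vertices, incr_target, lookup, strength=value)
    elif decr_target is not None:
        apply_target_grouped(vertices, decr_target, lookup, strength=-value)
```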

Well, that should have been my first hint that this wasn’t a terribly good idea. But I persisted. I copied and pasted all that hierarchy data into a text file and wrote a parser to read it in and make sense of it. This actually worked for the majority of the targets. Then two problems arose. The first came when I got to the “macro” targets, where the value of one target influences another. These were the primary targets I was interested in–things like age, gender, height, proportion, race. Obviously these things influence each other–the maximum height of a female will be shorter than the maximum height of a male; the age of an individual will affect the way gender differences show up; and so on. These targets also influence shapekey changes, like visemes and facial expressions. So when I went to implement the macro targets and figure out all those dependencies, I basically hit a brick wall. It appears that the dependencies are half hard-coded, half derived from filenames. I punted on that for the moment and went to solve other problems. Then the second, happier problem arose–MakeHuman is a constantly changing and growing codebase, and new targets arrive from time to time! For example, as I was working on this, a “pregnant woman” target appeared in a nightly update. It’d be nice to keep up to date with the targets as they’re added, and that’s simply impossible with all the copy-pasting and dependency hard-coding I was doing.

Problem 3: Lack of a Rigged Base Model. There is no properly rigged “base model” representation of the plain-vanilla human you get when you open up MakeHuman. There’s a base.obj file with the skeleton representation, but Unity3D doesn’t recognize that obj’s skeleton for the Mecanim system, which is the core of what I’m trying to do. So what I wanted to do was export an equivalent, rigged “base.fbx” model. But when you open up MakeHuman, there are already targets applied by default. I dug into the code to find where those targets were applied, disabled them, and then imported the result as an FBX. But when I applied the targets to that result, the output was not the same as in MakeHuman itself. I’m not sure why, and by this point I was reconsidering my approach, so I didn’t take the time to find out.

Problem 4: Implementation of Clothes & Hair. Even if I were able to accommodate all those problems in code (which I did to a large degree), all of these target applications would work just fine for the body, but would not apply to any “add-ons” like hair, teeth, tongue, clothes, shoes, or eyes. All of those objects’ positions are represented as offsets from the body mesh’s vertices. So with enough work you could sort that all out, but it seemed a shame to reimplement so much of what MakeHuman had already implemented, only in Unity. And as soon as MakeHuman changed its file formats or algorithms, my project would have to be redone.

Problem 5: Non-Dynamic Rigging. My assumption was that once the rigging was set up, I could change the mesh around the rigging however I wanted and the rigging would still stay valid. This is simply not the case (and it’s an assumption I made without much understanding of how rigging works). When I made vertex changes, especially big ones like those needed to turn the base mesh into a child, the rigging simply didn’t work as it should have.

So, with all that considered, I decided an alternative approach was better. I wanted to keep as much of the MakeHuman code intact as possible and rely on that implementation to make all the mesh and rigging changes, so that I wouldn’t have to constantly change my project as MakeHuman changed, and I wouldn’t have to re-solve difficult problems that already had solutions in the MakeHuman project. There are a couple of disadvantages. Every created model will be saved in the StreamingAssets folder, but in this case disk space is not an issue. Additionally, there will be a slowdown, since the MakeHuman application will have to be loaded and run every time a new character is needed, but for my particular application that’s not a big deal–all my characters can be generated before the character interaction actually starts.

My working approach is to modify the MakeHuman source minimally to enable command-line generation of models, then place those models in the StreamingAssets folder of the Unity project. This means that Unity needs to call MakeHuman to make the virtual character, then load in the virtual character that MakeHuman spits out, all at runtime. In the next few posts I’ll describe how I implemented those two parts.
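Before those posts, here is the rough shape of that handoff, sketched in Python. The entry-point name and flags are entirely hypothetical placeholders for whatever the modified MakeHuman source ends up exposing, and in the project itself the calling side is a Unity (C#) script rather than Python.

```python
import os
import subprocess

STREAMING_ASSETS = os.path.join("Assets", "StreamingAssets", "characters")

def generate_character(spec_file, output_name):
    """Run the (modified) MakeHuman from the command line and return the path
    Unity should load the exported character from at runtime.

    'makehuman_headless.py', '--spec', and '--out' are placeholders,
    not real MakeHuman options."""
    output_path = os.path.join(STREAMING_ASSETS, output_name)
    subprocess.run(
        ["python", "makehuman_headless.py",  # hypothetical command-line entry point
         "--spec", spec_file,                # demographic/slider settings for the character
         "--out", output_path],              # where the exported, rigged model should land
        check=True,                          # fail loudly instead of loading a stale model
    )
    return output_path
```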

Art-free Character Generation for Unity3D: Part 1/Motivation

Over the past couple of weeks I’ve realized how beneficial the blogs of others are to me when I’m working on a really tough problem. There aren’t very many of us in the virtual reality/virtual human community, and when we solve big implementation problems, the solutions are often buried in academic papers that are impossible to find instead of shared for all to benefit from. So when I find a post online that’s even remotely related to what I’m doing, it offers help and hope, and often I couldn’t have found the solution on my own without it. So this is my attempt to contribute to the work of my colleagues in a really practical way.

The motivation for this series of posts and this whole project is this: I am not an artist. Once I tried 3D modeling. It yielded Beatrice’s skirt in Shakespearean Karaoke.

[Image: the skirt I modeled for Shakespearean Karaoke]

It’s not the worst-looking thing you’ve ever seen, but then again it’s not the best either. Ever since then, I have simply refused to do modeling unless it was the kind of thing that could be built from primitives. It takes me ten times longer than someone proficient, and the end product is low quality.

Considering I work in virtual reality, this is somewhat of a problem. Of course, there are hundreds of models for purchase, but with a limited research budget you can only ask for so much. Now, for rooms and household objects, you can limp by with the free stuff you find online. But with virtual characters? Now that’s where the modelers make their money–and rightfully so! The human body is difficult to model and rig correctly. And on top of the body and its skeleton, you have to animate it. It’s a lot of work.

Problems with Current Virtual Character Approaches

We’ve tried a lot of variations on our virtual characters, and each option has been painful. In the very beginning, we were using a product called Haptek, which I am surprised still has a website. As far as I can tell, that website hasn’t been updated since I first used it in 2006. In my understanding, the Haptek folks then got hired by Boston Dynamics to work on the facial expression module for DI-Guy. The big problem with these two systems was that they were expensive and closed–it was exceedingly difficult, if not impossible, to create your own characters. In the case of Haptek, that limited you to about two or three characters (virtual human researchers, think of how many papers you’ve seen an image of these characters in!). For DI-Guy, the majority of the characters were military-related and designed for crowd simulation, so if your character strayed far from that and you were unwilling to pay for new characters, your characters looked awkward. In our first implementation of the virtual pediatric patient system, we used a customized Mom character but a standard child character, and we got lots of comments about how the child didn’t look right. Additionally, there were fine-grained things that DI-Guy simply didn’t offer, such as animation of hands. The same holds true for systems like ICT’s Virtual Human Toolkit: while the provided models look great, it’s hard to get your own content in and out.

So we moved forward and started creating and rigging characters ourselves. (And by “we”, I mean my wonderful labmates Toni Bloodworth Pence and Jeff Bertrand, who have infinitely more patience than I do with modeling.) We started with models from Poser and Daz 3D, worked with them for animations, decimation, and piecing-together in Blender, and imported those models into Unity. Now, this works fine, but it is crazy tedious and takes a lot of skill (that I don’t have). Still, it is currently used for pretty much every project in our research group.

There are still several problems with this pipeline. First of all, as you may have guessed from the half-dressed, alluring women on those sites, most commercial virtual human modelers do not have the same noble research goals we do. The humans are usually the same demographic: white, young, skinny, adult men and women (or orcs, elves, and other fantasy characters, which also don’t work for us). Second, up until Unity 4 (which I’ll get to in a minute), animations had to be tied to a specific character, so each character either had to be animated individually or had to share an armature with other characters so the animations could be imported over. Third, all these models are super high resolution, so a lot of time has to be spent decimating each model down to where it can run reasonably in real time. Finally, this pipeline creates a huge bottleneck. Jeff is incredibly talented at modeling, and for that reason I think he’s worked on every project in the VE Group ever. Any time we need a new character, we have to go through Jeff. That’s not fair to Jeff. And when I get a job away from Clemson, Jeff won’t be there. I needed to figure out a way I could do this by myself.

Motivation to Change

Several things finally pushed me to figure this out. First, for our SIDNIE system, my project partner Toni and I are each moving in research directions where we need many more virtual characters. We’re both trying to graduate in May 2015, and we realized that if we made the characters ourselves, about half of that time would be spent wrestling with modeling and rigging. Additionally, for some of our experiments we want characters that fall outside the typical demographic available for purchase online–particularly obese virtual characters.

Second, the recent release of Unity 4 unveiled a brand-new character animation retargeting system called Mecanim: characters with skeletons similar enough to Unity’s internal structure can be mapped automatically, and animations can be reused across characters! Developers around the world rejoiced. (Or at least I did.) Not having to recreate animations for each character would cut the workload a lot.

Still, the pipeline didn’t change much: buy content > edit in Blender > import into Unity. I effectively wanted to skip the first two steps. I am incompetent at Blender, so I don’t want to have to touch it at all. And instead of buying content, I wanted to create it–but with no artistic skill involved. As part of a larger vision, I wanted anyone to be able to specify what kind of character they wanted (nine-year-old female, average height, slightly overweight, African-American) and for that character to automatically work in Unity. In real time.

The other critical inspiration for this new pipeline is a great little open-source project I’ve been following for some time now called MakeHuman. They have ALL the art talent that I lack, and they have taken the time to do some really excellent work modeling all the different ways a real person can look. Within their application you can control all those attributes with sliders and get some nice-looking characters. It is open-source, so the code is freely available, and any exported models fall under the most-free license possible, so it is perfect for research.

Putting it All Together

With all those pieces, my goal was this: write a Unity application that lets someone specify a character demographic, automatically produces that character, imports it into Unity at runtime using Mecanim, and then drops it into the Unity application to be used like any other character. In our SIDNIE system, for example, this could mean run-time generation of new pediatric patients for each interaction session based on a nurse’s specification. It also opens the door to generating virtual characters quickly in many other applications and to increasing realism (every single person in a crowd can look different!).

This series will cover how I have done just that. My hope is that the virtual human community will find it useful, and that this will at long last give us the power to put virtual character creation into the hands of novices, expanding the use of virtual characters beyond the one-time exposures we can generate using traditional methods in our research lab.

Stay tuned.