FaceGen Frequently Asked Questions

Is there a way to modify individual controls without affecting any of the others?

No, the various controls are statistically correlated and cannot be perfectly isolated.

Can I adjust the number of polygons in the mesh?

Yes. If you don't like any of the meshes included with Modeller, you can add your own with Customizer.

Can I adjust the resolution of the exported texture maps?

This is done automatically based on the amount of detail in the source photo, up to a maximum size of 2048x2048 (32-bit version) or 4096x4096 (64-bit version).

Can I use the hairstyles from Modeller with 3D Print?

No. Hairstyles for 3D printing must be specially designed for that purpose.

Are there plans for hair generation?

We will be adding more hair models over time. You can also add your own using the Integration Tools or the older Customizer.

Are there plans for a 'BodyGen'?

Not currently.

Are there plans for an OS X version?

Not currently.

Is there a way to automatically generate and export face models?

This requires our SDK.

Can I create children's faces with FaceGen?

The age control goes down to age 15, and the photofit works optimally down to about age 12, below which results may vary. We have no near-term plans to extend the range.

How do I animate the face?

FaceGen is a static face-modelling technology that creates a personalized head mesh along with morph targets for animation. It does not produce any kind of timeline data, so another program must be used to do the actual animation.
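To illustrate what an animation program does with the morph targets, here is a minimal numpy sketch of morph-target blending. The mesh, target names, and weights are made-up toy data, not FaceGen's actual formats or identifiers; the only point is the arithmetic: pose = neutral + sum of weighted target deltas.

```python
import numpy as np

# Toy neutral mesh: 3 vertices, each an (x, y, z) position.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

# Each morph target is a full-mesh variant; its delta from neutral is what gets blended.
targets = {
    "smile":    neutral + [[0.0,  0.1, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.0]],
    "jaw_open": neutral + [[0.0, -0.3, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
}

def blend(neutral, targets, weights):
    """Pose = neutral + sum over targets of weight * (target - neutral)."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * (targets[name] - neutral)
    return out

# The animation program varies the weights over time; this is one static frame:
pose = blend(neutral, targets, {"smile": 0.5, "jaw_open": 0.25})
```

Driving these weights along a timeline (keyframes, audio-driven visemes, motion capture) is the part FaceGen leaves to the animation package.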

What is the statistical basis for the slider controls?

The Shape and Color tab sliders are linear projections into our 'face space', which consists of 50 dimensions of symmetric shape, 30 dimensions of asymmetric shape and 50 dimensions of symmetric color. The age, gender and racial-group sliders are linear regressions on our data set within that space. The caricature and asymmetry sliders scale the magnitude of the difference from the mean for the given age, gender and racial group. Our face space was created by principal components analysis of a data set of 273 high-resolution 3D face scans (scan demographics). We have not published a paper detailing our methodology.
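The mechanics described above can be sketched in a few lines of numpy. This is an illustration only: the random matrix stands in for real scan data, the dimensions are arbitrary, and names like `to_face_space` and `caricature` are invented for the example, not FaceGen's API.

```python
import numpy as np

# Stand-in data set: 273 "scans", each flattened to 300 coordinates.
rng = np.random.default_rng(0)
n_scans, n_coords, n_dims = 273, 300, 50
scans = rng.normal(size=(n_scans, n_coords))

# PCA via SVD of the mean-centred data; keep the top n_dims modes as the face space.
mean = scans.mean(axis=0)
_, _, vt = np.linalg.svd(scans - mean, full_matrices=False)
basis = vt[:n_dims]                      # (n_dims, n_coords), rows span the face space

def to_face_space(face):
    """Linear projection of a face into PCA coordinates (the slider values)."""
    return basis @ (face - mean)

def caricature(coords, k):
    """Scale the difference from the mean: k > 1 exaggerates, k < 1 averages."""
    return k * coords
```

An age or gender slider would correspond to moving the coordinate vector along a regression direction fitted in this space; the caricature slider simply rescales the vector's distance from the mean, as above.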