Feature-based image metamorphosis constants

Have most of a hierarchical morph system running now. The results will follow in another post. Until then, here is an exploration of some of the parameters in the paper "Feature-based image metamorphosis" (pdf | doi).

Open source morph code (Java) is here. Still a bit rough and bulging with unimplemented features.

The user defines a set of corresponding lines on the two objects they want to morph, and sets a blend ratio (how far between the two input images the result should lie).

experimental outlines

Linear tweening is applied to the endpoints to find the in-between position of the feature lines (pink and blue lines in the above). Then each image is warped so that its features lie along these tweened lines. Finally both warped images are blended together.
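A minimal sketch of the endpoint tween (names here are my own, not from the linked code):

```java
// Linear interpolation ("tweening") of a feature-line endpoint.
// t = 0 gives the source position, t = 1 the destination position.
public class Tween {
    public static double[] lerpPoint(double[] a, double[] b, double t) {
        return new double[] { a[0] + t * (b[0] - a[0]),
                              a[1] + t * (b[1] - a[1]) };
    }

    public static void main(String[] args) {
        double[] p = lerpPoint(new double[]{0, 0}, new double[]{10, 4}, 0.5);
        System.out.println(p[0] + " " + p[1]); // prints "5.0 2.0"
    }
}
```

Each feature line is tweened by interpolating both of its endpoints with the same blend ratio t.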

So the warping is the difficult bit. Each pixel's new location is calculated relative to the feature lines: first its position relative to the source lines is found, then the same relative position is applied with the lines in their tweened location to give its destination point. The weighting method (how a line's influence on a pixel falls off) is controlled by the following formula, where dist is the pixel's distance from the line and length is the length of the line:
weight = (length^p / (a + dist))^b
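A rough sketch of the per-line machinery, following the Beier–Neely construction (the (u, v) line-space coordinates and the weight formula above); the names are mine, not from the linked code:

```java
public class LineWarp {
    // Express pixel x in the coordinate frame of line P->Q:
    // u is the fractional position along the line, v the signed
    // perpendicular distance from it.
    static double[] toLineSpace(double[] x, double[] p, double[] q) {
        double dx = q[0] - p[0], dy = q[1] - p[1];
        double len2 = dx * dx + dy * dy;
        double u = ((x[0] - p[0]) * dx + (x[1] - p[1]) * dy) / len2;
        double v = ((x[0] - p[0]) * (-dy) + (x[1] - p[1]) * dx)
                   / Math.sqrt(len2);
        return new double[]{u, v};
    }

    // Map the same (u, v) back to image space using the line in its
    // new (tweened) position P'->Q'.
    static double[] fromLineSpace(double u, double v, double[] p, double[] q) {
        double dx = q[0] - p[0], dy = q[1] - p[1];
        double len = Math.sqrt(dx * dx + dy * dy);
        return new double[]{ p[0] + u * dx + v * (-dy) / len,
                             p[1] + u * dy + v * dx / len };
    }

    // Influence of one line on a pixel:
    // weight = (length^p / (a + dist))^b
    static double weight(double length, double dist,
                         double a, double b, double p) {
        return Math.pow(Math.pow(length, p) / (a + dist), b);
    }
}
```

With several lines, each line proposes a destination for the pixel, and the proposals are averaged using these weights.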

I was curious how these parameters change the output, so here are my test videos. (p = 1, b = 2, a = 1 for whichever values aren't being varied, which seems to be about right.)
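To get a rough feel for b before the videos, here is a quick numeric check of my own (not from the paper), plugging the formula above into code with the defaults a = 1, p = 1 and comparing a near pixel against a far one:

```java
public class Falloff {
    // weight = (length^p / (a + dist))^b
    static double weight(double length, double dist,
                         double a, double b, double p) {
        return Math.pow(Math.pow(length, p) / (a + dist), b);
    }

    public static void main(String[] args) {
        // Ratio of a nearby pixel's weight (dist = 1) to a distant
        // one's (dist = 10) for a line of length 10, a = 1, p = 1.
        // Larger b makes a line's influence much more local.
        for (double b : new double[]{0.5, 1, 2}) {
            double ratio = weight(10, 1, 1, b, 1) / weight(10, 10, 1, b, 1);
            System.out.printf("b = %.1f -> near/far weight ratio %.2f%n",
                              b, ratio);
        }
    }
}
```

So raising b from 0.5 to 2 takes the near/far ratio in this toy case from about 2.3 to about 30: higher b means each line mostly moves only the pixels close to it.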