[edit: a newer post about morphing, with links to code, is here.]

Last week I made the mistake of wanting to morph some images together in code. All I wanted was to build a fish...


It should have been trivial to build a tree of components (body, mouth, ventral fins, eyes), with each component having a range of options to morph between. Then a little bit of hackery to stick the components on in the correct place (tail-on-the-donkey stuff*) and I'd have a way of evaluating 2D images of a subset of the Teleostei morphospace.

This is just the same technique as used in video games - morphing 3D meshes between (or even beyond) extremes:

This works by deforming a mesh (with a fixed topology) between a set of extremes, with soft masks so that features only get applied in certain regions. Of course this causes all sorts of problems about not knowing exactly where a person's lips are when animating them. But video games are still crude enough to get away with this, and hopefully motion retargeting will eventually be powerful enough to fill in the gaps. The morph also has to be applied to adjoining body parts such as the neck and hair.
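To make the mesh-deformation idea concrete, here's a minimal sketch (Python with NumPy; the function and parameter names are my own invention, not any engine's API) of blending a base mesh towards morph targets, with optional soft region masks:

```python
import numpy as np

def apply_morph_targets(base, targets, weights, masks=None):
    """Blend a base mesh towards one or more morph targets.

    base    : (n, 3) array of base vertex positions
    targets : list of (n, 3) arrays with the same topology as the base
    weights : list of scalars, how strongly each target applies
    masks   : optional list of (n,) soft masks in [0, 1] restricting
              each target to a region (e.g. just the mouth)
    """
    result = base.astype(float).copy()
    for i, target in enumerate(targets):
        delta = target - base                  # per-vertex offset to the extreme
        if masks is not None:
            delta = delta * masks[i][:, None]  # soften the offset by region
        result += weights[i] * delta
    return result

# A toy 2-vertex "mesh": the target moves the second vertex up by one unit.
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
smile = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
mid = apply_morph_targets(base, [smile], [0.5])
print(mid[1])  # the moved vertex is halfway up: x stays 1.0, y is 0.5
```

Because the topology is fixed, any number of targets can be stacked by summing their weighted deltas, which is what lets a game slider-blend a face between artist-authored extremes.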

While this doesn't really count as procedural graphics (we're evaluating morphospace between artist-defined points, rather than evaluating a new area of space), it does show how far we can get without resorting to constructive geometry.

Morphing 2D images has a long history. As this overview paper points out, the mighty Industrial Light and Magic defined a type of 2D morph in "A Two-Pass Mesh Warping Algorithm for Object Transformation and Image Interpolation" (which is a really hard to find paper, btw). ILM used morphing in a movie for the first time in Willow.

This involved laying an identical mesh over two images, editing the vertex points of that mesh for each image, then morphing the location, size and colour of each rectangle to obtain a new image between the pair. The vertex points identify features that we want to see move from the first image to the second over the morph.
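The full two-pass algorithm warps each image towards an intermediate mesh one axis at a time before cross-dissolving; a rough sketch of just the per-vertex interpolation at its core (helper names are hypothetical, not from the paper) might look like:

```python
import numpy as np

def interp_mesh(src_pts, dst_pts, t):
    """Control-mesh vertex positions at morph time t in [0, 1].
    (The full two-pass algorithm then warps each image to this
    intermediate mesh, one axis at a time, before blending.)"""
    return (1 - t) * src_pts + t * dst_pts

def cross_dissolve(img_a_warped, img_b_warped, t):
    """Blend the two already-warped images to finish the morph frame."""
    return (1 - t) * img_a_warped + t * img_b_warped

# One mesh cell sliding right while the images fade into each other.
src = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
dst = src + [5.0, 0.0]
print(interp_mesh(src, dst, 0.5)[0])  # first vertex halfway along: x = 2.5
```

The interesting work is in the warp itself (resampling each scanline so image features land on the interpolated mesh); the interpolation above is only the skeleton it hangs off.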

Another interesting technique was morphing the outline of convex shapes using (follow the link for a cute java applet) 'the two dimensional Gaussian image'. By exploiting the fact that the shape is convex, you can store each edge by the direction of its normal and its length. Using just a list of these two values makes it easy to create a morphing algorithm between shapes with different numbers of sides. But it doesn't specify anything about the interior of the shape, and it doesn't work on concave polygons.
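A sketch of that idea, assuming the polygons are convex and wound counter-clockwise: represent each polygon by its (edge angle, edge length) pairs and interpolate the lengths, letting angles that only one shape has grow from zero length. The function names are my own:

```python
import math

def edge_vectors(poly):
    """Edges of a CCW convex polygon as (angle, length) pairs."""
    out = []
    for i in range(len(poly)):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
        dx, dy = x1 - x0, y1 - y0
        out.append((math.atan2(dy, dx) % (2 * math.pi), math.hypot(dx, dy)))
    return out

def morph_convex(a, b, t):
    """Morph convex polygon a into b by interpolating edge lengths at
    each edge angle; angles only one shape has start or end at zero."""
    ea, eb = dict(edge_vectors(a)), dict(edge_vectors(b))
    angles = sorted(set(ea) | set(eb))
    pts, x, y = [], 0.0, 0.0
    for ang in angles:  # walking edges in angle order keeps the result convex
        pts.append((x, y))
        length = (1 - t) * ea.get(ang, 0.0) + t * eb.get(ang, 0.0)
        x += length * math.cos(ang)
        y += length * math.sin(ang)
    return pts

# Morph a unit square halfway into a triangle (result is convex and
# closed, since each polygon's edge vectors sum to zero).
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
halfway = morph_convex(square, triangle, 0.5)
```

The output is only defined up to translation and choice of starting vertex, and at the endpoints the zero-length edges produce duplicate vertices, which matches the technique's limitation of saying nothing about the interior.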

These two techniques aren't quite what was needed, because fish aren't very rectilinear or convex; "feature-based morphing" was closer. The idea here is to take your two fish and add pairs of corresponding edges/outlines on the two images. You can then triangulate both images, and morph each triangle from its start texture, position and colour to its destination. This leaves two problems:
  • finding a compatible triangulation between both outlines
To get clean and tidy morphs, it's desirable that both triangulations be topologically identical. As one triangulation may not be valid for both outlines, this is rather tricky ("invalid triangulation" in the image below).
  • finding a way to morph the triangulations
Again the naive answer, linear interpolation of the points, gives ropey results: triangles can disappear or flip, or the outline might not remain a simple polygon throughout the morph. Imagine morphing two polygons, one rotated 180 degrees relative to the other: at the mid-point of the morph all the points would be in the same place. We should really use a technique that guarantees each polygon retains as much of its area as possible at all points in the morph. (Solutions to this problem involve some hairy maths that made me run away.)
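The 180-degree failure case takes only a few lines to demonstrate - the naive lerp collapses the whole polygon onto a single point at the midpoint of the morph:

```python
import numpy as np

def lerp_points(src, dst, t):
    """Naive morph: straight-line interpolation of corresponding points."""
    return (1 - t) * src + t * dst

def signed_area(poly):
    """Shoelace formula; zero means the polygon has collapsed,
    negative means it has flipped inside-out."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

# A triangle and the same triangle rotated 180 degrees about the origin.
tri = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
rotated = -tri
mid = lerp_points(tri, rotated, 0.5)
print(signed_area(mid))  # 0.0 - every vertex lands on the origin
```

Tracking the signed area over t is a cheap way to detect when a naive morph has gone wrong, even if fixing it needs the heavier maths mentioned above.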

I read a few papers about these two problems. But there is a huge quantity of material on the subject. What follows is incomplete, but gives a taste of the solutions involved.

This paper shows that a compatible triangulation must exist, by combining a valid triangulation of the first and second polygons. It's easiest to imagine by unfolding both polygons and their triangulations to the same regular representation and then intersecting all the edges.

compatible triangulation

The problem with this method is that it introduces many additional vertices; the worst case adds O(n²) Steiner vertices. So there is a good deal of research that uses heuristics to find compatible tessellations with fewer Steiner vertices.

"Steiner vertices" is a term borrowed from Steiner trees, a construction similar to the minimum spanning tree, except that we're allowed to add additional points to decrease the total length of the tree.
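A classic worked example of why the extra points help: for three terminals at the corners of a unit equilateral triangle, the minimum spanning tree has length 2, but adding a single Steiner point (the Fermat point, which here is the centroid) brings the total down to √3 ≈ 1.732:

```python
import math

# Three terminals at the corners of a unit equilateral triangle.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Minimum spanning tree: any two sides of the triangle, total length 2.
mst_length = dist(pts[0], pts[1]) + dist(pts[0], pts[2])

# One Steiner point at the centroid, connected to all three corners,
# is shorter: three spokes of length 1/sqrt(3) each, sqrt(3) in total.
steiner = (0.5, math.sqrt(3) / 6)
steiner_length = sum(dist(p, steiner) for p in pts)

print(mst_length, steiner_length)  # roughly 2.0 and 1.732
```

In the triangulation setting the analogy is loose - Steiner vertices there are simply extra points added beyond the polygon's own corners to make a compatible triangulation possible.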

A better triangulation technique uses the minimum-link-path distance as a heuristic to find pairs of vertices on both shapes that would introduce the fewest additional vertices. This led me down a bit of a false trail investigating how to calculate these link paths. One early solution by Subhash Suri is outlined in A linear time algorithm for minimum link paths inside a simple polygon:

minimum link path

The algorithm starts by triangulating the polygon (any old way). Since the connectivity between the triangles forms a tree (each chord splits the polygon in two), there is a single path of triangles from the start to the end (1...10 in the above). The path will not go below 1 or above 10, so those triangles (shown as 'x' in (i)) can be discarded (ii). Then draw the visibility polygon (light green) from the starting point to find the window (dark green) (ii). The window intersects some of the numbered triangles, and we use the direction (of enumeration) of the highest-numbered triangle intersected to decide which side of the visibility polygon the window is on (in some cases it will have two or more sides, not shown). This is repeated (iii), finding the visibility of consecutive windows, until we can backtrack from the end to the start (iv) to create a minimum link path.
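The key fact the algorithm leans on - that the dual graph of a polygon triangulation is a tree, so the triangle path is unique - is easy to sketch. The code below (a toy representation of my own, triangles as vertex-index triples, nothing to do with Suri's actual data structures) builds the dual graph and walks that unique path:

```python
from collections import deque

def triangle_path(triangles, start, end):
    """Path through a polygon triangulation's dual tree.

    triangles : list of triangles, each a tuple of 3 vertex indices.
    Two triangles are adjacent when they share an edge (a chord);
    since every chord splits a simple polygon in two, the dual graph
    is a tree and the path between any two triangles is unique.
    """
    def edges(tri):
        a, b, c = tri
        return {frozenset(e) for e in ((a, b), (b, c), (a, c))}

    n = len(triangles)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if edges(triangles[i]) & edges(triangles[j]):
                adj[i].append(j)
                adj[j].append(i)

    # Breadth-first search; in a tree this recovers the unique path.
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        for nxt in adj[cur]:
            if nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)

    path, node = [], end
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

# A fan triangulation of a hexagon with vertices 0..5.
hexagon_tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
print(triangle_path(hexagon_tris, 0, 3))  # [0, 1, 2, 3]
```

Everything clever in the paper - the visibility polygons and windows - happens on top of this triangle path; the sketch only shows why "discard everything off the path" is a safe first step.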

However (newbie research error: reading the mathsy bits of the paper and drawing the above took a good while) Suri later published a more efficient mechanism.

At this point I gave up researching the area and went back to trying to find an outline-morphing library (I never did find one - anyone know of a good open-source image morphing library that isn't xmorph?). I learnt a lot about geometry and, more importantly, I learnt my lesson about not blindly wandering into rabbit warrens of papers.

afternote: One area that I didn't come across was how to morph curves that bound an area; this would let you define a boundary using a b-spline or similar and still morph the underlying texture. Most modelling tools rely on vertex morphing. I wonder if there is any work to be done morphing 3D curves (NURBS etc.)?

* hey that's an idea morphing donkeys!