## 26.5.09

### misc. architecture papers

Some of the recent papers I've browsed on procedural urban modelling and tangential subjects.

Detailed Building Facades ( doi | pdf | ieee | 2008 )
Dieter Finkenzeller

This is a procedural system that uses two separate geometry trees, one for the coarse model and one for the style.

The coarse system starts with a floorplan - this comprises a concave set of abutting polygons, each representing the main building, a wing or a balcony. Apparently the concave condition is required when generating the roof. These are combined using union operations.

A floor plan like this is created for each floor of the building. At each height the user has the option to add, remove or subdivide one of these polygons (to add a balcony, for example).

Cornices (decorative moulding strips) are extruded from a sequence of paths and curves. Consoles are blocks placed along the cornices, that you can stick arbitrary meshes (gargoyles!) onto. Cornice cross-sections are defined using a turtle-like grammar. This is also used to define the perimeter of doors and windows.
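As a sketch of the turtle idea (the command set here is invented for illustration - the paper defines its own grammar), a cross-section is just the polyline a 2D turtle traces out:

```python
import math

def trace_profile(commands, step=1.0, turn=math.pi / 2):
    """Trace a 2D cross-section from turtle commands:
    'F' moves forward one step, '+'/'-' rotate left/right."""
    x, y, angle = 0.0, 0.0, 0.0
    points = [(x, y)]
    for c in commands:
        if c == 'F':
            x += step * math.cos(angle)
            y += step * math.sin(angle)
            points.append((x, y))
        elif c == '+':
            angle += turn
        elif c == '-':
            angle -= turn
    return points

# A stepped moulding profile: out, up, out, up
profile = trace_profile("F+F-F+F")
```

The resulting profile polyline would then be swept along the cornice's path to give the extruded strip.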

Roofs are either flat, or constructed using a ridge-placing algorithm. The single, central ridge is defined, and each corner of the floorplan is connected to the nearest end of the roof. The straight skeleton is rejected as it is unique (not that I'm biased towards that skeleton or anything ;) however all the shapes described could be produced using the weighted straight skeleton, and it would have worked on concave shapes).

The second hierarchy of geometry is for the style, the finer components, such as wall subdivisions or windows/doors (windoors) (I've visited this several times, and the abstraction is going to be quite useful, and can be extended to the general idea of a border/frame). The different components of the windoors are created by a multitude of cornice and frame elements.

After the entire semantic tree is produced, it is traversed by a geometry engine that creates polygonal output.

Inclined walls aren't supported, and a lot of user input is required to define the features of each floor of the building.

Affective scene generation ( pdf | doi | acm | 2006 )
Carl Hultquist, James Gain, David Cairns

The idea here is to infer the parameters of a procedural system from adjectives.

Merry et al. apparently couldn't get it to work (but didn't show any results), and there's a thesis by Polichroniadis (cl@cam.ac.uk) that describes a similar method for animation.

There are two massive spaces involved:
• The descriptive space - a set of adjectives, one for each axis of the space, and a value for each axis.
• The model space - one axis for each parameter, and a value for each axis
The game is to find a mapping between these spaces. It's likely that the mapping is different from person to person.

The approach to solving this is via learning algorithms, in particular a neural network (of the radial basis function network variety - a one-perceptron-deep network).
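A minimal sketch of the forward pass of such a network (pure Python; the centres, weights and adjective names are made up - in the paper the weights come from training on user ratings):

```python
import math

def rbf_predict(centres, weights, sigma, x):
    """Forward pass of a radial basis function network: one hidden
    layer of Gaussian units, linearly combined by output weights."""
    out = 0.0
    for c, w in zip(centres, weights):
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))
        out += w * math.exp(-d2 / (2 * sigma ** 2))
    return out

# One centre in a 2D descriptive space ("wet", "cloudy") mapping to
# a single model parameter; response peaks at the centre and decays away.
value = rbf_predict([(0.2, 0.8)], [1.5], 0.5, (0.2, 0.8))
```

Each model-space parameter gets its own output like this, so the full mapping is a vector of such sums.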

The procedural world is generated using:
• Landscape: Perlin noise
• Trees: meshes
• Sky: Coloured based on time of day
• Clouds/Rain - Perlin noise clouds.
Current controlling adjectives are wet, sparse, tropical, cloudy, light, mountainous and undulating. Users are shown images and have to assign a real (0..1) value to each adjective for a set of test scenes. This trains up the neural network. This seems to work reasonably. There wasn't much more to this paper, except the results, above.

It's an Afrigraph paper, and there's an interesting emphasis on technologies for developing nations with lower skill sets.

3d modeling for non-expert users with the castle construction kit v0.5 ( pdf | vast | 2005 )
Gerth, B., Berndt, R., Havemann, S., Fellner, D. W.

These guys build a nicely interactive castle editor. Particular emphasis is placed on the internal representations used.
• GML is a postscript like way of describing 3D models using a stack. It's a nice description because it's so flexible - functions can take an arbitrary number of inputs from the stack etc...
• Open Scene Graph is another scene graph engine, with a decent feature list - it represents geometry as number arrays etc...
The two packages were combined in the expected way - GML is used for the local descriptions, and fitted into the scene graph as components. There's a little munging of data types between the two packages, but they seem like a reasonable fit.

There's a reference to an excellent-looking book about churches and cathedrals by Koch (Baustilkunde : das Standardwerk zur europäischen Baukunst von der Antike bis zur Gegenwart). After recognising that castles are often on hills, they proceed to make castles on flat ground.

Wall editing takes place via a set of handles (red things in the above images), one for each corner. The need for constraints on the handle positions is discussed - angles on walls shouldn't be too large, houses shouldn't intersect walls (but should be aligned to them) etc...

Out of context quote-of-the-day:
As it turns out a whole number of objects can be suitably represented by polygons.

Finally they say that they want to advance the work towards non-flat terrains and castles. This is actually a remarkably uncluttered topic in the literature, and looks like a promising area of research.

Here's a thesis talking about GML (in German).

The FL-system: a functional L-system for procedural geometric modeling ( pdf | doi | springer | 2005 )
Jean-Eudes Marvie, Julien Perret, Kadi Bouatouch

The paper presents a variation on the theme of L-Systems and applies the result to architecture.

There are a couple of useful additions to the parameterized L-System

A(n) → for(i = 0; i <= n; i++) B

Describes a repeated element. This replaces the decreasing-parameter trick used previously. It's only syntactic sugar, but given how twisted L-system grammars become it's quite welcome.
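To see that it really is only sugar, compare the two idioms (the symbols and helper names here are invented for illustration):

```python
def expand_decreasing(n):
    """The older idiom: A(n) -> B A(n-1), with A(0) -> B."""
    out = []
    while n >= 0:
        out.append("B")
        n -= 1
    return out

def expand_for(n):
    """The FL-system sugar: A(n) -> for(i = 0; i <= n; i++) B."""
    return ["B" for _ in range(n + 1)]

assert expand_decreasing(3) == expand_for(3)  # both yield four Bs
```

The for-loop form states the repetition count directly instead of threading it through a chain of rewrites, which is what keeps the grammar readable.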

The terminals of the grammar are functions that can evaluate to geometrical primitives (window/door meshes). These can be modified by the parameters of the system at a given time.

The system outputs geometry at many grammar depths (rather than the traditional post-evaluation using a turtle). Because of the implicit evaluation order of L-Systems it's relatively simple to stop evaluation early to construct different levels of detail (LOD). These LODs are made more efficient by sharing geometry.

The system is integrated with the VRML97 standard, using embedded scripts.

An example grammar is constructed that creates a town house using standard façade decomposition techniques.

The above image shows a few results - the three buildings on the left are procedurally derived from the same grammar, with different texture sets.

The more I read about these systems, the more I'm convinced that traditional formal grammars are very much the wrong way to go. There's no way artists will use these grammars, and artists can't be limited to domain specific languages.

Creating models of truss structures with optimization ( doi | pdf | acm | 2002 )
Jeffrey Smith, Jessica Hodgins, Irving Oppenheim, Andrew Witkin

The idea here is to construct truss structures (think Eiffel tower) by taking a set of contact points (above, left, cones) and load bearing points (above, left green balls) and finding an optimal (for some meaning of the word) mesh (above, left, green and brown mesh). The system is given an arbitrary set of input beams, and then iterates under user guidance to find an optimal solution.

Truss structures are meshes of connected beams designed to hold a certain load. They're used in bridges, towers etc... The fun thing here is that while physical constraints are used to converge on a solution, a physical model isn't the primary goal. Rather these guys aim for a graphically pleasing representation. There is a large body of engineering work on truss design dating from the early 1900s, using various forms of numerical optimization.

The optimization starts with a "rich" (overly generous) set of beams (above - top, left shows how a generous configuration (left) is whittled down (middle) to an efficient structure, in this case mirroring the standard solution (right), a Michell truss). There's a set of built-in fudge constraints to specify beams above or below the load-bearing nodes, or to constrain beams to a vertical plane.

The optimization uses a simple physical model of the forces involved, and works to minimize some function, typically the total weight of the structure. These constraints (apparently) make the problem non-linear, so the chosen solver is of the sequential quadratic programming variety.
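The physical model at the heart of such solvers is static equilibrium: at each free node, the axial member forces plus the external load must sum to zero. A minimal sketch of that check (not the paper's SQP solver - the function and data layout are mine):

```python
import math

def node_residual(members, forces, load):
    """Net force magnitude at a free node. Each member is
    (node, other_end); forces are axial magnitudes, positive =
    tension (pulling the node towards the other end). A valid
    truss solution drives this residual to zero."""
    fx, fy = load
    for (node, other), f in zip(members, forces):
        dx, dy = other[0] - node[0], other[1] - node[1]
        length = math.hypot(dx, dy)
        fx += f * dx / length
        fy += f * dy / length
    return math.hypot(fx, fy)

# Two 45-degree ties sharing a unit load pulling straight down:
members = [((0, 0), (-1, 1)), ((0, 0), (1, 1))]
forces = [1 / math.sqrt(2), 1 / math.sqrt(2)]
residual = node_residual(members, forces, (0, -1))  # ~0: equilibrium
```

The optimizer treats one such residual per node as an equality constraint while minimizing total beam weight.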

There's a misc. set of other constraints, such as obstacle avoidance (above, bottom right, shows a truss holding the small green sphere while avoiding the large red sphere, in its starting state (left) and two solutions (middle, right)), or maximum beam length.

After the system is solved, very short beams or those carrying no force are removed. The user then decides if the solution is viable, and the process is repeated until a desirable truss is produced. The results tend to look very similar to those of engineered trusses. In the case of some of their Eiffel tower models, almost too good to be true! - great results for a numerical solver.

## 24.5.09

### inkscape, pdf and posters

SICSA asked everyone to make a big poster to make their opening do more academic (here's the result). This was the first time I'd used inkscape to produce an A0 pdf, and there were some hiccups due to the large number of vector illustrations I included [edit - some glitch with scribd, click zoom out to view] -

The first time I exported to pdf, all the non-screen grab figures were vector line art. This gave a nice 3Mb PDF file that would take a massive 3 minutes to open, and without Inkscape's nice blurs. This wasn't really good enough. I decided to turn those diagrams into bitmaps to render nicely. The Glasgow guidelines for creation of PDF posters (as well as specifying sensible font sizes) say that 300 dpi is the target printing resolution - so this is what I should target in my conversions.

The rest of this post is for an old version of inkscape. Newer versions export blurs to pdf automatically (save copy as..., select .pdf, in the dialogue box "rasterise filter effects" is selected automatically), and you can set the "Create Bitmap Copy" resolution in file -> inkscape preferences -> bitmaps -> resolution for Create Bitmap Copy.

The default image res was 90 dpi, not enough for printing. To set inkscape's default conversion dpi to 300 - I followed instructions here & then restarted inkscape-

The resolution or size of the created bitmaps can be set in `preferences.xml` (no GUI yet). In , specifying `minsize=` gives the minimum size of the generated bitmap in pixels (regardless of the object size), while `resolution=` sets the constant resolution (different pixel size for different object sizes).

Then it was a case of selecting all figures in the document and using Edit->Make bitmap copy. Because the bounds of outlines in inkscape don't include blurs, sometimes it was necessary to add a large white background rectangle before creating the bitmap copy. Once the copy is made, the original vector art can be deleted.

There were a couple of bugs where the converted figures had a small transparent border that rendered grey in Adobe's PDF reader, but this was remedied by editing the bitmap copies in the gimp and painting out the border, then re-opening the inkscape document.

After this the Save As... -> pdf -> OK worked like a charm. The result was 8Mb, but loaded much quicker and with fewer glitches.

The PDF included the above image in vector form - it was created from a 3D model that was imported into blender (from my prototype code). The poster was made on a bit of a deadline and rendering out this geometry nicely would have taken too long. Thankfully the render-to-svg script for blender worked like a charm and created vector art from the blender scene. Then a nice blur was added from the input geometry.

Here's another poster I made in inkscape - waaay better than some of the powerpoint mush you get out there! The inkscape file is here (23Mb, don't open in browser, use inkscape!).

Finally, if you've got lots of graphics, getting the poster to print can be a little difficult - our department has a HP DesignJet (500 plus) connected to an old print server. I've wasted a ton of ink on buffer-underflows trying to print over the network - and "out of memory - data lost" errors, leaving big missing portions of the poster (white rectangles). The solution was to spool from a laptop connected straight to the printer. Here're the settings I used to print a landscape poster sideways:

## 20.5.09

### skeletons on non-planar input

The standard straight skeleton algorithm relies on a sweep plane. By adding points in at different heights we should be able to grow nice houses on hills. We should also be able to preserve semantic data from the skeleton. The naive method has some problems:

It's not too clear in the above, but lower points and edges can intersect with higher edges, because of changes to the points after events. There's a couple of ways out of this - we could say "user beware" and run away, or we could automate something. Turns out this isn't so hard -

[edit: turns out this is hard - it relies on intersecting two parallel vertical planes to a single line, gah! but given a topological approach to the skeleton algorithm, it should eventually work]

We project subterranean edges with a vertical gradient until they come to the surface. This guarantees that the current polygon remains valid on the sweep plane.

On a different track, there's a lovely idea of using non-planar sweep planes, such as fields, to define some really interesting geometries. As long as the sweep plane never intersects itself at a previous time, the geometry will remain non-intersecting. If we bound the sweep surface (no polygons allowed outside the limits), or define a Poincaré disk/Escher-like vanishing horizon on the surface we could get some really interesting fields. These might be suitable for modelling growth in plants (or animals...?). I'd better find that book on differential geometry.

## 13.5.09

### smoke me a kipper, I'll be back for breakfast

Attention Glasgow: URE DOING IT WRONG

am working at the prism lab at arizona state for 6 Months :)

## 12.5.09

### ambiguous weighted skeleton

I've just come up with a better example of the straight skeleton's ambiguity:

The top figure shows a situation where we need to make a decision. The bottom figures a, b, c are examples of different resolution methods, which might include:
• use the nearest intersection (raytracing!)
• invent a priority system that pleases us (artist driven)
• use an average weighting of the two edges (b, above)
Of course this follows from the fact that a weighted skeleton with consecutive parallel edges of different gradients is invalid. I'm not certain there's a clean way to resolve it...I'm looking...

This is probably similar to the ambiguity problem in 3d, described in Straight Skeletons of Three-Dimensional Polyhedra ( doi | pdf | 2008 )

## 1.5.09

### engineering a weighted straight skeleton algorithm

This post details an implementation of the Straight Skeleton in the method of Felkel ( pdf | Felkel, Obdrzalek | 1998 ), modified to construct weighted skeletons. Source code is available here. [edit: At the time I wrote the post, I didn't understand a few things about some of the strange cases that occur in the weighted straight skeleton - you can read more about them, here.]

It contributes a method for dealing with multiple co-sited "split events" in a floating point environment, as well as a resolution strategy for horizontal roof edges; these lead to a degenerate case in which the weighted straight skeleton is ambiguous (detailed here). In this post I concentrate on showing that an algorithm exists, rather than on its speed.

The straight skeleton takes a simple polygon as input and outputs a graph connecting the input vertices.

As defined by A Novel Type of Skeleton for Polygons ( doi | pdf | Aichholzer, Alberts, Aurenhammer, Gärtner | 1995 ) the straight skeleton shrinks the edges of a polygon inwards, tracing out the lines made by the corners of the polygon. These traced lines form the skeleton.

Strictly the skeleton is a 2D construction, however here I regard it as a 3D "roof" - given the floorplan of a house, the skeleton is the gutters and crest lines of a roof when viewed from above (4, figure below).

Some properties of the skeleton include:
• Unique for a given input polygon
• Related to the medial axis and Voronoi tessellations for convex shapes
• The skeleton of convex shapes is a directed, acyclic graph (DAG) with the input corners as leaves
• Each face is monotonic in a direction perpendicular to its defining edge
Overview

The algorithm uses a sweep plane - starting from the input polygon as the lowest set of points, intersections between planes are considered in height order (perpendicular to the input polygon's plane). The planes are defined by the gradient associated with each edge of the input polygon; in the unweighted case these are all assumed to have the same value.
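The height-ordered processing can be sketched as a priority queue (a simplification - real events would carry the colliding faces, not a label):

```python
import heapq

class EventQueue:
    """Sweep-plane event queue: collision events come out in height
    order, lowest first (heights measured perpendicular to the input
    polygon's plane)."""
    def __init__(self):
        self._heap = []

    def add(self, height, event):
        heapq.heappush(self._heap, (height, event))

    def empty(self):
        return not self._heap

    def next_event(self):
        # Pops the lowest-height event still pending.
        return heapq.heappop(self._heap)

q = EventQueue()
q.add(2.0, "edge event")
q.add(0.5, "split event")
q.add(1.0, "edge event")
first = q.next_event()  # (0.5, "split event")
```

Processing an event typically pushes new candidate events back onto the queue, so the loop runs until the queue is empty.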

There are collisions that occur between faces that aren't valid. These must be ignored.

After all collisions have been accounted for, the algorithm is complete.

Loops of corners

The primary structure used is a set of corners. Each corner has pointers to:
• the adjacent (next and previous) input edges (I assume a counter-clockwise ordering of points).
A partially evaluated skeleton is shown in the following diagram.

Following the usual graphics conventions, holes are represented by backwards (clockwise) loops of corners. There are additional structures that list all input edges, current edges and current corners. Each edge keeps track of the skeleton's output edges.

As the algorithm runs there is one loop of corner pointers for every volume that intersects the current sweep plane. If a skeleton produces a roof with two peaks, there would be two loops of corner points at the corresponding heights. Here it is important to note that an edge can then belong to two different corners. This is the cause of much grief later on.

Collisions between three edges

When the planes defined by edges collide at a point we have to modify the data structures accordingly. Often we will remove old corners, insert new corners or add output edges to the skeleton. There are only so many ways that three edges can collide:

In 1) two adjacent edges meet another edge in its centre, 2) shows three consecutive edges colliding, and in 3) we see a variation where three consecutive edges that form a loop collide. There are two other important cases:

When there is only one plane at a point, it is just a plane(!?). When there are two, it is just an edge (above, green). These aren't registered as events and no action is taken. 1,2 & 3) covered all possible topologies except three unconnected edges colliding (red, above). The arrangement is impossible (in the simple unweighted straight skeleton case) as there must be something between the edges that also collides at that point, at which point it is a degenerate case. I'll get onto these in a bit.

Each plane defined by each input edge cannot collide with just any old other edge-plane. Eppstein (Raising Roofs, Crashing Cycles, and Playing Pool (doi | web | Eppstein, Erickson | 1999 )) gives more ideas on a fast approach, but to just understand correctness we can start by noting that when two planes collide it forms an edge. We need three planes to collide to form a point; Before three planes can collide, two of these planes must already be adjacent. (Unlike Felkel's approach I'm less interested in the "bisectors" defined by each corner's pair of edges - this quickly becomes misleading in the weighted case). Using the above list of possible intersections, we can see we should only be testing for collisions between pairs of adjacent edges and one other edge.
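Finding the candidate collision point itself is just intersecting three planes - a 3×3 linear solve. A sketch using Cramer's rule (the (a, b, c, d) plane representation, with ax + by + cz = d, is my own choice):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def three_plane_point(p1, p2, p3):
    """Intersection of three planes, each (a, b, c, d) with
    ax + by + cz = d. Returns None when the normals are degenerate
    (e.g. two parallel edge-planes: no unique collision point)."""
    m = [list(p[:3]) for p in (p1, p2, p3)]
    d = det3(m)
    if abs(d) < 1e-12:
        return None
    point = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r, p in zip(mi, (p1, p2, p3)):
            r[i] = p[3]  # replace column i with the d values
        point.append(det3(mi) / d)
    return tuple(point)

# z = 0, x = 1 and y = 2 meet at (1, 2, 0):
pt = three_plane_point((0, 0, 1, 0), (1, 0, 0, 1), (0, 1, 0, 2))
```

The degenerate (None) branch is exactly the parallel-edge trouble discussed later in this post.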

This diagram gives an overview of the data structure additions for each three plane case (the manipulations implemented are detailed in the following section):

These three types of collisions are sufficient to calculate the straight skeleton of almost all random input polygons (see following video or image). The other cases are degenerate and are discussed in the next section.

Collisions must occur on the faces

One way to view the algorithm is that after the first event, we calculate a "new input polygon" at the given height and start again - so we might see the algorithm executing on a pile of contours, one for every event, fig. 1 below.

So far the collisions have been presented as infinite planes, however we are really colliding a set of three sided polygons or faces. For example the back edge (bold, 2) in the above figure changes its topology (red, 2) after some of the events. After event b (view the above larger) the face belongs to two volumes, and hence gives two polygons to collide against. This means that for each of the three faces involved in a collision we must ensure that the collision occurs within its bounds.

[diagram showing what happens when we don't bother with these checks]

There are two obvious places to perform this check:
• Store a list of events between polygons - After an event, find new events and remove old events involving each edge or bisector that's changed.
• Store a list of events between planes (planes don't have bounds). Before evaluating an event, ensure it occurs within all three faces. This is the simpler method I implemented.
One trick here (as Felkel describes) is to realize that if two adjacent edges are involved, then the collision is within the bounds for those faces. The faces are adjacent on a surface of the skeleton volume, so they intersect only along this line (and any intersection must also occur on this line). These adjacent edges form Felkel's "bisectors", and as Eppstein explains, this reduces to a ray-casting problem with very fast known solutions. Each of the three types of collisions identified involves a pair of adjacent edges colliding.

These "three sided" faces that we're colliding are sometimes triangles, but sometimes they are unbounded, when the adjacent edges move away from each other. (They are triangles with infinite area!)

Floating point calculations sometimes miss collisions that we expect, topologically, to happen. However we can increase the chances of getting a topologically sound result if we increase the margin (epsilon-error range) until we catch all collisions, even if we pick up some additional errors. There are two places where we need to be careful - in checking that a collision really happens on an edge's face and in finding events that happen "at the same height" or "in the same place". I use a cylinder-shaped volume to accumulate events that occur within a given height range and within a certain distance of a centre point.
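A sketch of that cylinder-accumulation (the tolerances and greedy grouping strategy here are mine, for illustration):

```python
def cluster_events(events, height_eps=1e-6, dist_eps=1e-6):
    """Group events (x, y, height) into cylinders: events join a
    cluster when they are within height_eps of its first event's
    height and within dist_eps of its (x, y) centre."""
    clusters = []
    for x, y, h in sorted(events, key=lambda e: e[2]):
        for c in clusters:
            cx, cy, ch = c[0]
            if (abs(h - ch) < height_eps
                    and (x - cx) ** 2 + (y - cy) ** 2 < dist_eps ** 2):
                c.append((x, y, h))
                break
        else:
            clusters.append([(x, y, h)])
    return clusters

events = [(0, 0, 1.0), (1e-8, 0, 1.0), (5, 5, 1.0), (0, 0, 2.0)]
groups = cluster_events(events)  # three clusters: two co-sited events merge
```

Each cluster is then resolved as one degenerate many-plane event rather than several near-coincident three-plane events.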

Collisions between many edges

In a degenerate case, more than three planes can meet at one point. This happens in the center of regular polygons, as well as examples such as the following image (where five or eight faces meet single points). Here I present a resolution algorithm that processes pairs of adjacent corners, for any number of edges meeting at a point.

It is important that these occasions are detected and treated with respect, as we assume that the linked-list of corners and their locations form a simple polygon (when projected down onto the input plane). This takes a little patience when working with floating point calculations. Luckily we can generalise some rules for handling point collisions, whether three, or any number of planes meet at a point.

The collision resolution technique takes a set of planes that meet at a point as input. It processes these into lists of chains of adjacent corners that reference these edges. While constructing these chains we perform the face collisions to determine which (if any) corner references the correct portion of the edge.

Chains consist of ordered lists of edges meeting at a point. If the edges are adjacent, they are in the same chain.

Remember that one edge can be referenced by more than one corner, when there is more than one "peak" in the output roof shape.

In figure 1 we see the input structure - a chain of edges abc and their associated corners (dark blue circles). We now process each pair of edges that are adjacent (orange lines) as in figures 2 and 3:
• We remove each of these corners from the master-list. (note that inter-corner pointers, green arrows, don't need to be adjusted).
• We add in output edges to the skeleton.
Then we process the non-adjacent edge pairs as in figure 4, adding in a new input corner and finally adjusting the inter-corner pointers, (cutting out the internal corners). However non-adjacent edge pairs take a little more thought as multiple chains can collide at one point.

As the above shows, once the intra-chain (adjacent) edge pairs are processed (figure 2, above) we can then move onto inter-chain collisions (figure 3). We still process corners by pairs, but this time we process the three pairs that are non-adjacent, the red, yellow and orange edges in figure 3. We order these around the collision point (in the unweighted case this can be done by the angle any corner on the chain makes with the collision point, but see later comments on the weighted case), and process the pair made by the first corner in the first chain with the last corner of the next.

For each of these pairs we add one additional corner that is adjacent to each edge. This corner is at the location of the collision, and cuts out the inter-chain corners from any edge loops.
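In the unweighted case the angular ordering might be sketched like this (using each chain's first corner as the representative point - a choice I've made for illustration):

```python
import math

def order_chains(chains, collision):
    """Sort chains counter-clockwise around the collision point by
    the angle of each chain's first corner. Each chain is a list of
    (x, y) corner positions projected onto the sweep plane."""
    cx, cy = collision

    def angle(chain):
        x, y = chain[0]
        return math.atan2(y - cy, x - cx)

    return sorted(chains, key=angle)

# Three single-corner chains around a collision at the origin:
chains = [[(1, 0)], [(0, 1)], [(-1, 0)]]
ordered = order_chains(chains, (0, 0))
```

With the chains in circular order, "first corner of one chain, last corner of the next" is well defined.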

Horizontal roof edges

When evaluating rectilinear shapes, such as the above cross-shape, we often see horizontal skeleton lines. This is another degenerate case that we often see alongside the >3-planes-at-a-point case.

We can note that before processing events at one height, there should be no adjacent edges that form a horizontal edge in the skeleton. Also that after processing there should be no such edges.

Because every horizontal skeleton edge must have an event at its start and end, we can be sure that if we process all the events at one height together, we will have closed all horizontal edges. [edit: this assumption is wrong - see later post on ambiguous cases]

As all co-heighted events occur simultaneously, there is no interaction between events and the algorithm remains deterministic. There is no way to sort co-heighted events, so we let them occur in an arbitrary order.

After one event occurs we may be in the situation where two consecutive, adjacent edges form a horizontal edge. Because this edge has no height, there is no way to determine which events it causes can occur first:

In the above figure, there is a collision at x, then at the same height, faces a,b,c &d collide at y. Face a should be rejected because we're outside the bisector. However as the edge forms a horizontal skeleton line, the bisector is poorly defined (green boundaries), so we would have to accept that all four faces collide, resulting in deformed output.

We do not track events created by consecutive horizontal edges, safe in the knowledge that after we've dealt with all events at the same height there won't be any left. In a similar way, we can't ascertain whether collisions between planes occur within the current face boundaries.

For this reason all face-collision checks for events at one height are performed before any are evaluated. I do this by caching a list of chains. This brings with it the problem that the corner used in a chain may have been superseded, or "cut off" from its intended collision (see figure below) by an earlier event at the same height.

The resolution strategy here depends on the chain type:
• If the chain has more than one corner in it, it represents the intersection of two faces, so there will always be a corner lying on this intersection whose adjacencies will lead to the correct corner.
• If the chain is of length 1, we must resort to more geometric calculations to project all referenced corners onto the input edge, and choose the preceding corner.

In the above figure 1, faces a,b,c,d and e collide at point x. We find their positions in the given volumes by locating the corresponding corners. As co-heighted events occur in an arbitrary order, by the time figure 2 occurs the corners (u & v) are no longer valid.

In the case of u (or any chain with 2+ edges), we can use the following corner's previous corner to find u'. This is because the corner references two edges used in the collision, so must take part in it (and horizontal events are independent - they can't affect each other's edges or validity).

This trick won't work to find v' as it is part of a chain of length 1. We project all corners that reference the edge onto the collision line. We then use the point (v') that is just before the projected collision as the previous corner. We ignore any points that reference the edge before the found corner. This is valid as all these points lie on the edge's plane, and all the points occur at the same height, so their order is defined by their projection. There is no question that the event occurs as there is no interaction between co-heighted events.
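The projection step for length-1 chains might look something like this (a sketch only - the function and parameter names are mine):

```python
def preceding_corner(corners, edge_start, edge_dir, collision):
    """Project each corner onto the edge's direction and return the
    corner whose projected parameter falls strictly before the
    collision's parameter (the closest such corner)."""
    def param(p):
        return ((p[0] - edge_start[0]) * edge_dir[0]
                + (p[1] - edge_start[1]) * edge_dir[1])

    t_coll = param(collision)
    before = [c for c in corners if param(c) < t_coll]
    return max(before, key=param) if before else None

# Corners along a horizontal edge; the collision lands between two of them:
corners = [(0, 0), (2, 0), (5, 0)]
prev = preceding_corner(corners, (0, 0), (1, 0), (3, 0))  # (2, 0)
```

Because all the candidate corners lie on the edge's plane at the same height, this 1D ordering along the edge is enough to pick v' unambiguously.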

By changing the weight or gradient of an edge, we can drastically change the output shape. There is even the possibility of giving the edges a negative (outwards) gradient, to create "overhangs".

No modifications are required to the given algorithm to allow these gradients.

It is fun to note that negating the gradient (reversed polarity result in the figure below) produces the same shape as reversing the polarity (loop direction) of the input, but with a different final meaning - no longer is the result the top of the roof-shape (a surface whose normals point upwards), it becomes the ceiling of the building from the inside (a surface whose normals point downwards).

When edges have negative gradients, it becomes much harder to guarantee termination of the skeleton - it may define an infinite volume! Sometimes it will terminate, others not. Determining which polygons will terminate and which won't would be a good future research direction. Of course it is always possible to specify an arbitrarily high horizontal plane, which terminates the algorithm.

The following animation places several weighted straight skeletons with varying gradients on top of each other to create some primitive architecture.

Determining the order of chains

Determining the order of corners within a chain can be accomplished by following adjacency pointers to other corners. But between chains it becomes harder. In the weighted skeleton case we must project the corners to the collision height, before determining the order.

Angles around the intersection point sum to less than a full circle. If they didn't, the chains would have overlapped and collided at a lower height.

If two skeleton edges are parallel (here I mean edges going in the same direction, not opposites), the intersection of their planes with a third is no longer a point (it is either a line or not an intersection at all). This leaves the straight skeleton poorly defined. This condition can occur in two places:
1. the input polygon
2. when intermediate edges are removed, bringing two parallel edges together
This also causes much "instability" or chaotic behavior in the algorithm. In the unweighted case it is possible to define the collision of two consecutive parallel edge-planes as the gradient vector starting from the intersection point. This is not possible if the edges have different weights, see the following video as the weight changes:

There has to be an artificial resolution method for parallel consecutive edges. The simplest is to remove one of the offending edges - this choice can be arbitrary and is justified by the limiting case (as the angle between the edges approaches π from either side, we resolve to one edge or the other). This works in the first case, parallel consecutives in the input polygon.
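Detecting the offending pairs is just a cross-product test on consecutive edge directions; a sketch (the epsilon and the merge policy described above are separate concerns):

```python
def parallel_consecutive(polygon, eps=1e-9):
    """Return indices i where edge i and edge i+1 of a closed polygon
    point in the same direction (cross product ~ 0, dot product
    positive) - i.e. the redundant-vertex cases that need merging."""
    n = len(polygon)
    hits = []
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        ux, uy = bx - ax, by - ay          # direction of edge i
        vx, vy = cx - bx, cy - by          # direction of edge i+1
        if abs(ux * vy - uy * vx) < eps and ux * vx + uy * vy > 0:
            hits.append(i)
    return hits

# A square with a redundant midpoint on the bottom edge:
poly = [(0, 0), (1, 0), (2, 0), (2, 2), (0, 2)]
idx = parallel_consecutive(poly)  # [0]
```

Note the positive-dot-product condition: opposite-direction edges (a spike) are a different, legitimate configuration and must not be merged.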

In the above figure, edges a and b are parallel and have the same weight; they are not consecutive at the start, but become consecutive after an event. Theoretically any line lying in the plane of a and b's faces could become the bisector after the collision, but it must be chosen consistently throughout the algorithm. As I'm interested in clean 3D solids, the outcome in the figure is the most desirable. By merging the two faces together we can see that the input to the algorithm shouldn't have been two separate edges a and b, but rather one edge referenced twice. This is the approach I take.

On the subject of chaotic elements of the straight skeleton, another situation deserves to be pointed out. When the bisectors of concave (reflex) corners pass each other ("split events" in Felkel's terminology), there is often a sudden shift in the topology of the skeleton.

Adjacent and horizontal faces form an ambiguous case

When the skeleton is weighted (a different gradient on each edge), there is an ill-defined condition:

In figure 1 we see the input, a weighted skeleton where edge a has a lower gradient than edge b. These two edges meet to form a horizontal edge. At this point there are two resolution possibilities that make topological sense: 2a, letting the lower gradient take priority, and 2b, letting the higher gradient take priority.

[edit: this is wrong - 2a and 2b, above, are equally valid] A sensible implementation should choose to give the steeper edge priority (figure 2a), for two reasons:
1. If we look at the limiting case as edges a and b become parallel, as we reduce the angle between them, we get output topology favoring the steeper edge.
2. Topology 2b leads to the situation where one horizontal event causes another, breaking earlier assumptions in the implementation. This would mean that we could no longer order events by height.
However this is not consistent with the concept of the skeleton being a "wavefront".

Aichholzer et al. do not describe the weighted skeleton, and so do not come across this situation in their text. I'm not aware of a description of the weighted skeleton that avoids this ambiguity.

Building faces

Once the edges of the skeleton are known, it is desirable to be able to construct its faces (we might want to tile our roof). There is one face for each input edge.

The implemented algorithm keeps a single graph (a hash map) of all edges in the skeleton, and, for each input edge, a list of the corners that once referenced that edge. This information is sufficient to reconstruct each face:

As there may be parallel edges with the same gradient, the algorithm needs to be a little smarter:
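The basic reconstruction can be sketched as chaining the skeleton arcs filed under one input edge end-to-end until the loop closes. This is only the simple version - it assumes the arcs of a face form a single closed loop, and would need the extra care mentioned above for parallel same-gradient edges. The function name and data layout are my own:

```python
from collections import defaultdict

def chain_segments(segments):
    """segments: unordered list of (a, b) point pairs forming one closed
    loop (the arcs of one skeleton face). Returns the points in loop order."""
    touch = defaultdict(list)  # point -> segments touching it
    for s in segments:
        touch[s[0]].append(s)
        touch[s[1]].append(s)
    loop = [segments[0][0], segments[0][1]]
    used = {id(segments[0])}
    # a closed loop of n segments has n distinct corners
    while len(loop) < len(segments):
        cur = loop[-1]
        for s in touch[cur]:
            if id(s) in used:
                continue
            used.add(id(s))
            loop.append(s[1] if s[0] == cur else s[0])
            break
    return loop
```

Running this over the arc list of each input edge yields one polygon per face, ready for triangulation or roof tiling.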

### mudslingin'

The other project I'd forgotten about (until I read this paper) was Mudslingin', the second-year undergrad (now 5 years old), 5-person "group" project. It's a two-player, network-enabled game in which you try to drown your opponent by throwing balloons full of mud or water at them. Your opponent has bombs to blow up your dams - great fun for the whole family. It has quite a lot of features now I look back on it, even if it was rough around the edges.

There are lots of bugs, many to do with a 40-minute conversion into the webstart format. Sound, save and networking probably won't work in the webstart edition. The options menu seems to freeze everything. You also need another person to play against! Keys:

Player 1: Up, Down, Left, Right
Player 2: K,L,O,M
Quit: Escape

Go "start game" then pick one of the levels like 'blood fountain'.

Files are here, including a group report and the source files. I'm releasing them under the WTFPL. Spot the (bizarre) Perl server!

I think we came top of the year, but my memory is hazy...

Now this is published, I can pack my computer up so it can go into storage.