Wednesday, December 02, 2009

skeleton project progress


I can't say a lot about the skeleton project*, however, we got a very rushed paper together that was rejected, but we got some nice renders (and a whole bunch of feedback) on the work. A few sneaky pictures follow :) as ever, they were all made with the blender render template I've made, and rendered in yafaray. The basic idea is a straight-skeleton based rapid prototyping tool for architecture. At a stretch it could even be called procedural.

house of horrors

I was worried the paper-reviewers would complain about the artsy backgrounds to the renders, however that was the one part of the work that wasn't panned. Most of the backgrounds were my own photos, many from this flickr set.

multi

Something that took longer than it should have, but really makes these renders special, is the tiling. Each tile on the roof is a polygon, and it basically involved writing an old-school rasterizer so each roof edge could position its tiles. Never got around to creating tiles for the roof edges tho - so it looks like they are all made out of cardboard, which is really cute and makes them look like real architectural models.




*that's one thing about working with collaborators, I've got to protect their investments as well as mine. Yes that means we're going to make this project even more awesome and try again.

Thursday, November 19, 2009

higher dimensional skeletons


from Weighted skeletons and fixed-share decomposition ( Aurenhammer | 2007 | ps | doi )

Higher dimensions

The concept of weighted skeleton (or weighted medial axis) is not limited to two dimensions. Given a convex polytope P in E^d and an assignment of positive weights to its facets, a unique convex cell complex inside P can be defined, either by the respective shrinking process, or based on weighted distances to the hyperplanes that support P. In fact, the decomposition and optimality results presented in Section 3 and Section 4, respectively, directly generalize to higher dimensions.
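To make that concrete, here's a tiny sketch of how I read the weighted distance - the "time" at which a hyperplane sweeping in from a facet at speed w reaches a point - so the cell a point belongs to is the facet whose plane reaches it first. Toy code, my own reading, not the paper's notation:

    # Toy sketch: assign a point inside a convex polytope to a cell of the
    # weighted skeleton, reading "weighted distance" as (distance to the
    # supporting hyperplane) / weight.  Facets are (unit inward normal n, offset d),
    # so points inside satisfy n . x >= d.
    import numpy as np

    def weighted_cell(x, facets, weights):
        """Index of the facet whose weighted distance to x is smallest."""
        x = np.asarray(x, dtype=float)
        times = []
        for (n, d), w in zip(facets, weights):
            dist = np.dot(n, x) - d    # Euclidean distance to the supporting hyperplane
            times.append(dist / w)     # heavier weight => faster sweep => smaller "time"
        return int(np.argmin(times))

    # the unit square in 2D, written as inward normals and offsets
    facets = [((1, 0), 0), ((0, 1), 0), ((-1, 0), -1), ((0, -1), -1)]
    print(weighted_cell((0.4, 0.25), facets, [1, 1, 1, 1]))   # -> 1, the y = 0 facet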
Thusly we may draw:

dimensionality

Fun thing here is to compare what happens through the dimensions and see how high you can go before your head explodes. Also what the procedural modeling ramifications of each step are.

Saturday, November 07, 2009

very dull video of the sony cw



Here's a long, uninteresting video of the Sony Vaio CW booting Ubuntu (64 bit; wifi wasn't a problem, but the video took a little work - nVidia drivers how-to in this post), then it shows the splashtop, an instant-on web browser (that takes about as long to boot as the other operating systems take to come out of standby). This works well (flash videos play fullscreen), and I wave the screen around a bit to show the not-so-great viewing angle, but great brightness compared to my old powerbook laptop. Finally we see it booting windows 7, which is very pretty and does everything it should do.

What I didn't manage to show was the rattling battery. There's half a millimeter or so of space around it, and it doesn't lock into place properly. You don't notice it when you're using the laptop, but it's annoying when you're packing the laptop up.

Still really liking this 'top. A bit too much shiny-toy plastic, but generally responsive and enjoyable to use.

Video was taken inside, with indoor lighting, on a canon 500d. Sorry for the shaky bits.

[edit:] One year on, and generally I've loved this laptop. It now runs only windows (*nix is for work, windows for play), for video games. But just before the (international!) year's warranty ran out I had to get the screen and the power-socket on the laptop fixed. The screen was graying out when you moved the monitor and it suffered from the classic Vaio fault of needing the power lead held to one side to charge. I was, however, impressed by the speed and ease of the returns process (apart from the premium rate number you had to call to arrange collection).

Thursday, October 29, 2009

sony cw "notebook"/laptop first thoughts

[video here]

It's a bit early for a full review, but here are some of my first thoughts about this shiny machine. I decided I couldn't justify a mac this time, so I got a Sony CW with 4Gb memory, an nVidia GT230M and the Core 2 Duo T6600 2.2G. This is a replacement for my 12" PowerBook G4, so there'll be some unfair comparisons to a five year old machine. After not giving us Java 6 for PowerPCs and having the mac repaired 4 or 5 times, I didn't think all the shininess was worth the 50% price premium. For a 15% premium I can get a Sony machine! This was written roughly in the order that I experienced it.

sappy or appony?

IMG_0443
old mac on the left, new sony on the right (bigger)

Unboxing - okay, so sony isn't apple. The box was a little goofy - the logo on the top is upside down when you figure out how to open it. The box then unfolds like a crab, before throwing the power adaptor and battery onto the floor. A couple of slim manuals, but no CDs in the box - guess there's no chance of a windows reinstall then. Academic Alliance (aka "the please don't use linux fund") to the rescue.

IMG_0407

IMG_0415
That's all there is in the box. On my bed ;)

I managed to fumble and push the "web" button on the laptop after installing the battery. After a couple of seconds pause, it booted to the splashtop (a linux distro running firefox?) very quickly. The browser was quite active and functional - happily playing youtube videos. Hopefully this will make running Ubuntu easier as there have to be drivers around.

The keyboard is great - not quite enough travel on the keys but very spacious and mac-esque (except the function and control buttons have switched place again!). The key-letters look a bit stuck on, and I'm sure they use too many fonts over the front of the computer for my typographic sensibilities. Just the usual range of garish stickers next to it. I wonder what it'll take to get them off... I've just come back to this paragraph, from the future, to say I really like the keyboard :) Since this is going to be a coding machine, that's a good thing too.

Something strange about the delete rate - it removes entire sentences with a light touch. I'm sure there's a setting somewhere to fix it. Ah yeah, in the Vaio settings, it's turned up all the way...


IMG_0452

So just the usual keys on the board, plus four little ones under the screen (photo above) - a display-off button that seems to kill the (LED) backlight, a "Vaio button" that when I pushed just now started installing Sony's Media Gallery software, which then started probing my network for new media... won't do that again. The "web" button in windows seems to close the tab that I'm using to write this blog post and take me to Chrome's homepage. When the computer is off, the web button starts the *nix splashtop for quickie web browsing sessions. I think they can all be remapped.

Trackpad is a little unresponsive compared to the mac, and doesn't seem to do multitouch (edit: thanks anon commenter - it does multitouch - it's just well hidden, and no nice two fingered scrolling) gestures, but decent enough. The right hand side of the pad does scrolling, but it seems a bit insensitive (and brings up a really ugly mouse cursor in windows). In Chrome scrolling jumps down the screen, but in IE it's smooth, though it feels like there's quite a bit of lag.

IMG_0426

Wifi reception is great, the splashtop found a bunch of networks I'd never seen on the mac and the reception seems decent enough. Seems to stay quite cool too, as long as you don't spin that graphics card up! There are some quirks with the splashtop setup - the keymapping is wonky (@ and " are reversed, but since I alternate between keyboards with both conventions, it's easy enough to ignore) and if your laptop is hibernating (it doesn't work at all in standby), it doesn't record your history. Also (as noted in the comments) there's no brightness control.

After a restart the Windows 7 desktop came up and there was a nice surprise - google chrome was installed by default! Take that European Commission :)


Some random sony rubbish, like the choice to get the "Sony" theme on the default home page (is this the reward from google for bundling with chrome?).

Windows 7. Not really a fan of windows in general, but it hasn't got in my way yet. It even looks quite pretty. Apart from a crash the first time it booted (well, I did shut the lid as it was booting - I had to look at the shiny case one last time), I've rather enjoyed it. Still, I'd rather not run the thing until the first service pack (50Mb of urgent updates already...), will get going on Ubuntu later today. I think windows is still running NTFS, so should be able to share a big partition between the two.

Next up was putting the graphics card through its paces. It's an nVidia GT 230M behind that beast of a vent on the left hand side; since I do a fair bit of graphics I thought it was worth the $100 upgrade to a card that could at least pretend to do 3d. Best thing is it can run Cuda code - that might be useful in the future. DirectX wasn't installed (strangely enough - surprised it isn't part of windows these days). The Medusa demo from nVidia ran at between 7 and 20fps. Tho mostly 7fps. I guess this would improve with some updated drivers, and after nVidia has a chance to figure out Windows 7. But it still looked mighty impressive on screen.

Having some issues with the charger - after I installed some windows update it complained about low battery, and however I jiggled the charger, and whatever I plugged/unplugged, it wouldn't charge. I brought it into the kitchen, and restarted and it seems fine now. Ah, got it - the led on the power brick takes a long time to go out, just needed to find a tighter fitting wall socket!


IMG_0429

Screen is impressive, the LED backlight makes a real difference, and the brightness is comparable to the screen on my desktop machine. Not sure I like the glossiness, I suspect it's a ploy to get you to buy a screen filter, but it's what all the cool kids are using so I guess I'll have to get used to it. It's going to take a few weeks before I know whether I like the widescreen format, but movies look awesome on it! The HDMI out is enough to drive my 24" monitor with the right cables.


IMG_0436

Hmmm.... all those little sticky labels that come on a new laptop have very different levels of stickability. The energy star and windows 7 ones came off easily, the nVidia one was a bit more stubborn, but the intel one left a nasty glob on the glossy surface that's going to take some detergent to get off properly.

So already it's covered in fingerprints, but that was expected, so I'm not too concerned. My gear tends to get used, so I'm not going to complain about a few prints. It's also picking up white cat hair like there's no tomorrow, but I guess that's my fault for not ordering the white one. Who knows why I would want the red one - chronic nose bleeds?

There's not much bloatware at all - just the usual MS office trial and a couple of antivirus trials. And there are actually useful apps like acrobat reader and Chrome already installed. But there is a fair bit of marketing pomp. I'm now fed up with the Vaio logo (it's everywhere) and there's a strange button on the start bar that brings up a screen that announces that it's a "Sony Vaio CW series with Vibrant color inside and out", and tells me how much it weighs. I don't think it's got a weight sensor (am now pushing on it to make sure).

Sleep works nicely - it suspends in a couple of seconds and comes back when you press the power button.

I can't quite get used to the two USB ports on the left hand side being toward the front of the computer. I'm used to holding onto the computer there. I guess they got moved forward to make room for the massive (gpu?) vent on the left side at the back - this baby's much wider towards the back than the front, which makes for nice photoshoots, and makes it feel smaller than it is. It also slopes the keyboard toward you a little, but this isn't really a bad thing.

Overall the build quality is decent, not quite a unibody mac, but I wasn't expecting that. It looks like it'll take the shocks and spills that I'll no doubt give it. Everything feels very rigid, with a bit of give in the wrist rest, but not a lot. Speakers aren't up to much. System is real quiet, so you really notice when that CD/DVD drive (there's no way I was paying for a Blu-ray drive) spins up.

Final word: so far it's exactly what I hoped for. With a decent screen :)

Comedy quote from the control panel - "Using a white color for your desktop saves power" - Hold on, you're telling me that it uses less power to emit more energy?! This LED technology is going to take some getting used to!


[edit:]
Now have it set up nicely to boot Ubuntu as well as win7!

Tuesday, October 27, 2009

symmetry papers

Symmetry is a very much under-explored area in procedural modelling. Here are some papers talking about things you can do with it.



Constructing Regularity Feature Trees for Solid Models ( doi | springer | 2006 )
M. Li, F. C. Langbein and R. R. Martin

"Reverse Engineering" is a phrase I normally associate with software engineering, but in the geometric-meaning of the word, it's used to describe recovering the structure from a model. Typically, given a mesh, you want to find out which primitives were combined to create the mesh.

This paper takes the approach that you have a 3D mesh and want to recreate the constructive-solid-geometry primitives and operations used to create that mesh.

The idea is to identify recoverable vertices, edges and faces that the original mesh hints at. These might be external to the volume defined by the mesh, or internal. This is done by intersecting existing features (such as edges or faces) to recovered features (such as vertices or edges).

There's an additional note that gives more credit to Leyton's book.



Partial and approximate symmetry detection for 3D geometry ( pdf | web | doi | video | 2006 )
Niloy J. Mitra, Leonidas J. Guibas, Mark Pauly


The idea here is to define each symmetry as a set of parameters that we can treat as a point, and then cluster, to reveal the strongest symmetries. For example the equation of a line when looking for 2D reflection symmetries. These parameters can then be clustered and analysed for density to find the most common symmetry constraints. This would give an O(n^2) algorithm in the number of points analysed, however by quickly rejecting pairs based on the local features (eg: the normals around a point aren't symmetrical over the proposed symmetry line), the number of points to compare is reduced by an order of magnitude.
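As a concrete example of the 2D reflection case, here's a toy sketch (mine, not the paper's code) of the parameterisation: every pair of sample points votes for a mirror line, written as (theta, d) where the line is the set of points p with p . (cos theta, sin theta) = d, and clustering the votes picks out the strongest mirrors.

    # Toy sketch of the 2D reflection case: each point pair votes for its
    # perpendicular bisector as a candidate mirror line, parameterised (theta, d).
    import math

    def mirror_parameters(p, q):
        mx, my = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0   # midpoint lies on the mirror
        nx, ny = q[0] - p[0], q[1] - p[1]                    # mirror normal is the p->q direction
        length = math.hypot(nx, ny)
        nx, ny = nx / length, ny / length
        return math.atan2(ny, nx), mx * nx + my * ny         # (theta, signed distance from origin)

    points = [(0, 0), (2, 0), (0, 1), (2, 1)]
    votes = [mirror_parameters(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    # clustering / density estimation over 'votes' picks out the lines x = 1 and y = 0.5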

Mean-shift clustering is used to merge nearby points. This walks uphill on a landscape of Gaussian-splatted data points. Because the Euclidean symmetry space used is 7 dimensional, there is little chance of samples landing exactly on the maxima. Clustering is needed because a noisy input mesh is rarely perfectly symmetrical.

To identify mesh patches from the located point pairs, corresponding pairs of input points are grown outwards while the symmetrical surface is within a given tolerance.



As the number of possible symmetries increases, it is desirable to compress the required symmetries to a minimal set. For example a "translate by 10" symmetry should be expressed as two applications of a "translate by 5" symmetry. This process is referred to as "finding a reduced symmetry basis" (frame image, above).

Convincing results, such as the decimation of the above castle mesh to 14% of its original size (transparent boxes are removed data), make this an interesting technique.




Discovering Structural Regularity in 3D Geometry ( pdf | doi | web | 2008 )
Mark Pauly, Niloy J. Mitra, Johannes Wallner, Helmut Pottmann, Leonidas Guibas

This is an extension of the previous paper (Partial and approximate symmetry detection for 3D geometry, above), presented a few years later. They use the symmetry matching techniques developed to reconstruct data from scattered point samples.

Only scale, rotations and translations using one or two parameters are considered - this assumption nicely decreases the number of degrees of freedom to compute. By identifying repeated patches, and accumulating the possible transforms, it is possible to identify the parameters of the transform, even in noisy data. The paper discusses these as "generator functions", that take one (below left four examples) or two (right three examples) parameters, as the basis for symmetry.





A set of random patches from the surface are grouped according to their local feature descriptors (polynomial fitting of osculating jets). Each of these is then locally aligned, and analysed in a cunningly designed space such that their two parameters will be revealed in a lattice (below in one dimension). By plotting all the found transforms, the most frequent transform (T) becomes the most plotted location. We know the identity transform must be a member of the set of generator functions, and this limits the search space. An energy minimisation function/RANSAC is then used to identify the lattice.

The results include reconstructing a castle and the Colosseum from partial point cloud data.



Symmetry Detection Using Line Features ( pdf | doi | web )
M. Bokeloh, A. Berner, M. Wand, H.-P. Seidel & A. Schilling



The above papers cluster symmetries to identify them. However if two different symmetries are clustered together in symmetry space, it becomes very difficult to separate them.

The routine takes a cloud of points-with-normals and creates feature identifiers, then runs a couple of loops through these identifiers looking for similar clusters of points to call symmetrical.

The first step Poisson down-samples the input cloud to find points with one linear degree of slippage locally. That is, they search for extruded (elongated) features, such as the edge of a window frame or a column. These are locally clustered into bases, non-slippable graphs of local features. A RANSAC-like algorithm is then used to find similar bases over the model, while flipping a couple of the coordinates to account for mirroring. The features' curvatures are compared between bases to find compatible bases. Region growing maximizes each of these regions, and a geometry matching stage ensures that the base abstraction really matches the input point set. Because of the simplicity of the bases, and their tendency to be localized, many comparisons can be made.

Once symmetry has been detected, the usual things can be done - holes filled, missing sections reconstructed, and data compressed.

The system doesn't really find nested symmetries, just repeated instances or flipped instances.

Very curiously, in future work these guys mention
In future work, we would like to build morphable statistical models of symmetric parts to better represent subtle variations beyond the accuracy of the region growing algorithm
which is one of the thoughts I've been having a lot recently for generative, rather than reconstructive, procedural modelling.






Tuesday, October 06, 2009

accelerating the straight skeleton



I've been investigating ways to accelerate the straight skeleton computation. In my implementation, the main problem is finding the next point where three sides of the "roof-shape" collide. In the below there are two such points ((abc) and (adc)), but the complexity becomes a killer on larger shapes. Edges are removed, and added (by some of my contortions to the algorithm) as the algorithm continues.

examining_collisions

0) Naive! Best start at the beginning: what my code does at the moment is intersect all pairs of adjacent edges against all other edges. This takes lots of time (n^2) and space (n^2). There's a rough sketch of this at the end of the post.

1) Conga Lines. As Eppstein points out (Raising Roofs, Crashing Cycles, and Playing Pool (doi | web | informal | Eppstein, Erickson | 1999 )), a conga line data structure provides a great way to track closest pair problems (not the only way tho!). In this situation the pair is made up from a bisector (two adjacent edges that collide to form a ray) and a quad (the face that the bisector collides with). The "distance" between these two is the height at which they intersect.

The key observation is that we can form O(log n) sets of chains of things. Within each chain each object is chained to its nearest object. Then inserting an item takes O(n log n) and removing takes O(n log^2 n).

2) Divide and conquer. Given the (below, left) dark grey input shape (and blue corners).

segment

We can subdivide it into a set of cells, each of which is a weighted skeleton, such as the one shown in blurry green. The image on the right shows one such cell in 3D. Interior edges have a zero (vertical) weight, shown in red. We can then find the next event by locating the lowest collision within each cell. Each of the above cells has only two edges, but I'm not sure that's an optimum number.

When events hit the boundary of cells, they are propagated into the neighbouring cells. If there are too many edges in one cell then the cell gets subdivided. This means we only have to consider local collisions when calculating the skeleton, and maybe pushes another log(n) into the complexity for the straight skeleton (I'm still trying to figure this one out, along with the fact that the weighted skeleton is ambiguous...).
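For reference, here's a toy version of the naive search (0, above) - not my actual implementation, just the flavour of it. Planes are stored as (n, d) with n . x = d, adjacency is assumed to run around a single loop, and all the face-bound validity checks are skipped:

    # Toy sketch of the naive event search: intersect the planes of every
    # adjacent edge pair (a "bisector") against every other edge's plane and
    # keep the lowest intersection point above the input plane.
    import numpy as np

    def lowest_event(planes):
        best = None
        n = len(planes)
        for i in range(n):
            j = (i + 1) % n                       # adjacent pair (i, j)
            for k in range(n):
                if k in (i, j):
                    continue
                A = np.array([planes[i][0], planes[j][0], planes[k][0]], dtype=float)
                b = np.array([planes[i][1], planes[j][1], planes[k][1]], dtype=float)
                if abs(np.linalg.det(A)) < 1e-9:  # parallel or degenerate triple
                    continue
                p = np.linalg.solve(A, b)
                if p[2] > 1e-9 and (best is None or p[2] < best[2]):
                    best = p
        return best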

straight skeleton index page

Monday, October 05, 2009

matte renders in blender

This post is in response to a professor saying that it is hard to get students to make nice (publishable) renders. It isn't! There are a few bits of theory you should know (colour theory, basic lighting setup, rule of thirds, global illumination) but you can mostly fake it by borrowing other people's work, such as this template for getting nice blender renders...

blender lighting demo


So getting a nice render out of blender takes a little tweaking. Here's a ready rolled blender scene and demo video for getting nice matte-white light-box renders with global illumination. Grab this template file here, and follow the video, or the following instructions.




First grab: blender, yafray and the gimp.

In linux, something like this:
sudo apt-get install blender yafray gimp

In windows, go to the websites and install (haven't checked that this works for yafray yet...)

Then start up blender. Load the template file. Delete the house-like thing you can see by right clicking on it, pressing the delete key, and answering ok to the pop up box. Import the object you want to render. Obj is an easy format to work with when there are no textures. File->import->Wavefront(obj). There's a tutorial on moving stuff with blender here; otherwise select the object you want to move and use 'control-shift-g' to bring up the control widget, then left click and drag to move it around. "control-shift-r" (who makes up keystrokes like that?) for the rotate widget, and "control-shift-s" for scaling. The viewpoint when editing (not the camera) can be moved using a combination of control, shift and alt together with the middle mouse button (or tutorial here). To view the scene from the camera's pov, use the '0' key on the keypad (or view -> cameras -> camera).

This is the beast-of-a-render-panel:



After setting the aliasing quality (1), and the percentage render size (2), you can choose the frame ratio (3), before rendering (4). The combo box directly below the render button ("Blender internal" by default) selects the renderer - in the example file it's set to yafray. At this point blender looks like it's crashed (if you're using yafray). After it's rendered (can take a while - go get a strong cup of something) push F3 with the output window selected to save in the format dictated by combo box (5).

You probably want to do a low quality render to check the details, before doing a full one (waiting 20 minutes just to find out that the top of the model is clipped is kinda annoying).

Once you've got the render, you probably want to open it in the gimp, and use colors -> adjust levels to get the exposure right for your medium (print, web etc...). Then save out the final image.

Thursday, July 16, 2009

orthogonal basis transform

basis_transform
Here's a trick I learnt from Strang - to convert (rotation/translation) between two arbitrary frames:
  1. convert both to orthonormal bases (A,B above), ignoring the translation component for now.
  2. use the inverse == transpose trick to find the inverse of the first transform (A^T), then apply (multiply by) the second transform (B)
  3. (...fudge the translation component (not shown)...)
It's even better if you want to axis-align something: you just need the transposed matrix. This got a post of its own because I've been told by several professors over the years to move it to the origin, calculate the x and y rotational components and multiply them together - this takes lots of multiplication, and you end up with a hideous zombie-like matrix.

The transpose is equal to the inverse only if it's an orthonormal matrix - projecting one vector onto another is a simple dot product. What's more, each axis in an orthogonal frame is independent, so that's all we need - 9 dot products, or one matrix multiplication.
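Here's a quick numerical sanity check of the trick (rotation part only, translation fudge omitted as above). Frames are stored as 3x3 matrices whose columns are the axes in world coordinates - my own convention, pick your favourite:

    # The transpose trick: with orthonormal frames A and B stored column-wise,
    # B @ A.T is the rotation carrying frame A onto frame B, and A.T alone
    # axis-aligns frame A with the world axes.
    import numpy as np

    A = np.array([[0, -1, 0],     # frame A: the world frame rotated 90 degrees about z
                  [1,  0, 0],
                  [0,  0, 1]], dtype=float)
    B = np.eye(3)                 # frame B: the world frame itself

    M = B @ A.T
    print(np.allclose(M @ A, B))                 # True: A's axes land on B's axes
    print(np.allclose(A.T, np.linalg.inv(A)))    # True: inverse == transpose for orthonormal A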

Monday, July 06, 2009

a couple of procedural facades

A few weeks back I put together these two as examples of how to create a procedural facade from a static facade image.


This image is from the fantastic image-stitching project at Washington, who were good enough to publish some hi-res street fronts. Charleston, South Carolina:





The second image is of the Willard Hotel - around the corner from the White House (image by dbking):


The Willard Hotel




Some of the issues that came up:
  • There is a limit to how far you can procedural-ize. For example, windows in the above were atomic units. Tricks like content aware image resizing (seam carving et al) might help do this automatically.
  • How do we tell if an element should scale or repeat with size? should some elements keep the same size and others be scaled relative to each other?
  • In the vanishing case (as the facade become small) which elements disappear first?
  • Negative space often plays a part (first met this concept in latex)
  • Many different schemes for layouts (see Java's Swing for lots of examples) - but only one data point (the input image), so which do we use?
  • Conflict between top-down (layout downwards - space subdivision at each step) and bottom up (components know how big they should be, eg a window, rather than a side of a house, knows how big it should be). Similar in principle to top down vs bottom up design arguments
  • Many features to balance, and everyone's goals are different - feels a lot like software engineering.

phd insanities


Every house I see has me trying to model it procedurally in my head. This is really driving me insane. Perhaps it's time for a holiday. How about the roof windows on the family guy house? Why doesn't it have any guttering? Why do some windows have shutters and others don't?

Then ASU has buildings like this - did the tree come first and they built the house around it? Did the tree grow through the gap?


integrated architecture

Tuesday, June 09, 2009

phoenix weather warning


Hmmm.... turns out that storm cloud on the google homepage that means "it'll piss down all day" in Scotland means "10 mins of drizzle" in the desert. Well...you know...one thousand words for snow and the rest of it.

Monday, June 08, 2009

semantic deformation

Last week I did a rather slap-dash job at presenting Semantic Deformation Transfer ( pdf | acm | 2009 ) by Baran, Vlasic, Grinspun and Popović. I didn't do a very good job, but here're the slides:





There are typos in the document. It says re-encoding the base pose doesn't give the base pose, when the paper says it does. I think I meant that they don't show re-encoding an arbitrary input pose.

The videos are missing from the presentation, they can be found


and here:



Sunday, June 07, 2009

Spotify


Am a bit of a fan of spotify and am just jotting down some experiences of using it abroad. I signed up while in the UK, and was happy enough with the service (even if they kept increasing the number of adverts). It worked through the notorious Glasgow University proxy (after some tweaking) and had a large enough selection of music for me (if not my obscure-scandinavian-rock-loving brother).

When I moved to Arizona, spotify continued to work... but the adverts stopped! So I'm getting the 10 pound per month premium service for free. (Even though it doesn't seem too hard to circumvent, even from the UK.) This is really strange - if I'm on holiday why not just serve me British adverts? I guess they're preparing for their US launch.
 
The service abroad stops working every two weeks. But you just need a friend to log in for you (or know the correct tunneling incantations) to log in via a UK IP address (see notes on the notorious Glasgow University Proxy).

Tuesday, May 26, 2009

misc. architecture papers

Some of the recent papers I've browsed on procedural urban modelling and tangential subjects.



Detailed Building Facades ( doi | pdf | ieee | 2008 )
Dieter Finkenzeller

This is a procedural system that uses two separate geometry trees, one for the coarse model and one for the style.

The coarse system starts with a floorplan - this comprises a concave set of abutting polygons that represent the main building, a wing or a balcony. Apparently the concave condition is required when generating the roof. These are combined using union operations.

A floor plan like this is created for each floor of the building. At each height the user has the option to add, remove or subdivide one of these polygons (to add a balcony, for example).

Cornices (decorative moulding strips) are extruded from a sequence of paths and curves. Consoles are blocks placed along the cornices, that you can stick arbitrary meshes (gargoyles!) onto. Cornice cross-sections are defined using a turtle-like grammar. This is also used to define the perimeter of doors and windows.

Roofs are either flat, or constructed using a ridge-placing algorithm. The single, central ridge is defined, and each corner of the floorplan is connected to the nearest end of the roof. The straight skeleton is rejected as it is unique (not that I'm biased towards that skeleton or anything ;) - however all the shapes described could be produced using the weighted straight skeleton, and it would have worked on concave shapes.

The second hierarchy of geometry is for the style, the finer components, such as wall subdivisions or windows/doors (windoors) (I've visited this several times, and the abstraction is going to be quite useful, and can be extended to the general idea of a border/frame). The different components of the windoors are created by a multitude of cornice and frame elements.

After the entire semantic tree is produced, it is traversed by a geometry engine that creates polygonal output.

Inclined walls aren't supported, and a lot of user input is required to define the features of each floor of the building.




Affective scene generation ( pdf | doi | acm | 2006 )
Carl Hultquist, James Gain, David Cairns


The idea here is to infer, from adjectives, the parameters to a procedural system.

Merry et al. apparently couldn't get it to work (but didn't show any results), and there's a thesis by Polichroniadis (cl@cam.ac.uk) that describes a similar method for animation.

There are two massive spaces involved
  • The descriptive space - a set of adjectives, one for each axis of the space, and a value for each axis.
  • The model space - one axis for each parameter, and a value for each axis
The game is to find a mapping between these spaces. It's likely that the mapping is different from person to person.

The approach to solving this is via learning algorithms, in particular a neural network (of the radial basis function network variety - a one perceptron deep network).

The procedural world is generated using:
  • Landscape: Perlin noise
  • Trees: meshes
  • Sky: Coloured based on time of day
  • Clouds/Rain - Perlin noise clouds.
Current controlling adjectives are wet, sparse, tropical, cloudy, light, mountainous and undulating. Users are shown images and have to assign a real (0..1) value to each adjective for a set of test scenes. This trains up the neural network. This seems to work reasonably. There wasn't much more to this paper, except the results, above.
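To give a flavour of the mapping (this is my own toy sketch of a radial basis interpolator, not the paper's system - the adjective and parameter names below are made up), fitting adjective ratings to procedural parameters might look like:

    # Toy Gaussian radial-basis mapping from adjective ratings (0..1) to
    # procedural parameters, fitted from a handful of user-rated example scenes.
    import numpy as np

    def fit_rbf(adjectives, parameters, sigma=0.5):
        X = np.asarray(adjectives, dtype=float)   # one row of ratings per example scene
        Y = np.asarray(parameters, dtype=float)   # matching procedural parameters
        K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1) / (2 * sigma ** 2))
        W = np.linalg.solve(K + 1e-6 * np.eye(len(X)), Y)   # tiny ridge term for stability
        def predict(a):
            k = np.exp(-np.sum((X - np.asarray(a, dtype=float)) ** 2, axis=-1) / (2 * sigma ** 2))
            return k @ W
        return predict

    # two scenes rated on (wet, mountainous) -> parameters (rain_density, terrain_height)
    predict = fit_rbf([[0.9, 0.1], [0.1, 0.8]], [[0.8, 0.2], [0.0, 0.9]])
    print(predict([0.5, 0.5]))    # something in between the two training scenes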

It's an Afrigraph paper, and there's an interesting emphasis on technologies for developing nations with lower skill sets.



3d modeling for non-expert users with the castle construction kit v0.5 ( pdf | vast | 2005 )
Gerth, B, Berndt, R, Havemann, S, Fellner, D W


These guys build a nicely interactive castle editor. Particular emphasis is placed on the internal representations used.
  • GML is a PostScript-like way of describing 3D models using a stack. It's a nice description because it's so flexible - functions can take an arbitrary number of inputs from the stack etc...
  • Open Scene Graph is another scene graph engine, with a decent feature list - it represents geometry as number arrays etc...
The two packages were combined in the expected way - GML is used for the local descriptions, and fitted into the scene graph as components. There's a little munging of data types between the two packages, but they seem like a reasonable fit.

There's a reference to an excellent looking book about churches and cathedrals by Koch (Baustilkunde : das Standardwerk zur europäischen Baukunst von der Antike bis zur Gegenwart). After recognising that castles are often on hills, they proceed to make castles on flat ground.

Wall editing takes place via a set of handles (red things in the above images), one for each corner. The need for constraints on the handle positions is discussed - angles on walls shouldn't be too large, houses shouldn't intersect walls (but should be aligned to them) etc...

Out of context quote-of-the-day:
As it turns out a whole number of objects can be suitably represented by polygons.

Finally they say that they want to advance the work towards non-flat terrains and castles. This is actually a remarkably uncluttered topic in the literature, and looks like a promising area of research.

Here's a thesis talking about GML (in German).



The FL-system: a functional L-system for procedural geometric modeling ( pdf | doi | springer | 2005 )
Jean-Eudes Marvie, Julien Perret, Kadi Bouatouch


The paper presents a variation on the theme of L-Systems and applies the result to architecture.

There are a couple of useful additions to the parameterized L-System

A(n) → for(i = 0; i <= n; i++) B


This describes a repeated element. It replaces the decreasing-parameter trick used previously. It's only syntactic sugar, but given how twisted L-system grammars become, it's quite welcome.
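For what it's worth, here's how I read the for-rule - a toy rewriter (nothing to do with the paper's actual implementation) that expands A(n) into n+1 copies of B, instead of the old A(n) → B A(n-1) decreasing-parameter dance:

    # Toy reading of the for-rule: A(n) -> for(i = 0; i <= n; i++) B
    def expand(symbol, n):
        if symbol == "A":
            return ["B"] * (n + 1)     # i runs from 0 to n inclusive
        return [symbol]

    print(expand("A", 3))              # ['B', 'B', 'B', 'B']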

The terminals of the grammar are functions that can evaluate to geometrical primitives (window/door meshes). These can be modified by the parameters of the system at a given time.

The system outputs geometry at many grammar depths (rather than the traditional post-evaluation using a turtle). Because of the implicit evaluation order of L-Systems it's relatively simple to stop evaluation early to construct different levels of detail (LOD). These LODs are made more efficient by sharing geometry.

The system is integrated with the VRML97 standard, using embedded scripts.

An example grammar is constructed that creates a town house using standard façade decomposition techniques.

The above image shows a few results - the three buildings on the left are procedurally derived from the same grammar, with different texture sets.

The more I read about these systems, the more I'm convinced that traditional formal grammars are very much the wrong way to go. There's no way artists will use these grammars, and artists can't be limited to domain specific languages.



Creating models of truss structures with optimization ( doi | pdf | acm | 2002 )
Jeffrey Smith, Jessica Hodgins, Irving Oppenheim, Andrew Witkin


The idea here is to construct truss structures (think Eiffel tower) by taking a set of contact points (above, left, cones) and load bearing points (above, left green balls) and finding an optimal (for some meaning of the word) mesh (above, left, green and brown mesh). The system is given an arbitrary set of input beams, and then iterates under user guidance to find an optimal solution.

Truss structures are meshes of connected beams designed to hold a certain load. They're used in bridges, towers etc... The fun thing here is that while physical constraints are used to converge on a solution, a physical model isn't the primary goal. Rather these guys aim for a graphically pleasing representation. There is a large body of engineering work on trusses dating from the early 1900s, using various forms of numerical optimization.

The optimization starts with a "rich" (overly generous) set of beams (above, top left, shows how a generous configuration (left) is whittled down (middle) to an efficient structure, in this case mirroring the standard solution (right) of a Michell truss). There are a set of built-in fudge constraints to specify beams above or below the load-bearing nodes, or to constrain beams to a vertical plane.

The optimization uses a simple physical model of the forces involved, and works to minimize some function, typically the total weight of the structure. The constraints (apparently) make the problem non-linear, so the chosen solver is of the sequential quadratic programming variety.
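Roughly speaking (this is my shorthand for the standard truss sizing/geometry formulation, not the paper's exact objective), the problem being solved looks something like:

    \min_{x,\,a,\,f} \sum_i \rho\, a_i\, \ell_i(x)
    \quad \text{s.t.} \quad \sum_{i \text{ at } j} f_i\, \hat{e}_{ij}(x) + F_j = 0 \text{ for every free node } j,
    \qquad |f_i| \le \sigma_{\max}\, a_i

where x are the free node positions, a_i and \ell_i are the cross-sectional area and length of beam i, f_i is its axial force, \hat{e}_{ij} is the unit vector of beam i at node j, and F_j is the external load - minimise the total weight, subject to force balance at every node and a stress limit in every beam.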

There is a misc. set of other constraints, such as obstacle avoidance (above, bottom right, shows a truss holding the small green sphere, but avoiding the large red sphere, in its starting state (left) and two solutions (middle, right)), or maximum beam length.

After the system is solved, very short beams or those carrying no force are removed. The user then decides if the solution is viable, and the process is repeated until a desirable truss is produced. The results tend to look very similar to those of engineered trusses. In the case of some of their Eiffel tower models, almost too good to be true! - great results for a numerical solver.


Sunday, May 24, 2009

inkscape, pdf and posters

SICSA asked everyone to make a big poster to make their opening do more academic (here's the result). This was the first time I'd used inkscape to produce an A0 pdf, and there were some hiccups due to the large number of vector illustrations I included [edit - some glitch with scribd, click zoom out to view] -



The first time I exported to pdf, all the non-screen-grab figures were vector line art. This gave a nice 3Mb PDF file that would take a massive 3 minutes to open, and was missing Inkscape's nice blurs. This wasn't really good enough. I decided to turn those diagrams into bitmaps to render nicely. The Glasgow guidelines for creation of PDF posters (as well as specifying sensible font sizes) say that 300 dpi is the target printing resolution - so this is what I should target in my conversions.

[edit] the rest of this post is for an old version of inkscape. Newer versions export blurs to pdf automatically (save copy as..., select .pdf, in dialogue box "rasterise filter effects" is selected automatically), and you can set the "Create Bitmap Copy" resolution in file -> inkscape preferences -> bitmaps -> resolution for Create Bitmap Copy.

The default image resolution was 90 dpi, not enough for printing. To set inkscape's default conversion dpi to 300, I followed the instructions here and then restarted inkscape -

The resolution or size of the created bitmaps can be set in preferences.xml (no GUI yet). In , specifying minsize= gives the minimum size of the generated bitmap in pixels (regardless of the object size), while resolution= sets the constant resolution (different pixel size for different object sizes).

Then it was a case of selecting all figures in the document and using Edit->Make bitmap copy. Because the bounds of outlines in inkscape don't include blurs, sometimes it was necessary to add a large white background rectangle before creating the bitmap copy. Once the copy is made, the original vector art can be deleted.

There were a couple of bugs where the converted figures had a small transparent border that rendered grey in Adobe's PDF reader, but this was remedied by editing the bitmap copies in the gimp and painting out the border, then re-opening the inkscape document.

After this the Save As... -> pdf -> OK worked like a charm. The result was 8Mb, but loaded much quicker and with fewer glitches.


sicsa logo

The PDF included the above image in vector form - it was created from a 3D model that was imported into blender (from my prototype code). The poster was made on a bit of a deadline and rendering out this geometry nicely would have taken too long. Thankfully the render-to-svg script for blender worked like a charm and created vector art from the blender scene. Then a nice blur was added from the input geometry.

Here's another poster I made in inkscape - waaay better than some of the powerpoint mush you get out there! The inkscape file is here (23Mb, don't open in browser, use inkscape!).

city sculpt - ict pioneers poster

Finally, if you've got lots of graphics, getting the poster to print can be a little difficult - our department has a HP DesignJet (500 plus) connected to an old print server. I've wasted a ton of ink on buffer-underflows trying to print over the network - and "out of memory - data lost" errors, leaving big missing portions of the poster (white rectangles). The solution was to spool from a laptop connected straight to the printer. Here're the settings I used to print a landscape poster sideways:

designjet (settings for landscape poster)

Wednesday, May 20, 2009

skeletons on non-planar input



three hills

The standard straight skeleton algorithm relies on a sweep plane. By adding points in at different heights we should be able to grow nice houses on hills. We should also be able to preserve semantic data from the skeleton. The naive method has some problems:

naive

It's not too clear in the above, but lower points and edges can intersect with higher edges, because of changes to the points after events. There's a couple of ways out of this - we could say "user beware" and run away, or we could automate something. Turns out this isn't so hard -

[edit: turns out this is hard - it relies on intersecting two parallel vertical planes to a single line, gah! but given a topological approach to the skeleton algorithm, it should eventually work]

robust

We project subterranean edges with a vertical gradient until they come to the surface. This guarantees that the current polygon on the sweep plane remains simple.

On a different track, there's a lovely idea of using non-planar sweep planes, such as fields, to define some really interesting geometries. As long as the sweep surface never intersects itself at a previous time, the geometry will remain non-intersecting. If we bound the sweep surface (no polygons allowed outside the limits), or define a Poincaré disk/Escher-like vanishing horizon on the surface, we could get some really interesting fields. These might be suitable for modelling growth in plants (or animals...?). I'd better find that book on differential geometry.

Wednesday, May 13, 2009

smoke me a kipper, I'll be back for breakfast

Attention Glasgow: URE DOING IT WRONG

am working at the PRISM lab at Arizona State for 6 months :)

flightpath

Tuesday, May 12, 2009

ambiguous weighted skeleton



I've just come up with a better example of the straight skeleton's ambiguity:

ambig_neu

The top figure shows a situation where we need to make a decision. The bottom figures, a, b and c, are examples of different resolution methods, which might be one of the following:
  • use the nearest intersection (raytracing!)
  • invent a priority system that pleases us (artist driven)
  • use an average weighting of the two edges (b, above)
Of course this follows from the fact that a weighted skeleton with consecutive parallel edges of different gradients is invalid. I'm not certain there's a clean way to resolve it...I'm looking...

This is probably similar to the ambiguity problem in 3d, described in Straight Skeletons of Three-Dimensional Polyhedra ( doi | pdf | 2008 )

Friday, May 01, 2009

engineering a weighted straight skeleton algorithm



This post details an implementation of the Straight Skeleton in the method of Felkel ( pdf | Felkel, Obdrzalek | 1998 ), modified to construct weighted skeletons. Source code is available here. [edit: At the time I wrote the post, I didn't understand a few things about some of the strange cases that occur in the weighted straight skeleton - you can read more about them, here.]

It contributes a method for dealing with multiple co-sited "split events" in a floating point environment, as well as a resolution strategy for horizontal roof edges; these lead to a degenerate case in which the weighted straight skeleton is ambiguous (detailed here). In this post I concentrate on showing that an algorithm exists, rather than on its speed.

The straight skeleton takes a simple polygon as input and outputs a graph connecting the input vertices.

As defined by A Novel Type of Skeleton for Polygons ( doi | pdf | Aichholzer, Alberts, Aurenhammer, Gärtner | 1995 ) the straight skeleton shrinks in the edges of a polygon, tracing out the lines made by the corners of the polygon. These traced lines form the skeleton.

Strictly the skeleton is a 2D construction, however here I regard it as a 3D "roof" - given the floorplan of a house, the skeleton is the gutters and crest lines of a roof when viewed from above (4, figure below).

Some properties of the skeleton include:
  • Unique for a given input polygon
  • Related to the medial axis and Voronoi tessellations for convex shapes
  • The skeleton of convex shapes is a directed, acyclic graph (DAG) with the input corners as leaves
  • Each face is monotonic in a direction perpendicular to its defining edge
Overview

The algorithm uses a sweep plane - starting from the input polygon as the lowest set of points, intersections between planes are considered in height order (perpendicular to the input polygon's plane). The planes are defined by the gradient associated with each edge of the input polygon; in the unweighted case these are all assumed to have the same value.

examining_collisions

There are collisions that occur between faces that aren't valid. These must be ignored.

dont happen

After all collisions have been accounted for, the algorithm is complete.
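To make the sweep-plane picture concrete, here's a toy sketch (my own code, not my actual implementation) of the two geometric ingredients: the plane swept out by a weighted edge, and the collision point of three such planes:

    # The plane swept by a weighted edge, and the collision of three such planes.
    # Edges lie in the z = 0 input plane; an edge p->q with weight w sweeps a
    # plane whose height over a point is w times the point's inward distance
    # from the edge line (inward = left of p->q for counter clockwise input).
    # Planes are stored as (n, d) with n . X = d.
    import numpy as np

    def edge_plane(p, q, w):
        dx, dy = q[0] - p[0], q[1] - p[1]
        length = np.hypot(dx, dy)
        inward = np.array([-dy, dx]) / length
        n = np.array([-w * inward[0], -w * inward[1], 1.0])   # z = w * inward-distance
        return n, n[:2] @ np.asarray(p, dtype=float)

    def collision(e1, e2, e3):
        planes = [edge_plane(*e) for e in (e1, e2, e3)]
        A = np.array([n for n, _ in planes])
        b = np.array([d for _, d in planes])
        return np.linalg.solve(A, b)            # (x, y, height) of the event

    # a right triangle with unit weights: the single event is at the incentre
    print(collision(((0, 0), (4, 0), 1), ((4, 0), (0, 4), 1), ((0, 4), (0, 0), 1)))
    # -> roughly [1.17 1.17 1.17]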

Loops of corners

The primary structure used is a set of corners. Each corner has pointers to:
  • the adjacent (next and previous) input edges (I assume a counter clockwise ordering of points).
  • the adjacent corners
A partially evaluated skeleton is shown in the following diagram.

skeleton_pointer_config


Via the normal graphic conventions, holes are represented by backwards (clockwise) loops of corners. There are additional structures that list all input edges, current edges and current corners. Each edge keeps track of the skeleton's output edges.

As the algorithm runs there is one loop of corner pointers for every volume that intersects the current sweep plane. If a skeleton produces a roof with two peaks, there would be two loops of corner points at the corresponding heights. Here it is important to note that an edge can then belong to two different corners. This is the cause of much grief later on.
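A stripped-down sketch of these structures (field names are my own, and this is far from the whole story) looks something like:

    # Corners form doubly-linked loops; each corner knows the two input edges
    # meeting at it, and an edge can end up referenced from more than one loop
    # once the roof splits into several volumes.
    class Edge:
        def __init__(self, start, end, weight=1.0):
            self.start, self.end, self.weight = start, end, weight
            self.output = []      # skeleton arcs attributed to this edge's face

    class Corner:
        def __init__(self, x, y, prev_edge, next_edge):
            self.x, self.y = x, y
            self.prev_edge, self.next_edge = prev_edge, next_edge   # adjacent input edges
            self.prev_corner = None    # filled in when the loop is linked
            self.next_corner = None

    def link_loop(corners):
        """Close a counter clockwise loop of corners (clockwise for holes)."""
        for a, b in zip(corners, corners[1:] + corners[:1]):
            a.next_corner, b.prev_corner = b, a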

Collisions between three edges

When the planes defined by edges collide at a point we have to modify the data structures accordingly. Often we will remove old corners, insert new corners or add output edges to the skeleton. There are only so many ways that three edges can collide:

three way


In 1) two adjacent edges meet another edge in its centre, 2) shows three consecutive edges colliding, and in 3) we see a variation where three consecutive edges that form a loop collide. There are two other important cases:

green_red


When there is only one plane at a point, it is just a plane(!?). When there are two, it is just an edge (above, green). These aren't registered as events and no action is taken. 1, 2 & 3) cover all possible topologies except three unconnected edges colliding (red, above). This arrangement is impossible (in the simple unweighted straight skeleton case) as there must be something between the edges that also collides at that point, at which point it is a degenerate case. I'll get onto these in a bit.

The plane defined by each input edge cannot collide with just any old other edge-plane. Eppstein (Raising Roofs, Crashing Cycles, and Playing Pool (doi | web | Eppstein, Erickson | 1999 )) gives more ideas on a fast approach, but to just understand correctness we can start by noting that when two planes collide they form an edge. We need three planes to collide to form a point; before three planes can collide, two of them must already be adjacent. (Unlike Felkel's approach I'm less interested in the "bisectors" defined by each corner's pair of edges - this quickly becomes misleading in the weighted case.) Using the above list of possible intersections, we can see we should only be testing for collisions between pairs of adjacent edges and one other edge.

This diagram gives an overview of the data structure additions for each three plane case (the manipulations implemented are detailed in the following section):

pointer twiddling


These three types of collisions are sufficient to calculate the straight skeleton of almost all random input polygons (see following video or image). The other cases are degenerate and are discussed in the next section.


rect26802

Collisions must occur on the faces

One way to view the algorithm is that after the first event, we calculate a "new input polygon" at the given height and start again - so we might see the algorithm executing on a pile of contours, one for every event, fig. 1 below.

redo


So far the collisions have been presented as infinite planes, however we are really colliding a set of three sided polygons, or faces. For example the back edge (bold, 2) in the above figure changes its topology (red, 2) after some of the events. After event b (view the above larger) the face belongs to two volumes, and hence gives two polygons to collide against. This means that for each of the three faces involved in a collision we must ensure that the collision occurs within its bounds.

[diagram showing what happens when we don't bother with these checks]

There are two obvious places to perform this check:
  • Store a list of events between polygons - After an event, find new events and remove old events involving each edge or bisector that's changed.
  • Store a list of events between planes (planes don't have bounds). Before evaluating an event, ensure it occurs within all three faces. This is the simpler method I implemented.
One trick here (as Felkel describes) is to realize that if two adjacent edges are involved, then the collision is within the bounds for those faces. The faces are adjacent on a surface of the skeleton volume, so they intersect only along this line (and any intersection must also occur on this line). These adjacent edges form Felkel's "bisectors", and as Eppstein explains, this reduces to a ray-casting problem that has very fast known solutions. Each of the three types of collisions identified involves a pair of adjacent edges colliding.

These "three sided" faces that we're colliding are sometime triangles, but sometimes they are unbounded, when the adjacent edge move away from each other. (They are triangles with infinite area!)

Floating point calculations sometimes miss collisions that we expect, topologically, to happen. However we can increase the chances of getting a topologically sound result if we increase the margin (epsilon-error range) until we catch all collisions, even if we pick up some additional errors. There are two places where we need to be careful - in checking that a collision really happens on an edge's face and in finding events that happen "at the same height" or "in the same place". I use a cylinder-shaped volume to accumulate events that occur within a given height range and within a certain distance of a centre point.
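Here's a rough sketch of the sort of event bucketing I mean (toy code; the real thing has to merge buckets and cope with chains of nearly-coincident events):

    # Bucket events that fall inside the same epsilon-cylinder: same height to
    # within eps_z, and (x, y) within eps_xy of the bucket's first member.
    import math

    def cluster_events(events, eps_z=1e-6, eps_xy=1e-6):
        clusters = []
        for x, y, z in sorted(events, key=lambda e: e[2]):   # process in height order
            for c in clusters:
                cx, cy, cz = c[0]
                if abs(z - cz) < eps_z and math.hypot(x - cx, y - cy) < eps_xy:
                    c.append((x, y, z))
                    break
            else:
                clusters.append([(x, y, z)])
        return clusters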

Collisions between many edges


In a degenerate case, more than three planes can meet at one point. This happens in the centre of regular polygons, as well as in examples such as the following image (where five or eight faces meet at single points). Here I present a resolution algorithm that processes pairs of adjacent corners, for any number of edges meeting at a point.

rect27677


It is important that these occasions are detected and treated with respect, as we assume that the linked list of corners and their locations form a simple polygon (when projected down onto the input plane). This takes a little patience when working with floating point calculations. Luckily we can generalise some rules for handling point collisions, whether three or any number of planes meet at a point.

The collision resolution technique takes a set of planes that meet at a point as input. It processes these into lists of chains of adjacent corners that reference these edges. While constructing these chains we perform the face collisions to determine which (if any) corner references the correct portion of the edge.

Chains consist of ordered lists of edges meeting at a point. If the edges are adjacent, they are in the same chain.

Remember that one edge can be referenced by more than one corner, when there is more than one "peak" in the output roof shape.
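Using the Corner sketch from earlier, a toy version of chain building might look like this (again, the details are my own - in particular the face-collision checks are left out):

    # Group the corners involved in one point-event into chains: corners that
    # are adjacent in a loop land in the same chain, ordered along that loop.
    def build_chains(corners):
        corners = set(corners)
        chains = []
        while corners:
            chain = [corners.pop()]
            while chain[0].prev_corner in corners:       # walk backwards along the loop
                chain.insert(0, chain[0].prev_corner)
                corners.remove(chain[0])
            while chain[-1].next_corner in corners:      # ...and forwards
                chain.append(chain[-1].next_corner)
                corners.remove(chain[-1])
            chains.append(chain)
        return chains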

pairwise


In figure 1 we see the input structure - a chain of edges abc and their associated corners (dark blue circles). We now process each pair of edges that are adjacent (orange lines) as in figures 2 and 3:
  • We remove each of these corners from the master-list. (note that inter-corner pointers, green arrows, don't need to be adjusted).
  • We add in output edges to the skeleton.
Then we process the non-adjacent edge pairs as in figure 4, adding in a new corner and finally adjusting the inter-corner pointers (cutting out the internal corners). However non-adjacent edge pairs take a little more thought, as multiple chains can collide at one point.

pairwise_intra

As the above shows, once the inter-chain edges are processed (figure 2, above) we can then move on to intra-chain collisions (figure 3). We still process corners in pairs, but this time we process the three pairs that are non-adjacent, the red, yellow and orange edges in figure 3. We order these around the collision point (in the unweighted case this can be done by the angle any corner on the chain makes with the collision point, but see the later comments on the weighted case), and process the pair made by the first corner in the first chain with the last corner of the next.

For each of these pairs we add one additional corner that is adjacent to each edge. This corner is at the location of the collision, and cuts out the inter-chain corners from any edge loops.


Horizontal roof edges

When evaluating rectilinear shapes, such as the above cross-shape, we often see horizontal skeleton lines. This is another degenerate case that we often see alongside the >3 planes at a point case.

We can note that before processing the events at one height, there should be no adjacent edges that form a horizontal edge in the skeleton, and that after processing there should be no such edges.

Because every horizontal skeleton edge must have an event at its start and end, we can be sure that if we process all the events at one height together, we will have closed all horizontal edges. [edit: this assumption is wrong - see later post on ambiguous cases]

As all co-heighted events occur simultaneously there is no interaction between events and the algorithm remains deterministic. There is no way to sort co-heighted events, so we let them occur in an arbitrary order.

After one event occurs we may be in a situation where two consecutive, adjacent edges form a horizontal edge. Because this edge has no height difference along it, there is no way to determine which of the events it causes should occur first:

horiz bisector barf

In the above figure, there is a collision at x, then at the same height, faces a, b, c & d collide at y. Face a should be rejected because we're outside the bisector. However as the edge forms a horizontal skeleton line, the bisector is poorly defined (green boundaries), so we would have to accept that all four faces collide, resulting in deformed output.

We do not track events created by consecutive horizontal edges, safe in the knowledge that after we've dealt with all events at the same height there won't be any left. In a similar way, we can't ascertain whether collisions between planes occur within the current face boundaries.

For this reason all face-collision checks for events at one height are performed before any are evaluated. I do this by caching a list of chains. This brings with it the problem that the corner used in a chain may have been superseded, or "cut off" from its intended collision (see figure below), by an earlier event at the same height.

The resolution strategy here depends on the chain type:
  • If the chain has more than one corner in it, it represents the intersection of two faces, so there will always be a corner lying on this intersection whose adjacencies will lead to the correct corner.
  • If the chain is of length 1, we must resort to more geometric calculations to project all referenced corners onto the input edge, and choose the preceding corner.
validate chains

In the above figure 1, faces a, b, c, d and e collide at point x. We find their positions in the given volumes by locating the corresponding corners. As co-heighted events occur in an arbitrary order, by the time figure 2 occurs the corners (u & v) are no longer valid.

In the case of u (or any chain with 2+ edges), we can use the following corner's previous corner to find u'. This is because the corner references two edges used in a collision, so must take part in it (and horizontal events are independent - they can't affect each other's edges or validity).

This trick won't work to find v' as it is part of a chain of length 1. We project all corners that reference the edge onto the collision line. We then use the point (v') that is just before the projected collision as the previous corner. We ignore any points that reference the edge before the found corner. This is valid as all these points lie on the edge's plane, and all the points occur at the same height, so their order is defined by their projection. There is no question that the event occurs as there is no interaction between co-heighted events.
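In code, the two rules might be sketched as follows (cornersOnEdge being the corners currently referencing the chain's edge; again, names are illustrative, reusing the classes from earlier):

import java.util.List;

class ChainValidation {
    // Rule 1: a chain of two or more corners lies on the intersection of two
    // faces, so the successor's prev pointer leads to the replacement corner.
    static Corner replaceInLongChain(Corner stale) {
        return stale.next.prev;
    }

    // Rule 2: a chain of length one falls back to geometry - project every
    // corner referencing the edge onto it and take the one just before the
    // projected collision.
    static Corner replaceInUnitChain(List<Corner> cornersOnEdge, Point3 collision, Edge edge) {
        double target = along(collision, edge);
        Corner best = null;
        double bestT = Double.NEGATIVE_INFINITY;
        for (Corner c : cornersOnEdge) {
            double t = along(c.location, edge);
            if (t <= target && t > bestT) { best = c; bestT = t; }
        }
        return best;
    }

    // signed distance of p along the edge's direction, measured in the input plane
    static double along(Point3 p, Edge e) {
        double dx = e.end.x - e.start.x, dy = e.end.y - e.start.y;
        double len = Math.sqrt(dx * dx + dy * dy);
        return ((p.x - e.start.x) * dx + (p.y - e.start.y) * dy) / len;
    }
}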

Negative edge gradients

By changing the weight or gradient of an edge, we can drastically change the output shape. There is even the possibility of giving the edges a negative (outwards) gradient, to create "overhangs".

No modifications are required to the given algorithm to allow these gradients.

It is fun to note that negating the gradient (the reversed-polarity result in the figure below) produces the same shape as reversing the polarity (loop direction) of the input, but with a different final meaning: the result is no longer the top of the roof shape (a surface whose normals point upwards), but the ceiling of the building seen from the inside (a surface whose normals point downwards).

reversed

When edges have negative gradients, it becomes much harder to guarantee termination of the skeleton - it may define an infinite volume! Sometimes it will terminate, sometimes not. Determining which polygons will terminate and which won't would be a good future research direction. Of course it is always possible to specify an arbitrarily high horizontal plane, which terminates the algorithm.
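The cap is trivial to add: something along the lines of the snippet below, which simply refuses to evaluate anything above a chosen height (the remaining open faces are then sliced flat at that height):

class CapPlane {
    // events above the cap height are never evaluated, forcing termination
    static boolean shouldEvaluate(Event e, double capHeight) {
        return e.height <= capHeight;
    }
}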

The following animation places several weighted straight skeletons with varying gradients on top of each other to create some primitive architecture.



Determining the order of chains

Determining the order of corners within a chain can be accomplished by following adjacency pointers to other corners. But between chains it becomes harder. In the weighted skeleton case we must project the corners to the collision height, before determining the order.


chain_order

The angles around the intersection point cover less than a full circle. If they didn't, the chains would have overlapped and collided at a lower height.
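Here's a rough sketch of ordering chains by angle; in the weighted case the corner locations would first need advancing to the collision height (not shown), as discussed above:

import java.util.Comparator;
import java.util.List;

class ChainOrder {
    // order whole chains by the angle their first corner makes around the
    // collision point x, measured in the input plane
    static void orderAround(List<List<Corner>> chains, Point3 x) {
        chains.sort(Comparator.comparingDouble(chain -> angle(x, chain.get(0).location)));
    }

    static double angle(Point3 centre, Point3 p) {
        return Math.atan2(p.y - centre.y, p.x - centre.x);
    }
}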

Adjacent parallel edges

If two skeleton edges are parallel (here I mean edges pointing in the same direction, not opposite directions), the intersection of their planes with a third is no longer a point (it is either a line or no intersection at all). This leaves the straight skeleton poorly defined. This condition can occur in two places:
  1. the input polygon
  2. when intermediate edges are removed, bringing two parallel edges together
This also causes much "instability", or chaotic behaviour, in the algorithm. In the unweighted case it is possible to define the collision of two consecutive parallel edge-planes as the gradient vector starting from the intersection point. This is not possible if the edges have different weights; see the following video as the weight changes:

There has to be an artificial resolution method for parallel consecutive edges. The simplest is to remove one of the offending edges - this choice can be arbitrary and is justified by the limiting case (as the angle between the edges approaches π from either side, we resolve to one edge or the other). This works in the first case, parallel consecutive edges in the input polygon.

two become one

In the above figure edges a and b are parallel and have the same weight; they are not consecutive at the start, but become consecutive after an event. Theoretically any line that lies on a and b's face could become the bisector after the collision, however it must be consistently chosen throughout the algorithm. As I'm interested in clean 3D solids, the outcome in the figure is the most desirable. By merging the two faces together we can see that the input to the algorithm shouldn't have been the two separate edges a and b, but rather one edge referenced twice. This is the approach I take.
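A sketch of that merge as a pre-pass over the corner loop (illustrative only; a real implementation would normalise the direction vectors before applying a tolerance):

import java.util.List;

class MergeParallel {
    static final double EPSILON = 1e-9;

    // where two consecutive edges point the same way and carry the same weight,
    // make both corners reference a single edge - one edge referenced twice,
    // rather than two separate parallel edges
    static void mergeConsecutiveParallel(List<Corner> loop) {
        for (Corner c : loop) {
            Edge a = c.prev.edge, b = c.edge;
            if (a != b && sameDirection(a, b) && Math.abs(a.weight - b.weight) < EPSILON)
                c.edge = a;
        }
    }

    static boolean sameDirection(Edge a, Edge b) {
        double ax = a.end.x - a.start.x, ay = a.end.y - a.start.y;
        double bx = b.end.x - b.start.x, by = b.end.y - b.start.y;
        return Math.abs(ax * by - ay * bx) < EPSILON  // parallel...
            && ax * bx + ay * by > 0;                 // ...and pointing the same way
    }
}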

On the subject of chaotic elements of the straight skeleton, another situation deserves to be pointed out. If concave faces' bisectors pass each other ("split events" in Felkel's terminology), there is often a sudden shift in the topology of the skeleton.



Adjacent and horizontal faces form an ambiguous case

When the skeleton is weighted (different gradients on each edge), there is an ill-defined condition:

ambiguous

In figure 1, we see the input, a weighted skeleton where edge a has a lower gradient than edge b. These two edges meet to form a horizontal edge. At this point there are two resolution possibilities that make topological sense: 2a, letting the lower gradient take priority, and 2b, letting the higher gradient take priority.

[edit: this is wrong - 2a and 2b, above, are equally valid] A sensible implementation should choose to give the steeper edge priority (figure 2a) for two reasons:
  1. If we look at the limiting case as edges a and b become parallel, as we reduce the angle between them, we get output topology favoring the steeper edge.
  2. Topology 2b leads to the situation where one horizontal event causes another, breaking earlier assumptions in the implementation. This would mean that we could no longer order events by height.
However this is not consistent with the concept of the skeleton being a "wavefront".

Aichholzer et al. do not describe the weighted skeleton, and so do not come across this situation in their text. I'm not aware of a description of the weighted skeleton that avoids this ambiguity.

Building faces

Once the edges of the skeleton are known, it is desirable to be able to define faces (we might want to tile our roof). There is one face for each input edge.

The implemented algorithm keeps a single graph (a hash-map) of all edges in the skeleton, and for each input edge there is a list of corners that once referenced that edge. This information is sufficient to reconstruct each face:

edges_one
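For the simple case, a sketch of the reconstruction might look like this: tag each output arc with the input edge whose face it bounds, then chain the arcs end-to-start into a boundary loop (names are mine, not the implementation's; classes from earlier reused):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class FaceBuilder {
    static final double EPSILON = 1e-7;

    // map from each input edge to the skeleton arcs bounding its face
    Map<Edge, List<Point3[]>> arcsPerEdge = new HashMap<>();

    void addArc(Edge edge, Point3 start, Point3 end) {
        arcsPerEdge.computeIfAbsent(edge, e -> new ArrayList<>()).add(new Point3[] { start, end });
    }

    // chain the arcs of one face, end to start, into an ordered boundary loop
    List<Point3> buildFace(Edge edge) {
        List<Point3[]> arcs = new ArrayList<>(arcsPerEdge.get(edge));
        List<Point3> loop = new ArrayList<>();
        Point3[] first = arcs.remove(0);
        loop.add(first[0]);
        loop.add(first[1]);
        while (!arcs.isEmpty()) {
            Point3 tail = loop.get(loop.size() - 1);
            boolean advanced = false;
            for (int i = 0; i < arcs.size(); i++) {
                Point3[] arc = arcs.get(i);
                if (same(arc[0], tail)) { loop.add(arc[1]); arcs.remove(i); advanced = true; break; }
                if (same(arc[1], tail)) { loop.add(arc[0]); arcs.remove(i); advanced = true; break; }
            }
            if (!advanced) break; // needs the smarter handling described below
        }
        return loop;
    }

    static boolean same(Point3 a, Point3 b) {
        return Math.abs(a.x - b.x) < EPSILON && Math.abs(a.y - b.y) < EPSILON && Math.abs(a.z - b.z) < EPSILON;
    }
}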

As there may be parallel edges with the same gradient, the algorithm needs to be a little smarter:

edges_two