Book notes: The Algorithmic Beauty of Plants

The Algorithmic Beauty of Plants (Prusinkiewicz, Lindenmayer) (pdf, 40mb)

The highlight of this book for me was the picture of Lindenmayer opposite the title page. But the doodles that some pillock has drawn throughout Glasgow Uni's copy came a close second; on the plus side, he gave up after 40 pages. Sorry about the constant diversions in what follows; there were some interesting ideas sketched out.

Rewriting mechanisms can work on continuous geometric elements (lines, points, hyperplanes etc.), discrete arrays of values (matrices, Conway's Game of Life) or strings (Chomsky-esque grammars). Each of these has its own variations of the rewriting procedure.

The difference between Chomsky grammars and the Lindenmayer (L-)systems outlined in the book is expressed as "In Chomsky grammars productions are applied sequentially, whereas in L-systems they are applied in parallel". This makes L-systems a limited form of context-free grammar when there is one symbol on the LHS of a rule, or context-sensitive when there are more symbols on the LHS.
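The parallel application is the whole trick, and it is tiny in code. Here's a minimal sketch (my own, not from the book) of one deterministic, context-free (D0L) step, run on Lindenmayer's original algae system:

```python
def step(s, rules):
    # Rewrite every symbol in the same pass (L-system style), rather
    # than one occurrence at a time (Chomsky style). Symbols with no
    # production are copied through unchanged.
    return "".join(rules.get(c, c) for c in s)

# Lindenmayer's algae system: A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
s = "A"
for _ in range(5):
    s = step(s, rules)
print(s)  # the word lengths grow as the Fibonacci sequence
```

Because every rule here has a single symbol on the LHS this is the context-free (0L) case; the context-sensitive variants later on have to look at a symbol's neighbours before rewriting it.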

These examples show some nice properties, such as a lack of intersections. These aren't properties of the system, but rather properties of the nicely thought-out grammars. Small changes create intersecting systems.

It is easier to construct non-intersecting curves if a recursive mechanism is used. If we allow the parameterization of terminals (e.g. giving them a recursion count) we can define a minimal grammar that is non-intersecting, as long as each sub-section guarantees non-intersection and each combinator also guarantees non-intersection for a given maximum size of sub-section (example on page 17?). It is even possible to enumerate the entire set of these non-intersecting tilings for a grid of a given size.

These ideas extend to three dimensions.

We continue on to using bracketed L-systems to represent trees. In these, the [bracket] symbols represent pushing or popping the turtle's state. This means that a sub-section of the tree can be evaluated, then the context popped, and another sub-section ("branch") of the tree, starting from the same physical coordinates, can be evaluated.
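The push/pop behaviour is just a stack of turtle states. A toy sketch (mine; it assumes the usual F, +, -, [ and ] turtle symbols):

```python
import math

def turtle_points(s, angle=25.0):
    """Interpret a bracketed L-system string, returning the line
    segments drawn. '[' pushes the turtle state, ']' pops it, so a
    branch resumes from its parent's position and heading."""
    x, y, heading = 0.0, 0.0, 90.0  # start at origin, pointing up
    stack, segments = [], []
    for c in s:
        if c == "F":  # move forward one unit, drawing a segment
            nx = x + math.cos(math.radians(heading))
            ny = y + math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == "+":
            heading += angle
        elif c == "-":
            heading -= angle
        elif c == "[":
            stack.append((x, y, heading))
        elif c == "]":
            x, y, heading = stack.pop()
    return segments

segs = turtle_points("F[+F]F")  # trunk, a branch, then the trunk continues
```

After the ']' the turtle carries on from exactly where the '[' left it, which is what lets a single flat string describe a whole branching tree.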

I wrote a crude context-sensitive L-system library in Java; it can export to SVG format when linked against the correct libs. Here's some of the output from the demo library, the example 1.31b) over 40-ish iterations:


Because the entire tree is rewritten each time, I guess we're evaluating the tree at different points in its life. E.g. by rewriting the trunk of the tree at each iteration we can broaden it.

The next bastardization of the Lindenmayer system is adding in a stochastic element. He really means random L-systems (but computers can't do random, right?); perhaps this should be implemented using a real random number generator and submitted to a journal that no one reads...

So we have a non-deterministic L-system (multiple possible rules for each replacement), except that for each replacement the available rules have some probability function (the probabilities for each symbol should sum to 1). This creates some nice "natural" variations of grammars.
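A quick sketch of one stochastic step; the rules below are invented for illustration, not taken from the book:

```python
import random

def stochastic_step(s, rules, rng):
    """One parallel rewrite where each symbol may have several
    productions, chosen by probability (weights per symbol sum to 1)."""
    out = []
    for c in s:
        if c in rules:
            successors, weights = zip(*rules[c])
            out.append(rng.choices(successors, weights=weights)[0])
        else:
            out.append(c)
    return "".join(out)

# Hypothetical stochastic rules: F branches one of three ways.
rules = {"F": [("F[+F]F[-F]F", 0.34), ("F[+F]F", 0.33), ("F[-F]F", 0.33)]}
rng = random.Random(42)  # seeded so runs are reproducible
s = "F"
for _ in range(3):
    s = stochastic_step(s, rules, rng)
```

Seeding the generator gives you the "baked-in randomness" property: the same seed always regrows the same plant.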

Context sensitive L-Systems are, as you'd expect, systems where the replacement made can be context sensitive (as in context-sensitive-grammar).

We're building a hierarchy of L-systems, very closely related to the hierarchy of grammars/complexity. Jumping to context-sensitive L-systems climbs us one step further up the hierarchy. Given that most programming languages are definable by context-free grammars, we must be missing some trick in calling upon almost Semi-Thue/recursively-enumerable languages to create a solution to the problem.

However I digress, context sensitive L-systems allow some neat tricks like signal propagation:

w : baaaaa
p1: b < a -> b
p2: b -> a

Which is a neat trick. It adds a fair bit to the computation of these systems because, by the time we've considered the "branch" stack system, the notion of context is taken to mean physical context (rather than closeness of symbols in the string), so for each rule and each symbol we need to consider the previous context as the parent branch and the following contexts as all the possible child branches.
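The signal propagation is easy to simulate; here's a toy parallel pass over the string (my own sketch of the two rules above):

```python
def cs_step(s):
    """One parallel pass of the book's signal-propagation rules:
       p1: b < a -> b   (an 'a' with a 'b' to its left becomes 'b')
       p2: b -> a
    Both rules fire simultaneously, so the lone 'b' travels right."""
    out = []
    for i, c in enumerate(s):
        if c == "a" and i > 0 and s[i - 1] == "b":
            out.append("b")   # p1: context match, the signal arrives here
        elif c == "b":
            out.append("a")   # p2: the signal leaves this cell
        else:
            out.append(c)
    return "".join(out)

s = "baaaaa"
steps = [s]
for _ in range(5):
    s = cs_step(s)
    steps.append(s)
print("\n".join(steps))
```

Each step the 'b' both dies (p2) and is reborn one cell to the right (p1), so the signal marches along the string, one cell per generation.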

Hogeweg and Hesper's A model study on biomorphological description (science direct | pdf | doi) analyses the complete (3-thousand-odd) evaluations of a binary context-sensitive L-system (w00?) and generates some nice, eerily biological, but-not-quite results (because of physical limitations etc...). The above image is an evaluation of one of their models. I spent a bit of time playing with random inputs for each of the possible binary rules (0 <> 0, 0 <> 1, etc...) and the various inputs, and came to the conclusion that most of the rule and input space was very dull: either creating long stringy repetitive trees, or not evaluating to anything at all. Even using longer random strings as the axiom/input string didn't make it more likely that interesting output was achieved.

This falls back to the old evolutionist line: life on the edge of chaos. Most of the universe is dull; some of it is chaotic, and so messy as to be uninteresting. The bit between the two extremes is where my sandwich is. Problem is, most of the L-system input space produces nothing (or output so tedious as to be close to nothing). Just picking a random set of rules and starting point, as in the given paper, is very dull. Random walks from interesting patterns become dull quickly. I guess this is a dull, flat domain with sparse interesting peaks.

A typical random ruleset and starting point will often just get bogged down -

random ruleset produces

Simple L-systems seem to give great insight into the structure of plants. But they themselves have to be designed by an intelligence; most of the rule space is uninteresting. I wonder how hard it is to analyze a set of rules and a starting condition and find out if the output is interesting or uninteresting? Given that even finding out whether the system halts is difficult, I guess interestingness of patterns would be an even harder problem to solve...?

Like Thompson's On Growth and Form (my bible), TABOP moves into looking at section elongation, or growth. He shows how Thompson's growth s-curves can be generated using the L-system (page 38):

w : a_0
p_i : a_i → a_{i+1} b_0   for i < k
p_{k+j} : b_j → b_{j+1} F   for j < l

If you evaluate this and count the number of F's, the growth curve is sigmoidal and segmented in the manner described in that biology textbook. Definitely an area for investigation: is the structure robust, like the grasshopper's leg that reorganizes itself when you chop the middle out and turn it around?
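Counting F's per step makes the s-curve visible. A sketch of the indexed system above, with each symbol encoded as a (name, index) pair and k = l = 6 chosen arbitrarily:

```python
def grow_step(word, k, l):
    """One parallel step of the indexed system:
       a_i -> a_{i+1} b_0   (while i < k)
       b_j -> b_{j+1} F     (while j < l)
    Symbols that have reached their index limit are left unchanged."""
    out = []
    for sym, idx in word:
        if sym == "a" and idx < k:
            out += [("a", idx + 1), ("b", 0)]
        elif sym == "b" and idx < l:
            out += [("b", idx + 1), ("F", 0)]
        else:
            out.append((sym, idx))
    return out

k, l = 6, 6          # arbitrary index limits for the demo
word = [("a", 0)]    # the axiom, a_0
counts = []
for _ in range(20):
    word = grow_step(word, k, l)
    counts.append(sum(1 for sym, _ in word if sym == "F"))
```

The F count rises slowly, accelerates while new b's are still being spawned, then flattens once every index hits its limit: a segmented sigmoid rather than runaway exponential growth.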

However, the book goes on to note that "it is often difficult, if not impossible to find L-systems with their required growth functions", and quotes Vitanyi: "The slowest increasing growth we can obtain by allowing cell interaction is logarithmic". I guess the default state for L-systems is cancerous growth :(

But to compensate for this growth-control problem, another generalization (spotting a pattern yet?) of L-systems is introduced: parametric L-systems. Here each symbol is given a set of parameters, and a replacement can only take place if a logical statement associated with the parameters is true. The replacement produces more symbols that can have a mathematically altered version of the parameters embedded within them. Once this system is dragged into the context-sensitive model, it starts to look like a bit of a mess.

Of course, with the addition of parameters, the terminal symbols can be defined with respect to the parameters, allowing much more complex geometrical shapes.
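In code, a parametric step just adds a guard condition and a parameter transform to each rule. A sketch (the halving rule is a made-up example, not one from the book):

```python
def parametric_step(word, rules):
    """One parallel step of a parametric L-system sketch: a rule fires
    only if its condition on the symbol's parameters holds, and the
    successor symbols carry mathematically altered parameters."""
    out = []
    for name, params in word:
        for rule_name, condition, successor in rules:
            if name == rule_name and condition(params):
                out.extend(successor(params))
                break
        else:
            out.append((name, params))  # no rule fired: copy through
    return out

# Hypothetical rule: A(x) -> A(x/2) A(x/2) while x > 1.
rules = [("A",
          lambda p: p[0] > 1,
          lambda p: [("A", (p[0] / 2,)), ("A", (p[0] / 2,))])]
word = [("A", (8.0,))]
for _ in range(3):
    word = parametric_step(word, rules)
```

The guard is what finally gives growth control: once a symbol's parameter drops to 1 the rule stops firing and that part of the structure freezes.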

The second chapter discusses the modeling of trees. It starts with a statement that self-sensitive L-systems have been avoided to date because of their complexity, then goes on to review the literature on tree generation. I've moved these notes to a following blog post. This chapter of TABOP ends with the best example of not-letting-computer-scientists-decide-the-colour-scheme imaginable:

Chapter three, "Developmental models of herbaceous plants", then returns to trying to ensure that each parallel step in the evaluation of an L-system corresponds to a step in time.

The phrase database amplification is used here, defined as "the generation of complex looking objects from very concise descriptions". The phrase is a little under-specified and over-abused in the literature.

Another variation of the L-system is defined: the table L-system. This uses different tables of production rules to simulate changes in the environment, for example one for winter (non-flowering) and one for summer (flowering). It seems a little dubious to be using discrete evaluation steps to model continuous time, and with tables as well we aren't doing a very good job of reaching the goal: a continuous-time model of a growing plant. I guess it's possible (with enough rules) to smooth this out. But as we've already established, we've got a Turing-complete system in our L-systems, so we can do anything that is computable with enough rules.
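Mechanically, a table L-system is just the plain parallel rewrite with the rule table swapped per time step. A toy sketch with invented summer/winter tables:

```python
def tabled_step(s, tables, t):
    # Pick the production table for this time step (here the seasons
    # simply alternate), then do an ordinary parallel rewrite with it.
    rules = tables[t % len(tables)]
    return "".join(rules.get(c, c) for c in s)

# Hypothetical tables: summer grows a flower 'B', winter sheds it.
tables = [
    {"A": "AB"},  # summer: the apex A produces a flower B
    {"B": ""},    # winter: flowers are dropped
]
s = "A"
history = [s]
for t in range(4):
    s = tabled_step(s, tables, t)
    history.append(s)
```

The discreteness the post complains about is obvious here: the plant flicks between exactly two states, and smoothing it would mean many more tables (or parameters).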

The rest of the chapter bores on through some trivial L-systems that represent common flowering patterns in plants. A cut-down L-system ("partial L-system") is used here for teaching clarity.

Chapter 4 deals with the phenomenon of phyllotaxis in a dull, generative way. You can see these patterns in the seeds on a pine cone, or the centre of a sunflower, etc... These kinda cool patterns were discussed in more detail in How the Leopard. They refer to the way that several recurring features of plants occur in similar spiral patterns.

The basic generative algorithm takes the observation (spirals of points with 137.5 degrees between consecutive points) and converts it to an L-system using the model:

for each point n = 0, 1, 2, ...
    phi = n * angle   (the "correct" angle is 137.5)
    radius = c * sqrt(n)

Starting from the observed results, a mathematical system is deduced that can construct phyllotaxis patterns. Using TABOP's generative model and varying the angle, the following animations were made (Java src code):
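The model is only a few lines; here's a sketch of the point generation (the constant c and the point count are arbitrary choices of mine):

```python
import math

def phyllotaxis(n_points, angle_deg=137.5, c=1.0):
    """Generate seed positions using the model above:
       phi = n * angle,  radius = c * sqrt(n)."""
    points = []
    for n in range(n_points):
        phi = math.radians(n * angle_deg)
        r = c * math.sqrt(n)
        points.append((r * math.cos(phi), r * math.sin(phi)))
    return points

pts = phyllotaxis(200)  # plot these to see the familiar spiral pattern
```

The sqrt(n) radius is what keeps the point density roughly constant: each new ring of area gets about the same number of seeds.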

Changing the angle over the range 135 -> 140 degrees:

Changing the angle over the range 137.25 -> 137.75 degrees:

There are lots of ideas as to why phyllotaxis patterns are so common in nature, but no one seems to know for sure. Looking at my videos above it's possible to see the different phases that the form goes through. The angles chosen by nature lie where two patterns collide (life between the gutter and the stars). Is this coincidence? Is it the best point to give continuous coverage to all areas? (Look at the following from the other side of the room: one pattern stands out as being the most homogeneous.) An interesting experiment would be to draw Voronoi cells around each point to discover if the magic angle is just a high-coverage local maximum (easy for evolution to reach).


The chapter also discusses the derivations for simulating Phyllotaxis on a cylinder, again without any insight as to why the patterns would be as they are.

Chapter 5 discusses how to model plant organs. So that'll be leaves, stamens (plant porn!), petals, etc...

The first approach is to use artist-modeled surfaces. These can be scaled and stretched by the parameters of the underlying L-system, but not changed too much. An attempt is made to define normal L-systems (driving a turtle) to trace the closed boundaries of a leaf, but it isn't very successful, and the leaves are all the same shape. A modification is proposed that uses a series of points to define a leaf's boundary. These points are defined by the turtle's location at a given time. It produces a more robust system.

Another method presented is to use additional operators and a stack of polygons: you add vertices to the current polygon until you reach the push symbol, then you start adding to a new polygon. When you reach a pop symbol you draw the current polygon and go back to the previous one. More interesting than either of these two variations in interpreting an L-system is the idea that there must be a huge class of algorithms out there that can operate on a subset of the output of an L-system (time, location, points, edges, etc...).
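The polygon-stack idea in miniature (the symbols and the vertex encoding are my own guesses at a minimal version; the book drives the vertices from the turtle's position):

```python
def collect_polygons(commands):
    """Sketch of the polygon-stack interpretation: '[' starts a new
    current polygon, ']' closes the current polygon and emits it, and
    any (x, y) tuple is a vertex added to the current polygon."""
    stack, current, finished = [], [], []
    for c in commands:
        if c == "[":
            stack.append(current)   # suspend the enclosing polygon
            current = []
        elif c == "]":
            finished.append(current)  # "draw" the completed polygon
            current = stack.pop()     # resume the enclosing one
        else:
            current.append(c)         # a vertex
    return finished

polys = collect_polygons([
    "[", (0, 0), (1, 0), (1, 1),          # outer polygon begins
    "[", (2, 2), (3, 2), (3, 3), "]",     # a nested polygon
    (0, 1), "]",                          # outer polygon finishes
])
```

Note that the nested polygon is emitted first, matching the depth-first order in which branches complete.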

These techniques, when used with careful grammars, can create realistic leaves that grow through time. However, I'm left with the impression that there are easier ways of defining the same shapes.

Map L-systems are introduced later in the book. These are remarkably similar to Stiny's shape grammars (all the journals containing the papers on these are missing from the Glasgow library...). You start with a labeled graph and perform parallel substitutions on given edges.


These replacements can include adding stubs of additional lines; magic occurs (orange line above) to match a pair of stubs with the same label and perform a replacement. This is a non-deterministic step. Still, it is a nice way of working with graphs and produces some remarkably varied textures. A further modification over the above is to match directed line segments.

Eventually the book decides to get self-sensitive (in a way) and uses the map-L-systems(ish) to continually divide a map of springs. This system simulates the pressures inside a cell, and provides some interesting techniques for modeling cellular growth. This feedback mechanism has an effect on the layout (if not the topology) of the cells. Basically it's using these L-systems as a crude approximation of induction (the biological term for cells differentiating based on their surroundings):

cell induction

i) two adjacent regions of cells
ii) new cell type induced by adjacency
iii) strata of cells via induction

Sub-goal: investigate the results of modeling cellular machines continuously compared to L-Systems.

The last chapter of the book deals with the fractal nature of nature and L-systems. An introduction to Iterated Function Systems (IFS) is given. These give true self-similarity (as does the class of L-systems without terminal symbols) by endlessly transforming the input set of points by a series of (linear) functions. The input points are continuous within a given bound. This leads to several predictable methods of rendering IFSs. Finally a technique is given for converting a one-line L-system to an IFS, with very pretty results. Can't think of a reason I'd want to do it though!

In summary:

Mainly a list of variations on the theme of Lindenmayer(L)-Systems:
  • stochastic
  • self-sensitive
  • context-sensitive
  • 3D
  • stack-operations [brackets]
  • parametric L-systems
  • table L-systems
  • map L-systems

I have a bit of a problem with this, as it seems to be a largely botched model: each bastardization attempts to make up for some shortcoming of the previous models. Perhaps this shows L-systems provide a valid framework, because it is so easily extended, but I'm not convinced. When you see a model overloaded like this in the software engineering world it should ring alarm bells. It is also horribly open to abuse by authors who extend L-systems to the point of being any-other-language. As Chomsky points out in Three Models for the Description of Language, a grammar should reveal some insight into the structure being examined. Whereas these extended L-systems in practice are a mess of concepts and corrections, showing little more insight than if the same structures were specified in any other programming language.

From an engineering point of view simple L-systems work well because there is little growth, just structure. As you start to try and simulate growth and structure using the same technique, it becomes painfully clear from the mess of L-systems in the book and other literature that the system is straining. I can't help feeling that you need to model the two systems individually; L-systems seem to work well for structure but are clumsy at best for simulating growth. This is a view shared by Hart et al. in the introduction to the paper Structural simulation of tree growth and response.

An annoying thing about procedural modeling is the quantity of domain specific knowledge you have to pick up to discuss features. Plants have a lot of parts I didn't realize needed names.


Holyer effect

Google seems to know what direction I'm going to research next better than I do.

Classic example of the Holyer effect. I knew it would help to raise the profile of procedural modeling, so I started writing about it in this blog. A consequence I never considered was that it could obstruct my research. I guess you can't study something without changing it after all ;)

(Edit: you get the same result however you spell modeling :P )


Stormr 0.2


What is it?

It creates videos of a guy presenting your flickr image stream like this:

It reads out the name and the descriptions and points out any notes added to the images.

How do I use it?
  1. Install and run moviestorm (400Mb download) to check it works. You have to sign up on the website to get it started.
  2. Upload any pictures you want to flickr (you might have to sign up with flickr)
  3. Find the Stormr source code here (sorry no binary any more).
  4. Enter your flickr username, select the set you want, then follow the instructions to start moviestorm, and load the newly created movie.
  5. At this point you can play with the presentation in moviestorm. To render it to video click on this button
then this button
Leave moviestorm running full screen while it renders, then click on one of the options near the bottom of the screen to view the movie or upload it to the interwebs. (Click the blue underlined link to show the file location on disk.)

  • flickrj for a great flickr bridge
  • piddles for the audio
  • Matthias Pfisterer for some audio concatenation stuff

Source (LGPL'd of course)

Known issues:
  • Deal with HTML links/markup looking funny
  • Fix the subtitles so they can't be white on white.
  • Fix the compression artefacts on the images
  • Large movies (30+ image sets) may be unusable and may crash Stormr
  • Doesn't work on Java 1.5

Development notes:

I've taken the philosophy that it should work with the base pack and be fairly simple. I might put some fancy camerawork in, but I leave using assets from other addon packs to the user.

It's an example of a bridge application: pulling the printed image into the video age. There's a lot of other content these things could be built for very easily.

Flickr notes are public (anyone can add one), so be careful what ends up in your video... ;) You can turn this feature off in flickr's settings.

Here are some future ideas:
  • Add crowds to read out comments, and point to areas of the image they've commented on.
  • Add some indicator of how many views each image has
  • Add a license (Copyright, Creative Commons etc...) management system. Perhaps Stormr should only work on CC derivative-works-allowed (or laxer) input images?
  • Add a text-to-speech system (although these always sound pants?)
  • Port it to the cloud and allow it to download a zipped movie to your computer.
  • Find a way to add web links to the video file, even if someone's got a patent on it.
  • Figure out a way to get out of that damn volcano that moviestorm land is in...
Things I can dream about:
  • Having more content in the base pack to work with - just some background noises that aren't outside would make a difference.
  • Shortfuze providing a service to render videos for me

Same video on different sites fyi

More development videos:

lighting now in (will be available ~Friday):

update: Imitation is the greatest form of flattery! Was great to see the Moviestorm team take Stormr in-house:


Stormr 0.1


Stormr is a bridge between flickr and moviestorm.

The first preview is available here (still pre-beta, so run at y'all's own risk).

Because development videos are fun:-


some procedural cities and architectures

  • Several papers state that game creators take a long time creating city artwork, but no one can cite how long it takes to create the cityscapes in recent games.

The current approach to 3D modeling is to manually create 3D geometry using tools like Autodesk Maya or 3ds Max [interactive visual editing of grammars for procedural architecture]
It can take several years to model detailed 3D land and cityscapes[procedural design of urban open spaces]

  • Presentation really matters - chucking the results through a global illumination renderer (Sity, Mueller's stuff) instantly gives additional credibility to the project.
  • Other procedural architecture round-ups by Citygen people.
  • The two commercial projects (Gamr7 and CityEngine) are European, woo! This means either the Americans haven't noticed this niche yet, or they don't think it's viable. Both companies have academic backers.
  • The unix command "pdfimages pdf-file output-directory-and-stub" is a great way to rip the images from PDF files. Sorry if anyone takes offence at me using their work here; the images give a good impression of what each project is good at. Shout (or leave a comment) and I'll take them down.

Example-Based Model Synthesis (pdf | doi)
Paul Merrell

Texture synthesis is a common technique for extrapolating 2D textures by incrementally adding bits of image. By choosing sections of image that are close to one another in the input, convincing textures can be built up. This paper (and the following one "Continuous Model Synthesis") extend the technique to 3D models very successfully. While not particularly to do with cities, all the examples are architectural.

This technique could be invaluable to artists as it wouldn't require many changes to the art toolchain - the technique can accept any model as input.

The input geometry is subdivided into blocks. These are analyzed to find allowable neighbors along each axis. To evaluate the model, a search is performed to find configurations of allowable blocks. As with texture synthesis, the output can be constrained to volumes to force the structure to grow in a specific location.

Style Grammars for Interactive Visualization of Architecture (pdf | doi)
Daniel G. Aliaga, Paul A. Rosen, Daniel R. Bekins

This paper takes a different approach and shows a technique for semi-automatic extraction of a building grammar from a set of photographs. This grammar can then be applied to stylize another blocky house model.

Several photographs are taken of the desired building. Then a simple 3D model is manually constructed and aligned to the photos using a semi-automatic process; its guesses become better as more elements are aligned.

The paper then makes it appear trivial to deconstruct the given model into a grammar, admittedly taking some architecture-specific properties into account. After orthogonal subdivision, the layout of each floor is represented by strings. Analysis of these strings with a few simple rules can produce the necessary grammar.

Obscured areas of the input photographs have their features extrapolated from similar unobscured areas.

Taking a bit of a tangent, the paper goes on to describe several different rendering techniques at the end. Equalization of the textures ensures a constant colour over a facade.

Continuous level-of-detail modeling of buildings in 3D city models (pdf | doi)
Jürgen Döllner, Henrik Buchholz

The CityGML standard defines some city-centric extensions to XML, and this paper presents a continuous LOD (level of detail) method for the generation of city buildings. The primary target of the work is GIS (Geographic Information Systems). The paper details the class hierarchy used to define each building; this corresponds to the more formal (but less readable) grammars in other work.

While the paper mentions straight skeletons it is ambiguous as to whether they are actually used. The final level of detail is given by texturing using a photo of the real world building.

The level of detail in the paper's title refers to whether a simple block model, the block model with a roof, or the model with a roof and facade elements is shown. This isn't the same continuous-LOD terminology used in video games, where one LOD would be expected to seamlessly morph into the next before the meshes were switched over.

This is a technique for manually constructing buildings.

Procedural Modelling of Cities (pdf | doi)
Yoav I. H. Parish, Pascal Müller

I've waffled on about this paper in-depth here.

It presents some techniques that have become very popular for generating roads, and some (later to be refined) methods of creating buildings.

The input supplied by the user includes geographic properties, such as the valid city area (e.g. not under water), a height map and a population density map. First, major arterial roads are created by allowing roads to grow between population centers. This produces natural-looking roads between the built-up areas. Next, a parallel L-system fills in the gaps between major roads with minor ones, using one of several models and self-sensitivity to give a connectedness to the otherwise tree-shaped result of the L-system.

A shape grammar continually divides the volume made from the extruded footprint of a building to create a model of the building. By rendering different levels of this model, a level-of-detail (LOD) system is easily achieved.

One of the cornerstone papers in procedural urban geometry.

Instant architecture ( pdf | video | doi )
Peter Wonka, Michael Wimmer, Francois Sillion William Ribarsky

This paper details a grammar for the construction of buildings. With a large library of components and a split (continuous subdivision) grammar, a large design space is possible. A control grammar runs in parallel and distributes attributes spatially.

This system of attributes is the main accomplishment of this paper - it allows the design space to be purposefully evaluated. For example every element may have a "simplicity" value, and if the user asks for a "simplistic" building, this can be used to guide the evaluation of the split grammar. Other elements have attributes that ensure they are only used if there is sufficient space.

When all attributes are taken into account, there may be a tie-break between several rules. When such an arbitrary choice has to be made, the choice is "baked in" until the end of the evaluation of that particular building. So if a window type is chosen in a particular building, it will be used in all identical situations in the same building.

Some really nice façade results. The other cornerstone paper in urban procedural geometry.

Procedural Modelling of Buildings (pdf | doi)
P Müller, P Wonka, S Haegler, A Ulmer, L Van Gool

This is one of the papers that pushed Müller & Wonka et al. to the head of the field. Really impressive stuff that I'm going to be chasing for a long time. It outlines CGA shape, a shape grammar for describing buildings. There seems to be continuing work in this area developing CGA shape and it seems to produce some of the most convincing results out there.

The grammar is a shape grammar: one that continually subdivides volumes. It uses rules that replace one shape with one or more others, rules to subdivide over absolute, scaled or repeated parameters, and rules to perform functions on geometric primitives that allow a change in dimensionality (working in 2D, for example).

These rules are fairly declarative ("if you decide to make a courtyard then this is how to do it"). There must be a large library of rules, painstakingly created.

Once these rules have defined a basic mass (the crude structure of the building), visibility testing occurs to retrieve a set of polygon faces that can be further processed for details. These include roof tiles or eaves, doors and windows, all still produced using the shape grammar. Using features called snap-lines, these are aligned to predominant architectural features.

The end result depends on the quality of the rule library, but it does produce some impressive results. This is a fantastic paper, and one that I've already started a more in-depth post on.

The work in this paper is extended (by morphable terminals) in the paper Procedural 3D Reconstruction of Puuc Buildings in Xkipché. CGA shape is again used in Procedural Design of Outdoor spaces.

Citygen: An interactive system for procedural city generation
George Kelly, Hugh McCabe

This screenshot doesn't really do the system justice. It allows interactive modeling of cityscapes: you can go in and tweak parameters for a particular region and get real-time feedback. They borrow some elements from CityEngine (which is becoming the de facto standard), namely secondary road generation using parallel growth, and lot subdivision techniques.

The hierarchy used is to first create primary roads, then secondary roads and finally buildings. The primary roads are generated between user-defined junctions. Again, similarly to CityEngine, the roads "grow" by looking ahead to find desirable features (e.g. least height differential). Finally they are smoothed using splines. Some effort was put into investigating different functions of the height map when guiding roads.

The secondary roads are grown using a "parallel L-system", and produce a pleasing range of results between organic and ordered. Again, as with CityEngine, snapping occurs when two secondary roads get close to one another. Work has been done on optimizing these tests a bit, but not so far as to use a quadtree or similar.

Pavements are created by shrinking the plot boundary, although how the shrink is implemented isn't documented (I should just go and look at the src code). Block subdivision into plots occurs by repeatedly splitting the entire concave block on a line between the midpoint and the centre point of the longest side. Sometimes this is modulated to occur more often on street-side properties.

Country houses are moved back from the street, while town houses are built to abut the pavements.

The buildings are currently simple extrusions that use texture and bump maps.

Roll A City
Graham Whelan, George Kelly, Hugh McCabe

A short paper that builds on Citygen. Strangely, they say the thing they do differently is to not use statistical data as an input; then they use statistical data in the form of a height map. The user lays the main roads (yellow above) and then the detail is generated. This can then be edited by creating new house grammars, or modifying one from an existing library.

Image based Procedural Modelling of Facades
Pascal Mueller, Gang Zeng, Peter Wonka, Luc Van Gool

This paper uses procedural and image-processing techniques to reconstruct 3D facades from a single low-res 2D image. Image analysis breaks the image down into its repeated components and removes the duplicates (above, left). These are subdivided and matched against a library of 3D components by comparing the image to a rendered image of the 3D component. From here a grammar can be constructed to give the entire facade.

Procedural Inc.

This is the accumulation of Pascal Mueller's research in the form of a company. Their major selling points are:
  • Rapid Digital Prototyping
  • Shorter Time-to-Market
  • Cost Reduction
  • High-Quality Content
  • Redefine Design
  • Flawless Integration
Cost is $4000, and they seem to be pitching to a wide range of companies: everything from video games, architecture, archeology and simulation to film & TV. I'll write more once I've had a play with it on my PC.

Introversion games
Chris, Mark, Thomas and John

This is a continuing video games project that shows some real promise. These guys are the heroes of the indie-games network; unfortunately they seem to have grown up and moved from their bedrooms into offices. They don't seem to publish anything about their algorithms, but some of their videos are very insightful and of a higher quality than a lot of the academic tat. Senzee has talked to them and says they're working from Mueller et al.'s excellent CSG and shape grammar stuff.

Gamr7
Lionel Barret, Bernard Légaut

A French company using a consultancy model. Like Procedural Inc. they have academic roots, but they are concentrating on the gaming market. Their selling points seem to be:
  • Automatic Geometric adaptation
  • Automatic HeightField adaptation
  • Automatic LOD removal of objects
  • Exports to Collada for easy mockups and visualisation
  • Exports to xml for easy integration into pipeline
  • Heightfield Import
  • Configurable Forbidden Zones: water, gameplay constraints
  • Configurable Road Network: splines, bridges, etc.
They don't specify a cost, but this is normal for a consultancy. Not very impressive screenshots from them; their buildings all look a bit too similar.

Sity Generator

eves variety

Rather depressingly my old school project doesn't look too out of place among these candidates. It presents some novel-ish techniques that simplify the control of complex architectural geometry generation. The skeleton algorithm is so well suited to architecture generation, I'm quite surprised it isn't used elsewhere (despite being a biatch to implement nicely ;). The design of the system is more homogeneous - relying on the same {plane -> subdivision using voronoi -> extrusion using straight skeleton -> planes} cycle for everything from the city layout (admittedly not as nice as the grown cities, but lower effort) to the generation of garden walls.

It also featured concepts such as Wonka's "baking in" of random values when evaluating a grammar. But the baking wasn't limited to one level of the grammar (the building, in Wonka's example); it occurred at several depths. For example, the probability distribution of the type of building was baked in once per block.


gpu clouds

Will someone please build me a GPU cloud!


For research and commercial applications, cloud computing is a hot subject. It takes 10 minutes to boot an EC2 instance; it takes 10 minutes to boot 100 EC2 instances. I can get billed by the hour. It is a very scalable and cost-effective model - I don't need to buy one hundred computers, I can rent them by the hour. But these instances only have CPUs (because they are all sitting without monitors in a data center somewhere), and I want to buy time on a GPU cloud for several reasons:
  • Massive parallel processing of data - there are guys at Glasgow working on speeding up image recognition using the GPU, and many, many other applications around the world.
  • Production of real time 3D images - Companies like xtraNormal render simplistic graphics using something that looks like OpenGL/DirectX rendered to video. Modern video games show how quickly even more detailed content can be created.
  • Production of non real time images using Gelato or similar.
There are all sorts of fun things GPU clouds might be good for - we are seeing applications where we might not want to store rendered images, but produce and stream them on the fly. How about inserting custom Google-style individually targeted adverts into the billboards in a football game? Into a movie?

But no one has built me a cloud yet :(

It looks like Amazon isn't really interested in this. And the NCSA has a massive...wait for it...16-node cluster. I guess this is a real business opportunity if someone wants to run with the ball!

The phrase to use when googling this subject is "GPU cluster"; this gives more results, but none (that I found) for on-demand use.

There's no such thing as a new idea - AMD want to build a cloud later this year, more or less for exactly the reasons above; lovely, can't wait. It's strange that this is being done first by a hardware company (even if it's looking wobbly financially) - does the rest of the cloud world know something AMD doesn't?

Hoopoe have a CUDA/OpenCL cloud in alpha. They are giving away free usage until they get the bugs out :) The interweb says it'll be announced at the start of November...

morphing pt 2

Over a couple of evenings, I've returned to morphing - I found a much simpler, if more computationally expensive, algorithm for morphs - feature-based metamorphosis, by Beier and Neely. After some false starts -


it now works. I've just got some parameter tweaking to go, so here are some results using different control points - the smaller fish are the output, the larger the input (adding these points is a skill, as you have to avoid pinching the transform anywhere).

This takes a while though: 10 control lines render in colour on an 800x300 image pair in 30 seconds. Hopefully a video will follow tomorrow evening.
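For anyone curious, the heart of Beier and Neely's method is a per-pixel change of coordinates relative to each control line. Here's a minimal sketch of the single-line case (method and variable names are mine, not from the paper):

```java
// Sketch of the per-pixel mapping at the core of Beier & Neely's
// feature-based metamorphosis. One control line only; names are mine.
public class BeierNeely {
    // Given pixel (x, y) and a control line from (px, py) to (qx, qy),
    // compute (u, v): u is the fraction along the line, v the signed
    // perpendicular distance from it.
    public static double[] toUV(double x, double y,
                                double px, double py, double qx, double qy) {
        double dx = qx - px, dy = qy - py;
        double len2 = dx * dx + dy * dy;
        double u = ((x - px) * dx + (y - py) * dy) / len2;
        double v = ((x - px) * -dy + (y - py) * dx) / Math.sqrt(len2);
        return new double[] { u, v };
    }

    // Map (u, v) back to a pixel relative to a second line: this is where a
    // destination pixel samples from in the source image.
    public static double[] fromUV(double u, double v,
                                  double px, double py, double qx, double qy) {
        double dx = qx - px, dy = qy - py;
        double len = Math.sqrt(dx * dx + dy * dy);
        double x = px + u * dx + v * -dy / len;
        double y = py + u * dy + v * dx / len;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // A point halfway along a horizontal line maps to halfway along a
        // rotated (vertical) copy of that line.
        double[] uv = toUV(5, 0, 0, 0, 10, 0);          // u = 0.5, v = 0
        double[] p = fromUV(uv[0], uv[1], 0, 0, 0, 10);
        System.out.println(p[0] + ", " + p[1]);          // 0.0, 5.0
    }
}
```

The full algorithm blends the contributions of every control line (weighted by length and distance) and cross-dissolves colours between the warped pair - hence the 30-second render times.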



[edit:] a newer post about morphing, with links to code, is here.

Last week I made the mistake of wanting to morph some images together in code. All I wanted was to build a fish...


It should have been trivial to build a tree of components (body - mouth - ventral fins - eyes) and for each component to have a range of options to morph between. Then a little bit of hackery to stick the components on in the correct place (tail-on-the-donkey stuff*) and I'd have a way of evaluating the 2d images of a subset of the Teleostei morphospace.

This is just the same technique as used in video games - morphing 3d meshes between (or even beyond) extremes-

This works by deforming a mesh (with a set topology) between a set of extremes, with soft masks so that features only get applied in certain ways. Of course this means all sorts of problems about not knowing exactly where a person's lips are when animating them. But video games are still crude enough to get away with this, and hopefully motion retargeting will eventually be powerful enough to fill in the gaps. Of course the morph also has to be applied to tessellating body parts such as neck and hair.

While this doesn't really count as procedural graphics (we're evaluating morphospace between artist-defined points, rather than evaluating a new area of the space), it does show how far we can get without resorting to constructive geometry.

Morphing 2D images has a long history. As this overview paper points out, the mighty Industrial Light and Magic defined a type of 2D morph in "A Two-Pass Mesh Warping Algorithm for Object Transformation and Image Interpolation" (which is a really hard-to-find paper btw). ILM first used morphing in a movie in Willow.

This involved laying an identical mesh over two images, editing the vertex points of that mesh for each image, then morphing the location, size and colour of each rectangle to obtain a new image between the pair. The vertex points identify features that we want to see move from the first image to the second during the morph.
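The vertex blend underneath this is plain linear interpolation; a toy sketch (names mine - the real two-pass algorithm then resamples the images through the blended mesh, rows then columns, which I don't show):

```java
public class MeshMorph {
    // Linearly interpolate corresponding mesh vertices at time t in [0, 1].
    // src[i] and dst[i] are the same vertex laid over the two images.
    public static float[][] blend(float[][] src, float[][] dst, float t) {
        float[][] out = new float[src.length][];
        for (int i = 0; i < src.length; i++) {
            out[i] = new float[src[i].length];
            for (int j = 0; j < src[i].length; j++)
                out[i][j] = (1 - t) * src[i][j] + t * dst[i][j];
        }
        return out;
    }

    public static void main(String[] args) {
        float[][] a = {{0, 0}, {10, 0}};  // two vertices of the source mesh
        float[][] b = {{0, 4}, {14, 0}};  // the same vertices on the target
        float[][] mid = blend(a, b, 0.5f);
        System.out.println(mid[0][1] + " " + mid[1][0]); // 2.0 12.0
    }
}
```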

Another interesting technique was morphing the outline of convex shapes using (follow the link for a cute Java applet) 'the two dimensional Gaussian image'. By exploiting the fact that the shape is convex, you can store each edge by its angle and length. Using just a list of these two values makes it easy to create a morphing algorithm between shapes with different numbers of sides. But it doesn't specify anything about the interior of the shape, or work on concave polygons.
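A rough sketch of that representation, assuming edges are keyed by angle (the `TreeMap` layout and all names are my own invention, not the applet's):

```java
import java.util.TreeMap;

// Convex outline morphing via a discrete "Gaussian image": each edge is
// keyed by its angle and stores its length.
public class GaussianMorph {
    // Edges present in only one shape get length 0 in the other, so shapes
    // with different side counts still interpolate cleanly.
    public static TreeMap<Double, Double> lerp(TreeMap<Double, Double> a,
                                               TreeMap<Double, Double> b,
                                               double t) {
        TreeMap<Double, Double> out = new TreeMap<>();
        TreeMap<Double, Double> angles = new TreeMap<>(a);
        angles.putAll(b); // union of all edge angles
        for (double ang : angles.keySet()) {
            double la = a.getOrDefault(ang, 0.0);
            double lb = b.getOrDefault(ang, 0.0);
            out.put(ang, (1 - t) * la + t * lb);
        }
        return out;
    }

    public static void main(String[] args) {
        TreeMap<Double, Double> square = new TreeMap<>();
        for (int i = 0; i < 4; i++) square.put(i * Math.PI / 2, 1.0);
        TreeMap<Double, Double> tri = new TreeMap<>();
        for (int i = 0; i < 3; i++) tri.put(i * 2 * Math.PI / 3, 1.0);
        // Halfway between a unit square and a unit triangle: six edge
        // directions survive (only the angle-0 edge is shared).
        System.out.println(lerp(square, tri, 0.5).size()); // 6
    }
}
```

Reconstructing the polygon is just walking the edges in angle order; because each input's edge vectors sum to zero, so do the interpolated ones.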

These two techniques aren't quite what was needed, because fish aren't very rectilinear or convex; "feature based morphing" was closer. The idea here is to take your two fish and add pairs of corresponding edges/outlines on the two images. You can then triangulate both images, and morph each triangle from its start texture, position and colour to its destination. So there are two problems:
  • finding a compatible triangulation between both outlines
In order to get clean & tidy morphs, it's desirable that both triangulations be topologically identical. As one triangulation may not be valid for both outlines, this is rather tricky ("invalid triangulation" in the below image).
  • finding a way to morph the triangulations
Again the naive answer - linear interpolation of the points - gives ropey results: triangles can disappear/flip, or the outline might not stay a simple polygon throughout the morph. Imagine morphing two polygons, one rotated 180 degrees relative to the other: at the mid-point of the morph all points would be at the same place. We should really use a technique that guarantees that each polygon retains as much of its area as possible at all points in the morph. (Solutions to this problem involve some heavy maths that made me run away.)
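The rotated-polygon failure is easy to demonstrate; a toy sketch (names mine) showing a triangle's area collapsing to zero at the mid-point of a naive linear morph:

```java
public class NaiveMorph {
    // Signed area of a triangle; zero means the triangle has collapsed.
    public static double area(double[][] p) {
        return 0.5 * ((p[1][0] - p[0][0]) * (p[2][1] - p[0][1])
                    - (p[2][0] - p[0][0]) * (p[1][1] - p[0][1]));
    }

    // Naive linear interpolation of corresponding vertices.
    public static double[][] lerp(double[][] a, double[][] b, double t) {
        double[][] out = new double[a.length][2];
        for (int i = 0; i < a.length; i++) {
            out[i][0] = (1 - t) * a[i][0] + t * b[i][0];
            out[i][1] = (1 - t) * a[i][1] + t * b[i][1];
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] tri = {{1, 0}, {-0.5, 0.87}, {-0.5, -0.87}};
        double[][] rot = {{-1, 0}, {0.5, -0.87}, {0.5, 0.87}}; // 180° rotated
        System.out.println(area(lerp(tri, rot, 0.0))); // full area
        System.out.println(area(lerp(tri, rot, 0.5))); // 0.0 : collapsed
    }
}
```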

I read a few papers about these two problems. But there is a huge quantity of material on the subject. What follows is incomplete, but gives a taste of the solutions involved.

This paper shows that a compatible triangulation must exist, by combining valid triangulations of the first and second polygons. It's easiest to imagine by unfolding both polygons and their triangulations to the same regular representation and then intersecting all the edges.

compatible triangulation

The problem with this method is that it introduces many additional vertices; the worst case adds O(n²) Steiner vertices. So there is a good deal of research that seems to just use heuristics to find compatible tessellations with fewer Steiner vertices.

Steiner vertices are a term stolen from Steiner trees, a construction similar to the minimum spanning tree, except that we're allowed to add additional points to decrease the length of the tree.

A better triangulation technique uses the minimum-link-path distance as a heuristic to find pairs of vertices on both shapes that would introduce the fewest additional vertices. This led me on a bit of a false lead, investigating how to calculate these link paths. One early solution, by Subhash Suri, is outlined in A linear time algorithm for minimum link paths inside a simple polygon -

minimum link path

The algorithm starts by triangulating the polygon (any old way); since the connectivity between the triangles forms a tree (each chord splits the polygon in two), there is one path of triangles from the start to the end (1...10 in the above). The path will not go below 1 or above 10 (i: shown as 'x'), so those triangles can be discarded (ii). Then draw the visibility polygon (light green) from the starting point to find the window (dark green) (ii). The window intersects some of the numbered triangles, and we use the direction (of enumeration) of the highest-numbered triangle intersected to decide which side of the visibility polygon the window is on (in some cases it will have two or more sides, not shown). This is repeated (iii), finding the visibility of consecutive windows, until we can backtrack from the end to the start (iv) to create a minimum link path.

However (newbie research error: reading the mathsey bits of the paper and drawing the above took a good while), Suri later published a more effective mechanism.

At this point I gave up researching the area and went back to trying to find an outline morphing library (I never did find one - anyone know of a good open-source image morphing lib that isn't xmorph?). I learnt a lot about geometry, and more importantly I learnt my lesson about blindly wandering into rabbit warrens of papers.

afternote: One area that I didn't come across was how to morph curves that bound an area; this would let you define a boundary using a b-spline or similar and still morph the underlying texture. Most modelling tools rely on vertex morphing. I wonder if there is any work to be done morphing 3D curves (NURBS etc...)?

* hey that's an idea morphing donkeys!

Fallout 3

Just laid this game to rest :)

I was consistently amazed by the scale of the game. Reading various wikis now, it seems that I did most of the quests, but I always felt like there were more out there. Between some fantastic storytelling and enough variables, it really felt like a world to be explored. I can only guess at how long it took the guys to create all the scenery and terrain (is 200 man-years a close guess?). Lots of nice geometry tools (SpeedTree for foliage; morphing & colourable heads with stick-on hair for people) must have eased the burden.


DC wastelands

The sheer scale of the DC wastelands was what made it for me - I came across the "you can't go any further in this direction" message once in 45 hours of gameplay (I didn't go looking for it, but you get the idea). You could explore most of the buildings in the above image, and although most weren't very involving, there was enough detail to let the game world come across believably. Games like this are few and far between (LBA and Outcast spring to mind).

It felt like I had the choice to be a good or a bad character. Being good or bad changed the way some characters reacted, but it turns out it has no effect on the main story line. Convincing story telling covered this up nicely.

It's the kind of world you can almost believe could be created by one talented (techy-ish) writer and some really clever yet-to-be-invented tools. If you take the range of art in Oblivion (a fantasy/medieval game using a similar engine) and Fallout 3 and create a procedural method for creating it all, then one guy could conceivably create the whole world. I think we could get the creation time of Fallout down to the length of time it takes to write a novel....?

But then we could outsource the level design, or build it up using the open-source framework as a model. Almost like a wiki, where people could create stories and subplots in their 10 km², and a player could wander a world that was expanding as fast as they could explore.

The downside of Fallout 3 (there always is one) was that I had big problems with crashes - I ended up playing windowed, and still got crashes every 2 hours. Trying to use the rpg-combat system (VATS) in a busy environment (like the nice set pieces) caused the game to lock up. OS, HW and game were patched and up to date. More than a little shoddy on Bethesda's part (what about the QA guys?). Bits of the game were broken - some robots didn't appear in BigTown to let me complete a quest. But all the niggles didn't stop me playing, so it didn't really matter.

But now I might just slip back into the wastelands to see if I can find that Oasis Three Dog was talking about ;)


PhD angst

So... I'm now two months into this PhD thing & many things are starting to fall into place; however, lots more things have appeared on the horizon that I don't like the look of.

I don't have to do anything that isn't relevant, you say? Or anything that isn't interesting?

This is a blessing and a curse; mix these two and you can become lost in a rabbit warren of academic references and citations. It's obvious now why academics invented the internet - surfing the web is just a degenerate form of surfing citations/cited-bys. Having lost several days (without taking notes - bad twak!) down various versions of these rabbit holes, I think the same dangers apply to both - they're endless time holes. I'm going to try to write up my half-complete findings on image morphing soon (honest).


There is def. an art to reading. I can't read everything, ever. It is very much about always being on task, and checking what you are taking away from a text. This is a real problem in a subject that is new to me, like biology. Skimming great biology textbooks means that you're too far away from the subject to take away the ideas that you need. But reading them in their entirety comes down to taking first- and second-year biology classes - something I don't have the time (or ability to memorize random names) to do.

Biology has many concepts and features that weave together into a web of knowledge that is denser than CS's. The subject feels backwards - undergrad textbooks flag massive areas of knowledge as unknown and under research. This feels quite alien, because the bits of knowledge that the CS undergrad course admits not knowing about are few and far between, and those that we don't know about (P == NP?) are presented as "can we ever know this?".

The lack of direction atm is really causing me to lose momentum. Being able to explore the whole world is great, but you can only explore a little at a time, and that's really frustrating. I think I have a real need to ground my knowledge by applying it or otherwise spending time digesting it. Grounding is thinking or coding or writing a blog post (hence the ramblings here). I get distracted thinking, so coding/writing is a great way to stay on topic*.

I feel like I make such slow progress. I have lots of good intentions - "reading & documenting a new paper every morning" was one, but I'm still spending my whole time reading books, and writing up a paper takes a morning in itself. My supervisors are very good, assuring me that just exposing myself to all this material is def. the correct direction. Hopefully more discipline will follow after Christmas.

The department is the first place where I have challenging conversations on a daily basis. People actually questioning my opinions, so unlike my undergrad degree :). I guess this is what I missed at Oxbridge (still can't find Oxbridge on a map tho).

In this blog I'm trying a model of open research - documenting those directions and ideas that seem promising (and that I have time to write out). I'm not pretending everything here is correct, accurate (or even well thought out - this is a personal blog/notebook). It should be an asset to me when writing up end-of-year reports, as the material is closer to being publishable. My ulterior motive is to try and build some reputation (pagerank?) for the topics I'm reading.

My subject "procedural geometry" isn't very well known in the way I'd like it to be, and I think I was hoping to get in early and grab some obvious research areas. For this reason it didn't really matter where I studied (although I am (very) happy with my supervisors and environment in Glasgow), but it did take me a couple of years to find someone (SICSA! woo!) to fund it. I was surprised that there was no computer graphics here (apart from image analysis). This means that I'm on my own even for undergrad things like geometry and suchlike, but since it's just another topic on the heap to learn, it's no great issue. Perhaps I should find a summer school or similar to fill in the massive holes I have in my computational geometry knowledge?

This really does explain a lot about how academia functions. The people that get through a PhD are unlikely to be the go-getters (unless they are particularly brilliant), but those that can build on something for a very long time (some people who are starting are only 21, meaning one sixth of their life is being invested) while fighting against all the flak from other academics and nay-sayers. This means that people in academia learn to be right the hard way - by being smacked down and wasting huge quantities of their life when they're wrong. Quite a fantastic entry gate to have, mind you, although not entirely compatible with the ability to teach.

In summary, lots of what this book says is true.

*To force the commoditization of a concept, to be able to use it as a building block for other ideas (#define "personal commoditization of an item of information" grok).


stand back, I'm going to try biology

(notes from a 1994 copy of Molecular Biology of the Cell; a cheeky Russian has posted all the images (and Russian text) from the textbook here.)

Geometric Structure in the Early Vertebrate Embryo

The head-to-tail (anteroposterior) axis is already defined in the unfertilized egg; however, the back-to-stomach (dorsoventral) axis of the fertilized egg is only defined by where the sperm enters the cell. Roughly, the sperm enters the egg around about your belly button (or rather causes your belly button to appear there). This is a clear data point in the nature/nurture debate - which way up you are is defined by the point where a sperm first touches the egg, a very chaotic event.

After a while we have a bunch of cells packed into a membraney sack. The shape they make is an interesting one - basically packed spheres within a larger spherical shell. Space subdivision based on membrane tension, rather than distance to a point. This is very similar to soap bubble patterns. There are equations defining their shape at their boundaries (the link is worth following for the pretty renders of bubbles). Something similar shows up in weighted Voronoi tessellations as well.
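The weighted-Voronoi idea is tiny to sketch as a power diagram - each site gets a weight that inflates its cell, much as pressure inflates a bubble (sites, weights and names here are all my own toy values):

```java
public class PowerDiagram {
    // Power-diagram assignment: a point belongs to the site minimising
    // squared distance minus the site's weight. Larger weights grow a cell.
    public static int owner(double x, double y,
                            double[][] sites, double[] weights) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int i = 0; i < sites.length; i++) {
            double dx = x - sites[i][0], dy = y - sites[i][1];
            double d = dx * dx + dy * dy - weights[i];
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] sites = {{0, 0}, {10, 0}};
        // Equal weights: the boundary sits at x = 5, as in plain Voronoi.
        System.out.println(owner(4.9, 0, sites, new double[]{0, 0}));  // 0
        // Weighting the second site pushes the boundary towards the first.
        System.out.println(owner(4.9, 0, sites, new double[]{0, 30})); // 1
    }
}
```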

Sodium is pumped into the space between the cells, and via osmosis water follows. This forms a single cavity with a crunchy shell of cells. This leaves room for the wonderfully titled gastrulation, a process, it seems, whereby an arse appears and grows into a mouth. (This explains a lot about what comes out of people's mouths.) Seriously - an indentation grows that forms the anus, then continues up to create a mouth. This shapes the embryo into a torus around the nascent digestive system.

But where does this indentation come from, and how is it controlled? A homogeneous structure suddenly creates this blastopore lip that becomes a digestive system. If you splice a second lip into a developing creature, both digestive systems grow and you end up with something none too pretty:-

This lip was one of the original organizer features documented, and has the name Spemann's Organizer. Its initial formation seems to be a result of regulative development (several cell movements in combination with morphogenic fields), and it goes on to control a lot of important growth functions in early embryos.

Regulative Development

The two extremes that define development.
  • Mosaic development - pattern of the body determined by features of the egg
  • Regulative development - body determined by cell-cell interactions after fertilization
Once again these are the ongoing nature/nurture (compile-time/runtime; creationism/evolution) principles. I don't think I can really study mosaic development; it seems to be fairly unknown (and, as the output of an evolutionary system, messily complicated). A more productive line of research is regulative development. Trying to understand genes is hard; trying to understand the physical properties exploited by the genes is much easier (and I have a very low tolerance for comprehending TLA chemicals). Various principles from this field follow:

  • When certain types of cells are adjacent, new cell-division behaviours can be turned on. This induction can happen several times to build up layers of different cells as the adjacency changes. Based on the concentrations of morphogens, cells can develop in different directions. Cell age also has an effect, older cells can behave in different ways to new cells.

cell induction
i) two adjacent regions of cells
ii) new cell type induced by adjacency
iii) strata of cells via induction

  • The concept of function calls appears - "in this way the final specification of how a limb cell should behave is built up combinatorially; first it is supplied with information as to whether it is to be a leg or wing, then signals within the growing limb bud specify more fine-grained components of positional value, reflecting the precise position within the limb". So a programmer might say that values have been curried into the code for each part of the limb. You can see this in an experiment - if the tip of an embryonic chicken's foot is transplanted onto the end of the wing, you get a wing with a toe on the end; that is, the tissue has already been curried (fated) as part of a leg, but doesn't know which part it is yet. (I wish computer science involved experiments like that - I mean, just spending the afternoon splicing bits between embryos is more excitement than the entire CS dept gets in a year. Perhaps I should start a wet-lab in one of the terminal rooms that people never use - how about this one: mixing up cells in an embryo, just to see what happens!)

  • Cockroaches show another pattern - the parts of their legs seem to know where they are in the sequence; somehow they know their positional values. Through a process called intercalation, any missing parts (i.e. a mid-portion of leg removed in an experiment) are eventually grown back. There's an idea for geometry deformation here - if you have a house and stretch out a wall, sensible house-parts should grow to fill in the space (much like Mueller's 2006 paper).

Linear animals - built end to end; the easiest thing to encode for, & follows the digestive system. Makes HOX etc... genes feasible?

  • "Programmed death: A C. elegans hermaphrodite generates 1030 somatic cell nuclei in the course of its development, but 131 die" - programmed cell deaths, cells that are "fated" to die. Is it possible for a system to have anti-features? Features that cause something to disappear?

  • Cells use a system of morphogenic fields to control features:

sausage cell

For example, if you want a feature in the middle of an organism, cells should be programmed to transform in that way if the concentrations of red and blue (in the above diagram) are identical. Most of these morphogens are unutterable TLAs, but their production and effects are quite relevant. Experiments that showed these principles included taking a fly egg (Drosophila) and removing some of the fluid containing the morphogens from the head end, leading to a fly without a head. If you stick some arse-morphogen(tm) in place of the head field, you get a fly with two sets of abdominal segments.
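The equal-concentration rule is easy to sketch in 1D, assuming a simple exponential fall-off from each source (the decay model and names are my assumptions, not the textbook's):

```java
public class MorphogenField {
    // Exponentially decaying concentration from a source at position s.
    public static double conc(double x, double s) {
        return Math.exp(-Math.abs(x - s));
    }

    // Mark cells where "red" (diffusing from the left end) and "blue" (from
    // the right end) are nearly equal: the mid-body feature rule above.
    public static boolean[] feature(int n, double tol) {
        boolean[] out = new boolean[n];
        for (int i = 0; i < n; i++)
            out[i] = Math.abs(conc(i, 0) - conc(i, n - 1)) < tol;
        return out;
    }

    public static void main(String[] args) {
        boolean[] f = feature(11, 0.01);
        for (int i = 0; i < f.length; i++)
            if (f[i]) System.out.println("feature at cell " + i); // cell 5
    }
}
```

Note the robustness the text mentions: shrink or grow the organism (change n) and the feature still lands in the middle, because only the ratio of the fields matters.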

These fields would be rather robust when dealing with geometric features (if disrupted, the system can re-organize itself) and may well prove to be well suited to procedural geometry that adapts to new features. The field effects could also be simulated using Voronoi-like arrangements (I'm a bit hooked up on V. diagrams if you haven't noticed).

These fields can be seen as giving input to the growth-algorithm. They certainly solve the problems of symmetry in simple grammars (since diffusion is equal in all directions). There are several techniques that seem to be used - differentiating features by field strength or using a combination of fields, each producing another one -


Note how similar this is to the concept of induction outlined above. It is interesting to note how these fields develop and are used (above): by excreting morphogens based on certain criteria, you can build up harmonic patterns to control feature development. Some of these criteria are:
  • A function of another field strength.
  • Equal strength of a number of other fields
  • Orientation of the field (use the local field gradient for some function).
So each of the harmonic divisions in 4) above can be used as the starting point for a feature/another technique along the length of an animal's body.

  • Another technique is shown by the evaluation of "pair-rule" genes. A number of genes (hairy, even-skipped, paired, fushi tarazu), of different lengths, are activated at intervals of two "segments". Different start locations for the genes are apparent, and in combination (boolean operations - and, or, not, xor etc...) they can build more patterns.


This is really the idea expressed in Mueller's '01 paper, used in 2D to position windows on a façade (a boolean union of 2D pulse functions).
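A sketch of the combination idea, with each "gene" modelled as a phase-shifted pulse train (the modelling choices and names here are mine):

```java
public class PairRule {
    // A periodic on/off "expression" pattern: on for `width` cells out of
    // every `period`, starting at `phase`. Loosely models one pair-rule gene.
    public static boolean[] pulse(int n, int period, int phase, int width) {
        boolean[] out = new boolean[n];
        for (int i = 0; i < n; i++)
            out[i] = ((i - phase) % period + period) % period < width;
        return out;
    }

    public static void main(String[] args) {
        boolean[] a = pulse(8, 4, 0, 2); // ##..##..
        boolean[] b = pulse(8, 4, 1, 2); // .##..##.
        StringBuilder and = new StringBuilder();
        for (int i = 0; i < 8; i++)
            and.append(a[i] && b[i] ? '#' : '.');
        // AND-ing two coarse patterns yields a finer one: .#...#..
        System.out.println(and);
    }
}
```

Swapping the AND for OR, NOT or XOR gives the other combinations mentioned above; the same boolean-union trick positions windows on a façade.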
  • If we suddenly drop down to later in the development cycle (biologists don't seem to know what goes on in between), there is a technique for producing occasional features called lateral inhibition.

differentiation via inhibition

As in i) above, all the cells start off with the same concentration of signal. Cells near those with a larger signal stop signalling; the result iii) is that we end up with an interspersed pattern. In this example each of the selected cells then becomes a mother cell to a clutch of hair cells, while the remaining cells carry on as normal to become skin. The more complex patterns of stimulation/inhibition using morphogens are best described by Turing (free via the Turing institute).
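One crude way to sketch lateral inhibition is a single pass that keeps only local maxima; real models iterate with feedback, so treat this as a cartoon (names and values are mine):

```java
import java.util.Random;

public class LateralInhibition {
    // One pass of lateral inhibition on a ring of cells: a cell keeps
    // signalling only if neither neighbour signals more strongly. Winners
    // can never be adjacent, which gives the interspersed pattern.
    public static boolean[] winners(double[] level) {
        int n = level.length;
        boolean[] out = new boolean[n];
        for (int i = 0; i < n; i++) {
            double left = level[(i + n - 1) % n];
            double right = level[(i + 1) % n];
            out[i] = level[i] > left && level[i] > right;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] level = new double[20];
        Random r = new Random(42); // tiny random differences in signal
        for (int i = 0; i < 20; i++) level[i] = r.nextDouble();
        StringBuilder sb = new StringBuilder();
        for (boolean b : winners(level)) sb.append(b ? 'H' : '.');
        System.out.println(sb); // H = future hair mother cell, . = skin
    }
}
```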

Final thoughts: how much of this is suitable for algorithmic control? For artist control? The design certainly feels like it comes from the transcendent, and is totally uncontrollable and unplannable. If every technique proves to be too complicated for an artist, then morphogenic fields are as good as any? However, my first attempt at creating a cell membrane using morphogenic fields had positive results within 3 hours -


a841 north of sannox

a841 north of sannox

Fantastic camp & ride around Arran last weekend. This photo was taken in the empty & hilly north of the island; you can just see Ben powering off into the distance. The south of the island was very up-and-down, but none of the large hills from the north. The whole place was so nice and quiet (and sunny) it made it worthwhile despite being shagged by the end of the day.


engineered emergent behaviour

More morphogenic field playing, using this matrix

{{-1f,  0f,  0f},
 {10f, -.1f,  0f},
 {-2f,  3f, 20f}};

(horizontal = cell type, vertical = attraction to field)
(red, green, blue)
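To make my reading of that matrix concrete (I'm assuming rows are fields and columns are cell types, per the axis notes above; the force model is a sketch, not the actual simulation code):

```java
public class FieldAttraction {
    // The matrix from the post: entry [field][type] is how strongly a cell
    // of that type is attracted to that field (my reading of the axes).
    static final float[][] ATTRACT = {
        { -1f,   0f,  0f },
        { 10f, -.1f,  0f },
        { -2f,   3f, 20f } };

    // Net pull on a cell of the given type, for sampled field gradients.
    public static float pull(int type, float[] fieldGradient) {
        float f = 0;
        for (int field = 0; field < ATTRACT.length; field++)
            f += ATTRACT[field][type] * fieldGradient[field];
        return f;
    }

    public static void main(String[] args) {
        // A red cell (type 0) in a pure red-field gradient is repelled...
        System.out.println(pull(0, new float[]{1, 0, 0})); // -1.0
        // ...but strongly drawn up the green field.
        System.out.println(pull(0, new float[]{0, 1, 0})); // 10.0
    }
}
```

Self-repulsion plus cross-attraction is what lets a ring of one type settle around a core of another.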

We can get double-walled cells! (almost) The white field is the red cell morphogen.

I'm considering that more rules are needed to make the cell-wall continuous and unbreakable. Maybe adding springs between the cells once they've been next to another cell of a certain type for a set period.

[edit] these people have been playing these games since 94.

[edit] this file will run the simulation on your computer if you've got Java 1.6 and webstart enabled. The referenced files contains the source code. Thanks Phys2D!


physics2d games

Still playing with cellular automata in Phys2D -

Take some gravity-less balls. (Colours don't mean anything)

Now imagine each gives off a chemical trace-

Make the cells move towards the field. This is kinda like a gravity simulation (but gravity with history). There's also some jiggle in there - "brownian motion" - to keep things moving. The cells look about 3x their radius in each of the 4 compass directions to figure out where to go. If they look less than their diameter away, they don't really do much :(
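That sampling rule might look something like this (a sketch with invented names, not the actual Phys2D code):

```java
public class FieldFollower {
    // Pick a move direction by sampling the deposited field at 3x the
    // ball's radius in each of the four compass directions.
    static final int[][] DIRS = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };

    public static int[] bestDirection(float[][] field, int x, int y, int radius) {
        int[] best = null;
        float bestVal = field[x][y]; // a sample must beat staying put
        for (int[] d : DIRS) {
            int sx = x + d[0] * 3 * radius, sy = y + d[1] * 3 * radius;
            if (sx < 0 || sy < 0 || sx >= field.length || sy >= field[0].length)
                continue;
            if (field[sx][sy] > bestVal) { bestVal = field[sx][sy]; best = d; }
        }
        return best; // null = no pull; fall back to brownian jiggle
    }

    public static void main(String[] args) {
        float[][] field = new float[10][10];
        field[9][4] = 5f; // a strong deposit to the "east"
        int[] d = bestDirection(field, 3, 4, 2);
        System.out.println(d[0] + "," + d[1]); // 1,0 : head east
    }
}
```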

Adding some field decay back in and tweaking some values gives us patterns that build themselves a potential hole to stay in and start spinning (gravity! there's even a cog arrangement in there). These guys are good at chasing one another, but can also just follow their own track again and again (ants?). Notice they also often just join in pairs and go for a spin. There's no friction on the movement (just bounds), so they keep going once they start (I think it's a property of the physics system).

The patterns that they stop in are nicely organic and might be useful in the future...

This is more fun than game of life stuff :) Apologies for youtube's awful video compression...Code here, (tis messy & you'll need phys2d)

The balls in this one can only see blue

Here's coloured gravity ;) each cell emits a field that is 3x more attractive to cells of the same colour than different colours:

Code here (still messy)
(this has 200 circles, and is reaching the limits of what Phys2D can do in real time). This next one switches off the red's attraction to its own field, and increases blue and green's attraction. And we get some really nice emergent behaviour - the reds form a cell wall around blue and green :)