Monday, January 26, 2009

reverse engineering windows

Heh, okay, not actually Windows, but windows. A follow-on from this bunch of window photos. The idea is to design a meta-window that can be evaluated to produce as many real-world windows as possible. We're not trying to represent all windows (Gaudí and the Seattle library), just as large a fraction as we can.

So the basic idea is to construct a recursive nesting of graphs that are used as extrude paths for the window frames:

Each nesting represents a different window aspect (stone surround (blue, above), wooden frame (red) and leaded (green) glass (cyan) ). Here's a rather silly example of this:
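As a data structure, the recursive nesting might look something like this. A sketch only: the `Frame` class and its fields are invented for illustration; each level holds the extrude path for one aspect plus the frames nested inside it.

```python
# A sketch of the recursive nesting as a data structure. The Frame class
# and its fields are invented; each level holds the extrude path for one
# window aspect plus the frames nested inside it.

class Frame:
    def __init__(self, aspect, outline, children=None):
        self.aspect = aspect      # e.g. "stone", "wood", "lead", "glass"
        self.outline = outline    # (x, y) points forming the extrude path
        self.children = children or []

    def depth(self):
        """Nesting depth: 1 for a leaf frame (the glass itself)."""
        return 1 + max((c.depth() for c in self.children), default=0)

# stone surround -> wooden frame -> leaded lights -> glass
window = Frame("stone", [(0, 0), (4, 0), (4, 6), (0, 6)], [
    Frame("wood", [(0.5, 0.5), (3.5, 0.5), (3.5, 5.5), (0.5, 5.5)], [
        Frame("lead", [(1, 1), (3, 1), (3, 5), (1, 5)], [
            Frame("glass", [(1.2, 1.2), (2.8, 1.2), (2.8, 4.8), (1.2, 4.8)]),
        ]),
    ]),
])
```

Evaluating the meta-window is then a walk over this tree, extruding each outline along its path.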


The point here is that they do look silly, but they are undeniably windows. (A side point is that the shape of the windows in the test-house influences the shape of the door and roof, adding style, a normally elusive concept!) A couple of hours playing around in Blender creates the following:

The nice thing about this is that the in-between stages also look like windows (Georgian, I think?). However, when close to a certain form, such as being rectangular, the design falls into an uncanny valley (the first definition of an uncanny valley for windows...?) and looks very wrong.

This system represents a large range of windows. To do better we have to get more complicated and less intuitive. For example, we can't represent the following window (similar to the designs in many churches around Glasgow), because the frame isn't a simple path, it's two combined paths. This means we have to deal with intersections and other not-so-clean methods of beveling and extruding shapes.


We often see several windows together that share a frame. For example, here's a chapel on the Stanford campus:

stanford chapel

The windows share an outer frame with a complex form. This would be difficult with the methods described so far, as the outline of the entire window is geometrically complex. We can modify the scheme (grammar?) further thuswise:


One set of outlines (blue arches in the above) become the primary definition as they are the easiest to define geometrically. These then become part of our recursive hierarchy of frames, but not at the top. Instead they define a mid-level of the frame hierarchy. The exterior frames (green dashed line in the above) can then be extended from them (without further subdivision). As before the interior frames can be shrunk and subdivided to form the window details.

Thinking outside of an individual window, several other concepts appear:

window abstractions

i) Doors and windows are very geometrically similar. These windors/portals have only a few distinctions:
  • filler - window, door or nothing (archway)
  • window cill or door step!
  • location - doors are usually at ground level
  • details such as door handles etc...
ii) Window frames are often composed of features that are used as general frames (for example pillars framing a side of a building). Perhaps the windor concept should just be a frame, that can contain anything (including more windors).

iii) Most frames have horizontal symmetry. This concept is best expressed as an edge component (i.e. the pillar from ii) that is reflected for the opposing side of the window. Sometimes this edge component is all there is (a circular window), sometimes it is combined with just a base (an arched window), and sometimes it is combined with a top and a base (a square window). These top and bottom components need some discrete representation.
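The reflection idea in iii) can be sketched in a few lines. The helper names and point lists here are made up for illustration:

```python
# A sketch of building a symmetric window outline from a single edge
# component by reflecting it about a vertical axis. Names and point
# lists are invented for illustration.

def mirror(points, axis_x):
    """Reflect a point list about the vertical line x = axis_x."""
    return [(2 * axis_x - x, y) for (x, y) in points]

def symmetric_outline(edge, base=None, axis_x=0.0):
    """One edge component plus its reflection, optionally on a base."""
    right = list(reversed(mirror(edge, axis_x)))  # back down the far side
    return (base or []) + edge + right

# an arched window: a curved-ish edge reflected about x = 0
edge = [(-2.0, 0.0), (-2.0, 3.0), (-1.0, 4.5), (0.0, 5.0)]
outline = symmetric_outline(edge)
```

A circular window would be the edge component alone; the square window adds discrete top and base components to the same scheme.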

meta-note: I'm starting to understand the continual perversion of L-systems. You come up with a nice clean idea, but in order to extend it to real world systems you have to start engineering. This causes lots of ugly lumps and bumps on the design and you end up with something that is hard to understand, and difficult to find the beauty of.

Friday, January 16, 2009

some plants and shrubbaries

This post is a round up of some papers about procedural plant generation.

Structural Simulation of plant growth and response ( doi? | springer | 2003 )
John Hart, Brent Baker, Jeyprakash Michaelraj

These trees are grown by modelling the tree's responses to gravity and light. Different trees respond differently to gravity; there are two main types -
  • Gymnosperms (pine, spruce) have external seeds. These trees support the weight of their branches by adding extra material below the branches, that is "compression wood".
  • Angiosperms (apple) have seeds in a fruit. These trees, however, support their weight by adding additional material above the branches, causing it to be in tension, hence "tension wood".
These two types of growth produce different shaped branches, and this model simulates that by widening the area of the trunk below/above the branch for gymnosperms/angiosperms.

The algorithm here is to
  1. Compute mass
  2. Compute center of mass
  3. Compute photosynthesis
  4. Compute growth rates
  5. Grow branches.
By knowing how big a branch is and how dense it is, these calculations are mostly straightforward. However, photosynthesis is computed using a test-render (the old workaround for OpenGL's "select" function) with the leaves on each branch rendered in a different colour.

The grow-branches stage works by first killing branches that don't receive enough light, and then boosting the growth rate of branches that photosynthesize most (collect the most light). Branches' growth is then limited by the ability of the parent branch to support additional weight.

Because there is a conflict between the strength of a tree (proportional to the area of the trunk) and its weight (proportional to volume), this provides a natural mechanism for the tree to stop growing.
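A toy calculation shows why this works: strength scales with cross-sectional area (r²) while weight scales with volume (r²h), so the supportable height is bounded. This is just an illustration of the principle with invented constants, not the paper's model:

```python
# A toy illustration (not the paper's model) of the strength/weight
# trade-off: strength scales with trunk cross-section (r^2) while weight
# scales with volume (r^2 * h), so growth must eventually stop.

def grow(r, h, r_rate=1.05, h_rate=1.10, strength_per_area=50.0, density=1.0):
    """Grow yearly until the weight exceeds what the trunk can support."""
    years = 0
    # weight <= strength  <=>  density * r^2 * h <= strength_per_area * r^2
    while density * (r * r * h) <= strength_per_area * (r * r):
        r, h = r * r_rate, h * h_rate
        years += 1
    return years, h

years, final_h = grow(r=0.1, h=1.0)  # stops once h outgrows the supportable limit
```

Because the r² terms cancel, the loop stops exactly when height exceeds the fixed strength-to-density ratio, however thick the trunk gets.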

Because of the physical modelling used here, the resulting trees are very well balanced (even if some branches are pruned during growth) and effectively compete for sunlight.

On a final pleasing note the paper begins with a comment about L-systems very close to my own opinion:

In an effort to make these models produce more realistic results, some have encoded the environmental influences, such as geotropism (sagging branches) directly into the L-system. While this allows the genetic model to include more detailed information about its reaction to external forces, it makes the model much more complex and difficult to decipher.

Voxel space automata: modeling with stochastic growth processes in voxel space ( doi | acm | 1980 )
Ned Greene

This is a really old paper that explores the use of voxels (3D pixels) as a representation for the environment of a vine. They show that testing for availability of light (through Monte Carlo probing of the environment) and proximity (empty voxels know the distance to the nearest occupied voxel) is easily doable using a voxel representation. However the paper notes that it is hard to ("we didn't implement...") estimate surface normals using voxels.
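The proximity query is simple to sketch. The `VoxelGrid` class below is invented for illustration; a real implementation would precompute a distance field rather than scan the occupied set each time:

```python
# A sketch of the voxel proximity query the paper describes: a growing
# tip asks how far away the nearest occupied voxel is. The class is
# invented; a real implementation would precompute distances.

import math

class VoxelGrid:
    def __init__(self, n):
        self.n = n                # grid resolution (kept for context)
        self.occupied = set()

    def fill(self, x, y, z):
        self.occupied.add((x, y, z))

    def nearest_distance(self, x, y, z, max_r=5.0):
        """Distance to the nearest occupied voxel within max_r, else None."""
        best = None
        for voxel in self.occupied:
            d = math.dist((x, y, z), voxel)
            if d <= max_r and (best is None or d < best):
                best = d
        return best

g = VoxelGrid(32)
g.fill(10, 10, 10)
```

Monte Carlo light probing works the same way: cast random rays through the grid and count how many escape unoccluded.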

Heliotropism (plants tracking the sun) is implemented by calculating a table of brightnesses and applying a low pass filter to find the brightest direction. The paper states that the simulation of heliotropism is exaggerated, but it does produce pleasing results.

To create the above image a model of a house was imported into a voxel grid. The plant model was then biased to grow with respect to this model (branches staying clear of the doorway, leaves allowed to hang down).

For its time this paper included lots of ideas that are still being explored today.

A Plausible Model of Phyllotaxis ( pdf | pnas | 2006 )
Richard S. Smith, Soazig Guyomarc'h, Therese Mandel, Didier Reinhardt, Cris Kuhlemeier, Przemyslaw Prusinkiewicz.

This is a slightly different take on plant synthesis. Phyllotaxis is the name of the patterns formed by leaves on the stem of a plant, and this paper implements a model of a biologist's hypothesis for how these patterns occur. There are numerous geometrical/mathematical/physical explanations for these patterns, but finally scientists seem to be trying to understand how it happens in real plants. It's all to do with auxin, the substance that stimulates plant growth (it's the stuff that causes elongation on the dark side of plants so they always face the sun (or at least that's the simple story)).

Reinhardt et al's Regulation of phyllotaxis by polar auxin transport describes the concentrations of auxin and the proteins that control its location. By using a scanning electron microscope it's possible to see how the phyllotactic patterns emerge. The following is the tip of a growing shoot (apical meristem) showing the older (larger) shoots and the newer shoots growing between them.

Reinhardt's hypothesis is (I think, it was quite a biological paper) that auxin is mainly produced at the base of the stem and moves to its tip. Any existing apexes act as auxin sinks (shunting it downwards in the centre of the stem where it doesn't affect growth). If enough auxin makes it to the peripheral zone (just below the main tip, yellow below), a new shoot forms and becomes its own auxin sink. In this way new stems avoid the existing stems:


This means that each shoot grows as far away from the previous shoots/sinks as possible. This paper implements a model of the various chemicals involved and produces a pleasing 3D simulation of the shoot.
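The "as far from previous sinks as possible" behaviour can be caricatured in a few lines. This is a heavily simplified 1D stand-in for the paper's reaction-transport model, not the model itself:

```python
# A caricature of the sink idea (not the paper's chemical model):
# primordia live on a ring of angles, and each new one appears at the
# angle farthest from all existing sinks.

def next_primordium(sinks, samples=360):
    def ring_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    best_angle, best_score = None, -1.0
    for i in range(samples):
        angle = i * 360.0 / samples
        score = min(ring_dist(angle, s) for s in sinks)  # distance to nearest sink
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

sinks = [0.0]
for _ in range(3):
    sinks.append(next_primordium(sinks))
```

Even this crude version produces an alternating, space-filling placement; the real model gets the spiral modes by also simulating growth, which carries old sinks away from the tip.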

As you can see the same model replicates the pattern observed in the real world. It can even be tweaked to describe the different observed modes of phyllotaxis.

However, as the authors recognise, lots of this work is still experimental and big assumptions/tweaks without biological justification have had to be made to get it to work nicely:

In the absence of experimental data, we tested a number of alternative formulas and found that the postulated exponential dependence of the localization of PIN1 on the concentration of IAA results in the most stable phyllotactic patterns.

This work produces some very encouraging results, for not just emulating plants but understanding how that form was created.

Modeling Trees with a Space Colonization Algorithm ( pdf | doi | eurographics | 2007 )
Adam Runions, Brendan Lane, and Przemyslaw Prusinkiewicz

This paper doesn't use L-systems! A branching model is created using a space colonization algorithm. As the trunk of the tree grows upwards it is attracted to a set of (maybe thousands of) attractors (above & left - blue balls). These are scattered around in the volume of the tree, representing a preferred density map of the final tree. As the trunk gets within a set proximity of each attractor, the attractor is removed.
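The usual formulation of space colonization is short enough to sketch; the 2D setup and parameter names below are my own, not the paper's:

```python
# A bare-bones 2D space colonization step, with invented parameters:
# each attractor pulls its nearest tree node, pulled nodes grow one step
# toward the average pull direction, and attractors within a kill radius
# of any node are removed.

import math

def grow_step(nodes, attractors, step=1.0, influence=10.0, kill=1.0):
    pulls = {}  # node index -> unit vectors toward influencing attractors
    for (ax, ay) in attractors:
        i, (nx, ny) = min(enumerate(nodes),
                          key=lambda e: math.dist(e[1], (ax, ay)))
        d = math.dist((nx, ny), (ax, ay))
        if d <= influence:
            pulls.setdefault(i, []).append(((ax - nx) / d, (ay - ny) / d))
    for i, vecs in pulls.items():
        vx = sum(v[0] for v in vecs) / len(vecs)
        vy = sum(v[1] for v in vecs) / len(vecs)
        norm = math.hypot(vx, vy) or 1.0
        nodes.append((nodes[i][0] + step * vx / norm,
                      nodes[i][1] + step * vy / norm))
    return [a for a in attractors
            if all(math.dist(n, a) > kill for n in nodes)]

nodes = [(0.0, 0.0)]       # the base of the trunk
attractors = [(0.0, 5.0)]  # a single attractor straight up
for _ in range(10):
    if not attractors:
        break
    attractors = grow_step(nodes, attractors)
```

With one attractor the "trunk" simply climbs toward it and consumes it; with thousands scattered through a crown volume, the averaging of pull directions is what produces the organic branching.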

Once the tree skeleton is complete, various filtering operations are applied to smooth transitions between the branches. Then cylinders (as the Mighty Maple paper describes) are positioned over the skeleton to create a decent skin. Leaves, flowers etc... are then added.

By positioning the attractors in different ways (uniformly through a volume, on the surface of a volume) using different volumes (cones, cylinders) a variety of different plants can be created.

Another paper also uses a particle system (but this time interactively) to reproduce 3D models of trees from photos or sketches. Quite realistic leaf density is achieved from a set of photos from a number of viewpoints.

A Morphological Study of the form of nature ( doi | acm | 1982 )
Yoichiro Kawaguchi

This early paper presents some techniques for imitating some features of nature based on spirals, and rendering these out in 3d.

"The forms of nature based on spirals and ramification are generated not through the use of object data calculated by measurement, but through the use of algorithmic structure based on the laws of nature."

Each chamber in the spiral is constructed as a skew cylindrical prism. These provide growth in the forward direction and are repeated and scaled by a set ratio, i.e. a geometric progression. Knots are added to allow the spiral form to rotate and as a location to add branches.

A tendril plant can be created by allowing a knot to form a new branch. There is an adjustment factor to move child branches away from the parents. There isn't a formal grammar behind this system, rather a successive application of rules. However some pleasing shell and horn shapes are created in some really fantastic colours.

How I wish these were the days when a typewriter and some badly penciled-in formulae were enough to get you published.

Synthetic Topiary ( doi | pdf | 1994 )
Przemyslaw Prusinkiewicz, Mark James, Radomír Měch

Prusinkiewicz is a hard man to escape from in this field! It is curious how authors of procedural form papers tend to stick to a given domain; many of the techniques seem to be shared between the different domains but aren't exploited by the same people.

Anyway, this paper presents a modification to a context-sensitive, probabilistic L-system that gives environmental sensitivity. While evaluating, each derivation step is followed by an evaluation stage that returns environmental (e.g. positional) information as parameters.

The left image here shows an environmentally sensitive L-system that prunes itself to an elliptical outline. The right image shows a more complicated L-system that back-tracks every time it reaches the boundary:

w : FA?P(x,y)
p1 : A > ?P(x,y) : !prune(x,y) → @oF/(180)A
p2 : A > ?P(x,y) : prune(x,y) → T%
p3 : F > T → S
p4 : F > S → SF
p5 : S → o
p6 : @o > S → [+FA?P(x,y)]
Apologies if this makes no sense; simply, we start with w and apply the rules until prune() becomes true, then the symbol S propagates back through the string to a previous bud that hasn't sprouted yet and starts again. So it's a backtracking space-search technique. It also highlights one of my main issues with L-systems - they can be easily engineered in non-biological ways (here it's back propagation) so that the intrinsic simple beauty (and comprehensibility) of the system is lost. This grammar tells us little about how plants grow and isn't the most sensible way to describe form.
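For anyone who hasn't met L-systems before, the underlying string-rewriting mechanism is tiny. This is nothing like the topiary grammar above (which needs context sensitivity and an environment query); it's Lindenmayer's original context-free algae example:

```python
# The bare string-rewriting mechanism behind every L-system, shown on
# Lindenmayer's original algae system. Context-free: each symbol is
# rewritten independently and in parallel.

def derive(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        # rewrite every symbol in parallel; unmatched symbols are copied
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

algae = derive("A", {"A": "AB", "B": "A"}, 5)  # lengths follow the Fibonacci numbers
```

All the systems in this round-up are elaborations of this loop: parameters on symbols, context conditions on rules, probabilities, and (above) queries out to an environment.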

All that being said, it does seem to create some great results.

A stochastic branching L-system is developed that simulates real-world observed branching tendencies (bifurcation ratio), and combined with the above pruning system to produce the final model. All that is left is to find some 3D surfaces (some defined implicitly, like the dinosaur above) to prune the growth to.

Here's an implementation.

Modelling the mighty maple ( doi | acm | 1985 )
Jules Bloomenthal

From an acyclic graph this paper produces a nice curvy one-piece polygonal tree. It assumes that at a branch point, any branch that goes straight-on-ish is a continuation. It is then possible to draw splines through the n+1 control points on these extended branches to create a nice spliney tree skeleton. This is a well-cited paper that brings together several techniques that are still used today.

Discs are then extruded at different sizes along the branch splines, but leaving gaps at the branch points. These branch points are computed by using the nearest disc on each branch as a starting point and using a system of splines to form a basis for polygonization.
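The paper's spline construction is its own; as an illustration, here's a Catmull-Rom segment, one standard way of interpolating a curve through a list of control points (it passes through p1 at t=0 and p2 at t=1):

```python
# One standard way to interpolate a smooth curve through branch control
# points (an illustration, not necessarily the paper's exact spline): a
# Catmull-Rom segment between p1 and p2, shaped by neighbours p0 and p3.

def catmull_rom(p0, p1, p2, p3, t):
    """Point at parameter t in [0,1] on the segment between p1 and p2."""
    def interp(a, b, c, d):
        return 0.5 * ((2 * b)
                      + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t * t * t)
    return tuple(interp(a, b, c, d) for a, b, c, d in zip(p0, p1, p2, p3))

p0, p1, p2, p3 = (0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0)
midpoint = catmull_rom(p0, p1, p2, p3, 0.5)
```

Discs are then placed at samples of t along each such segment, which is also where the acceleration trick below comes in: sampling t non-uniformly adds detail where the camera needs it.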

Because close ups of the branches may be required it is possible to change the acceleration along the spline (before adding the discs) to add additional detail into certain areas on camera.

Textures were used to improve realism. The thing of note here was that the bump map was extracted by X-raying a plaster cast of a bark-section, then digitizing it.

To model the above-ground bumps that roots cause, blobby objects were added to the contours at the base of the trunk.

Approximate and Probabilistic Algorithms for Shading and Rendering Structured Particle Systems ( doi | acm | 1985 )
William T. Reeves, Ricki Blau

you learn something new every day :-
Particle systems were first used to model a wall of fire in the Genesis sequence from the film Star Trek II: The Wrath of Khan [11].

The model of the tree they use is a recursive set of branches with leaves/pine needles on the smallest branches. The forest has a probability distribution for all trunks (width = mean width + random * delta), and similarly for the heights at which branches start occurring. There is a bounding volume that stops branches growing too far from trunks. The child branches take some parameters from their parents, modify some, and take others from the global tree model.
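A sketch of this style of stochastic model, with all constants invented: every parameter is a mean plus a random delta, and child branches inherit shrunken copies of their parent's values:

```python
# A sketch of a "mean + random * delta" recursive branch model. All
# constants are invented for illustration.

import random

def param(mean, delta, rng):
    return mean + (rng.random() * 2 - 1) * delta   # uniform in mean ± delta

def make_branch(depth, length_mean, rng, shrink=0.6, children=3):
    branch = {"length": param(length_mean, 0.2 * length_mean, rng),
              "children": []}
    if depth > 0:
        for _ in range(children):
            # children inherit a shrunken copy of the parent's mean
            branch["children"].append(
                make_branch(depth - 1, length_mean * shrink, rng))
    return branch

def count(branch):
    return 1 + sum(count(c) for c in branch["children"])

tree = make_branch(depth=3, length_mean=10.0, rng=random.Random(1))
```

Seeding the generator per tree is what makes the "generate, render, discard" trick below work: the same tree can be regenerated identically for every frame.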

Different types of trees have different models.

The above image has 18 thousand clumps of grass, rendered with 733,887 particles. These are rendered using ambient, diffuse and specular components. A stochastic model is used as it would be too expensive to raytrace these properties on a per-particle basis (this seems to be the general drawback of particle systems, especially with respect to external shadows). So the diffuse colour changes the deeper into the tree we are from the lit side.

Rendering the lit particles to screen involves rendering the furthest-away tree first, then the next nearest, etc... (painter's algorithm). Each subtree also has to be sub-sorted by depth. The information for each tree is procedurally generated for each rendering of the tree, and then discarded (this is a really neat feature of procedurals).

Wind is simulated using a 2D tensor field that drives "wind particles".

These trees really blow away the competition image-quality-wise (for their age) - must have been expensive to compute though! It was rendered on "an essentially idle VAX 11/750 with floating point accelerator and 4Mb of memory". A 512-square image took 10Mb of memory and 10 hours to render. They talk about the need for a real hardware frame buffer - I feel like I'm reading a classical textbook...

Particle systems are still used in video games for special effects - fire, water splashes, rain etc... but they don't seem to have caught on as a central rendering paradigm.

Plant models faithful to botanical structure and development ( doi | acm | 1988 )
Phillippe de Reffye, Claude Edelin, Jean Françon, Marc Jaeger, Claude Puech

This paper overviews a general mechanism for modelling the topology of trees. It explicitly ignores the finer details of rendering, leaving those to the Maple paper. It presents a really quite interesting view of the different forms that trees take and how the different observed patterns can be modelled.

The growth of a tree is modelled using a set of buds. By iterating through time these create the leaves, nodes, and branches. It uses the observation that topological change only happens at buds, and other growth (eg girth of a branch) can be modelled by age.

This is a good, botanically based introduction to the different topologies of trees. The different branching patterns ("architectures") and phyllotaxies given represent a lot of different tree structures, and many subtleties (such as branch direction - orthotropic (absolute, always pointing up) vs plagiotropic (relative and orthogonal to the parent branch)).

The technique used in the paper isn't as clean or insightful as L-systems. However, admitting a time-based approach allows features such as discrete events (trauma, pruning) and forces (gravity) to be modelled without too much deviation from the main algorithm (unlike L-systems, where contorted rule sets of special cases are needed for each of these).

The general evaluation algorithm is :-

for each clock signal do
    for each bud which is still alive do
        {order, age, dimension, position, etc. are known attributes of the bud}
        if bud doesn't die then
            if bud doesn't make a pause then
                create internode {with position in space}
                create apical bud
                for each possible bud do
                    if ramification then create axillary buds {with age, order and dimension}
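A literal-minded Python sketch of that loop, with invented probabilities standing in for the paper's botanical tables (the paper drives these decisions per species):

```python
# A sketch of the bud-based growth loop. Each bud is just its age here;
# the probabilities are invented, not the paper's botanical data.

import random

def grow(buds, clock_steps, rng, p_die=0.1, p_pause=0.2, p_ramify=0.5):
    internodes = 0
    for _ in range(clock_steps):           # for each clock signal
        next_buds = []
        for age in buds:                   # for each bud still alive
            if rng.random() < p_die:
                continue                   # bud dies
            if rng.random() < p_pause:
                next_buds.append(age + 1)  # bud pauses this cycle
                continue
            internodes += 1                # create internode
            next_buds.append(age + 1)      # apical bud continues the axis
            if rng.random() < p_ramify:
                next_buds.append(0)        # axillary bud starts a new axis
        buds = next_buds
    return internodes, buds

internodes, buds = grow([0], clock_steps=5, rng=random.Random(42))
```

The topology falls out of the internode/axillary events; girth and geometry are then derived from each element's age, as the paper describes.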

Each section of the tree is then associated with some geometry (cylinder, cone etc..) before rendering out.

Visual models of plants interacting with their environment ( doi | pdf |1996 )
Radomír Měch, Przemyslaw Prusinkiewicz

This paper addresses a shortcoming in lots of the previous research; namely, that the environments didn't respond to the plants. The basic model is two concurrent systems run in parallel - the plant and the environment. The plant receives information from the environment, transports the information to the needed location, and responds to this information by changing its form. The environment perceives the plant's actions, simulates the environment (diffusion of chemicals or flow of light), and gives feedback to the plant. The contribution of this paper is this two-way communication between model and environment.

The particular evolution of the L-system to achieve this is named an Open L-system. Certain symbols in the system are communication points. The L-system is interpreted up to these points, and the requested information (given as parameters to the symbols) is transferred to the environment process, which returns a result. This loop is repeated several times when transforming the L-system string output to a graphical form.

A self-sensitive system is run by the environment process. By modelling split-branch points and end points of branches as spheres, a simple proximity check is used to determine branch creation and direction.

Floor-creeping (clonal) plants are evaluated in 2D using a given bitmap of light intensities which the leaves then obscure. The model nicely shows the plant filling the brightest patches first, then, when the leaves obscure too much light for any more growth, moving into the darker patches. Similarly, a 3D water-density map is used to guide a root system - this causes the roots to compete for water and grow away from each other.

Another section of this paper discusses light sensitivity - again by using a fairly coarse voxel representation of the tree's volume and a number of directional lights, the flux of each voxel can be calculated and fed back into the model. Leaves occlude the light flow. Growth can then be slowed and leaves not produced in areas of insufficient light. By sharing this voxel array between trees, two trees compete for the same light source (photo, above). These techniques give some really good effects, with branches moving around each other searching for more light.

This is all implemented as a framework (which is mostly an easy way of increasing the author's funding ;)

A screenshot from Fallout3, a game that uses SpeedTree

SpeedTree is a commercial product that allows artists to design trees and incorporate the results into 3D environments, usually video games. The offering is a suite of software, the most impressive part of which is SpeedTreeRT, which calculates physical effects and LOD in real time for video games. It is very widely used in the industry.

The challenging part of generating a forest of trees is being able to render them quickly and convincingly - SpeedTreeRT does this very well.

A CAD system provides the artists with a front end to specify what their trees will look like. The output from this can be used in real time or, of course, taken as a mesh and edited further in a 3D editor.

The cost is $8,500 per title released - I was really surprised that the price was this low, they must have a lot of customers to be able to sell it that cheaply. I guess tree creation technology was really figured out 25 years ago so this is an expected product, much more mature than the architecture/city equivalents.

The Algorithmic Beauty of Plants (pdf, 40mb | 1990)
Prusinkiewicz, Lindenmayer

This book describes various variations on the theme of L-systems. Basically, L-systems are very good at simulating cell induction in plants, but need many inelegant modifications to create realistic specimens. I've summarized its contents in this blog post.

Meta notes:
  • Most of the work on L-systems dates back to the 80's - has nothing important & interesting been done since then?
  • The more I read, the more I have one big idea that I want to implement, but I'm scared of writing down what it is before I have some results, and I won't bother getting any results until I write it down.
  • Prusinkiewicz has a lot of work in this area, his pages are a good place to start
  • I didn't notice any authors who did work on both procedural botany and other areas of procedural generation.
  • Apologies for the quality of some of the images, a lot of the old papers were just scanned.
  • While I'm interested in form there was other cool work, like this one about realistic shading models for leaves.
  • Compared to the computer-generated architecture corpus there was a much larger emphasis on biologically correct results:
Programs addressed to the biological audience are often limited to narrow groups of plants (for example, poplars [9] or trees in the pine family [21]), and present the results in a rudimentary graphical form. On the other hand, models addressed to the computer graphics audience use more advanced techniques for realistic image synthesis, but put little emphasis on the faithful reproduction of physiological mechanisms characteristic to specific plants. [Visual models of plants interacting with their environment]

Our concern here is to produce images of plants and trees which should be faithful to their botanical nature and so to build a model which should include the known botanical laws which explain plants' growth and architecture. [Plant Models Faithful to Botanical Structure and Development]

We have not found much published information characterizing the impact of pruning on tree architecture. More data would be necessary to construct faithful models of particular tree species. [Synthetic Topiary]
  • But not entirely:
We concentrated on visual results more than botanical data [Approximate and Probabilistic Algorithms for Shading and Rendering Structured Particle Systems]

Although rules based on intuitively obvious relationships such as the ones given in this paper may produce reasonable looking results, simulation that is true to nature requires rules based on empirical study [Voxel space automata]
Some other round-ups in this area:
Papers I still mean to read

[edit: from sigg09, perhaps I'll write this up properly sometime]
Self-organizing tree models for image synthesis ( pdf | web | toappear: SIGGRAPH09 )
Wojciech Palubicki, Kipp Horel, Steven Longay, Adam Runions, Brendan Lane, Radomir Mech, and Przemyslaw Prusinkiewicz

This paper is a compilation of many of the above techniques. It offers some small incremental improvements over them. I think the authors (if there are more than 4, it counts as a posse-of-authors) must have been surprised when it got accepted.

They use an existing L-system implementation, along with the Moore's-law gains since the earlier papers, to produce some very pretty looking results. They tweak some of the resource-flow algorithms of earlier authors.

There's a lack of comment about leaves and rendering techniques (did they really use POV-Ray?)!

three points defining the center of a circle

Because it's a Friday, and because this stumped me last night (stupidly, I wasn't dividing by the det), here is a diagram explaining this algorithm for finding the centre of a circle from 3 points:


One of the best tricks I've found is to always work with lines in the matrix form ax + by = c, as it avoids the problems that y = mx + c has with vertical lines. Storing the end points of the line is the other fail-safe, as it contains bounds information, but it isn't as convenient when working with gradients and takes a whole extra constant!

The above was a Friday attempt at block-colouring matrix operations. I'm not sure it was a success, but it makes it more obvious that we were hunting the determinant as the denominator.

[edit] you shouldn't substitute for the y-coord as I do in the above; it might lead to divide-by-zeros. The best thing is to find the equivalent equation with the determinant as the denominator, as the linked algorithm page explains.
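For reference, here's the standard circumcentre formula in that determinant-as-denominator form, where the only division is by the determinant (which is zero exactly when the three points are collinear):

```python
# The circumcentre of three points, written so the only division is by
# the determinant (standard formula; d == 0 means collinear points).

def circumcentre(ax, ay, bx, by, cx, cy):
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0.0:
        raise ValueError("points are collinear")
    a2 = ax * ax + ay * ay
    b2 = bx * bx + by * by
    c2 = cx * cx + cy * cy
    ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d
    uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d
    return ux, uy
```

Checking `d` before dividing is exactly the failure mode the edit note is about: no special cases for vertical lines, just one degenerate condition.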

Tuesday, January 13, 2009

Feature-based image metamorphosis constants

I have most of a hierarchical morph system running now. The results will follow in another post. Until then, here is an exploration of some of the parameters in the paper "Feature-based image metamorphosis" (pdf | doi).

Open source morph code (Java) is here. Still a bit rough and bulging with unimplemented features.

The user defines a set of lines on the two objects that they want to morph, and sets a blend ratio (at which location between the two input images should the result be).

experimental outlines

Linear tweening is used on the points to find the destination of one set of features (pink and blue lines in the above). Then each image is contorted so that its features are in this tweened location. Finally both images are blended together.

So the contorting is the difficult bit. Each pixel's new location is calculated relative to the features (lines). First its location relative to the defined lines is found. Then the same combination is applied with the lines in their new locations to describe its destination point. The weighting method (how a line's influence on the pixels changes) is controlled with the following formula (dist = distance from the line, length = length of the line):
weight = (length^p / (a + dist))^b

I was curious as to how these changed the output, so here are my test videos. (p = 1, b = 2, a = 1 when the value isn't changing, which seem about the right values.)
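As a one-liner for reference (the defaults match the values above):

```python
# The Beier-Neely line weight: weight = (length^p / (a + dist))^b, where
# dist is the pixel's distance from the line and length the line length.

def line_weight(length, dist, a=1.0, b=2.0, p=1.0):
    return (length ** p / (a + dist)) ** b

w_on_line = line_weight(length=10.0, dist=0.0)  # strong influence on the line
w_far = line_weight(length=10.0, dist=9.0)      # influence decays with distance
```

So a raises or flattens the peak on the line itself, b controls how fast influence decays with distance, and p makes longer lines count for more.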




Saturday, January 10, 2009

TED video

Here's a great video about symmetry and calculus in architecture by Greg Lynn:

This is fun because I've been working to identify the symmetry of form. Trying to find principles that unify form; finding algorithmic structures that reveal symmetry in existing objects. This video talks about breaking down that symmetry and adding variations. By using computational techniques these asymmetries can permeate a building's design at so many levels. (It's also going to be a nightmare to procedurally emulate these buildings, but that's part of the fun eh?)

It also stumbles upon the big long-shot win for procedural form - we're moving into a time when custom, individual form is tenable. The video shows a set of tea cups, each created with a random set of curves but having the same volume as every other cup. Digital fabrication techniques (rapid prototyping tools and CNC lathes) are becoming commonplace for designers, and are starting to move into the home with projects such as RepRap. There is even a new website, Shapeways, that will cost and rapidly prototype your models (even if they do cost a fair bit). The future problem is fulfilling the demand for unique designs (in the video it's a tea cup set, but it could be a house or a pen). Artists just don't have the tools today to create infinitely variable designs, to ensure that we all have the unique object that we desire.

It's a good opportunity to post some photos from the Hunterian museum here in Glasgow, echoing the video's point that when things go wrong we are likely to have increased symmetry.

mutie 3

mutie 1

The expression "life on the edge of chaos" rings true here - when something goes wrong in design we fall away from the line between nothing and chaos. When something went wrong with the development of these creatures, they fell towards nothing, towards symmetry.

Thursday, January 01, 2009

Sity source code

[2016: Tom resurrected this page from the Glasgow servers recently. It contains many mistakes and a few dead links!] Sity (docs) was my master's thesis project. It's a procedural city generator:


[edit: a better weighted straight skeleton implementation can be found here]

It does contain a weighted straight skeleton implementation (that's my picture on the wiki page :), the class name is "skeleton/". It isn't very fast or robust. It's basically Felkel's method tweaked to produce weighted straight skeletons.

what follows is the overview that I wrote at the time


Sity is an architecture generator that was researched and implemented over 12 weeks as part of my MEng degree quite a long time ago.
Getting Sity

Before you begin you'll need Java 1.5 and Java3D. Then download the file (~3MB) and extract the contents to a directory.

Running Sity

You can then execute sity with the following command from within the directory:

java -Djava.library.path=./lib/ -jar sity.jar -port 2424

If you experience the Java error "java.lang.OutOfMemoryError", you can assign more memory by adding the following argument to the above command:

-Xmx[memory size]

(for example -Xmx1G to assign a gigabyte of memory; note there is no space between -Xmx and the size)

If you get the error Exception in thread "main" java.lang.NoClassDefFoundError: javax/vecmath/Matrix4d..., you have forgotten to install Java3D (see above), or may need to specify its location in the Java classpath.

It may look different depending on your operating system.

This window remains open as long as Sity is running. You can quit Sity at any time via the usual operating system methods, or by pressing the Q key.


Click the "Show preview" button to get started...

After clicking in this window you can navigate the city using the arrow keys, with A and D to slide left and right and W and S to move forwards and backwards.

From the preview window you can ask for the current city to be sent to Maya by pressing the return key, and can generate a different city by pressing the space bar. More details about these events follow below.

Returning to the utility window, clicking the "Show waterfalls" button will display the waterfall graph:

This structure determines how the output city appears.

The waterfall has the following elements:

The controls in the waterfall view are

  • Click and drag on the black background to move around the view
  • Scroll the mouse wheel to zoom the view in or out
  • Press the R key to attempt to automatically lay out the waterfalls
  • Click on a waterfall or input plug to select it (note that the utility panel displays the options relevant to the selected waterfall)
  • Click and drag from an output plug to an input plug to create a new link
  • Click and drag from an output plug to a blank space to create a new waterfall; you will be presented with a menu of compatible waterfalls

  • Click on an output plug to adjust the probabilities of the downstream waterfalls:

Each of the sliders controls the probability of one of the downstream waterfalls. By hovering the mouse pointer over a slider, a tool tip will tell you what that connection leads to, and at the same time the flow will become highlighted. To remove a flow, click the cross (x) button under the sliders. To adjust the probability of a downstream waterfall, move its slider up or down: the higher a slider, the more likely that waterfall is to be chosen.
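Under the hood, this slider behaviour amounts to weighted random selection among the downstream waterfalls. A minimal sketch of the idea, assuming plain double weights (this is an illustration, not Sity's actual class or API):

```java
import java.util.Random;

// Pick an index with probability proportional to its weight,
// as the probability sliders do for downstream waterfalls.
public class WeightedChoice {
    public static int pick(double[] weights, Random rnd) {
        double total = 0;
        for (double w : weights) total += w;
        // Walk along the cumulative weights until we pass a random point.
        double r = rnd.nextDouble() * total;
        for (int i = 0; i < weights.length; i++) {
            r -= weights[i];
            if (r < 0) return i;
        }
        return weights.length - 1; // guard against rounding error
    }
}
```

Raising one slider simply increases that entry's weight relative to the others, so its branch is chosen more often.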

By connecting up different waterfalls in different ways, a variety of cities can be created. Another way of changing the appearance is through the parameters that appear in the utility window when you click on a waterfall. A description of the waterfalls and their parameters appears at the end of this appendix.

Outputting a City via MEL

As described above, pressing the return key in the preview window will attempt to output the current city to Maya via MEL over local port 2424. However, Maya must first be told to listen on that port. To do this, open Maya and enter the following in the MEL command box:

commandPort -rnc -eo -n ":2424";

Once you are finished, the port should be closed again: Maya does not check who is writing to the port, so leaving it open poses a security risk. Close the port using:

commandPort -cl -n ":2424";

Once the port is open, pressing return in the preview window sends the data to Maya; Maya will flicker and may stop responding during this time. Once this has stopped, the city will have been transferred to Maya.

Outputting a city as an .obj file

It is also possible to output a city as a .obj format file; this may be faster and produce fewer vertices than the MEL method above, while giving the same result. Click the "Dump to .obj file" option on the utility panel. You can then open the city in a range of 3D packages.
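For reference, the Wavefront .obj format itself is very simple: "v" lines list vertex positions and "f" lines reference them by 1-based index. A hypothetical snippet (not Sity's exporter) producing a single quad:

```java
// Build a minimal Wavefront .obj string for a unit quad.
// "v x y z" declares a vertex; "f" indexes vertices starting from 1.
public class ObjExport {
    public static String unitQuad() {
        StringBuilder sb = new StringBuilder();
        sb.append("v 0 0 0\n");
        sb.append("v 1 0 0\n");
        sb.append("v 1 1 0\n");
        sb.append("v 0 1 0\n");
        sb.append("f 1 2 3 4\n"); // one face using all four vertices
        return sb.toString();
    }
}
```

Because the file is just text, almost any 3D package can import it, which is why the .obj route is so portable.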

Saving and loading a set of waterfalls

The save and load buttons will save and load the tree of waterfalls to disk. Note that only those connected to the root node are saved and recovered.

About the Waterfalls

When asked to make a city, Sity may create the same kind of element several times; each time it may take a different route if there is more than one waterfall connected to the downstream plug. The probability of each waterfall being chosen is set by clicking on the output plug (explained above). You can change the way a waterfall acts by changing its parameters. Often these contain several choices for one variable, such as block width:

The mean specifies the most likely value (in approximate metres, like all values in Sity) and the SD, or standard deviation, specifies how far from this number the value may stray. So for many differently, chaotically sized blocks you would set the SD to the same value as the mean; for all waterfalls to take the mean value, set the SD to zero. The values are truncated to lie within
the specified limits (here a 1m to 1000m output range), to ensure there are no invalid sizes. You cannot change these ranges.
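The sampling behaviour described above can be sketched as drawing from a Gaussian and clamping the result into the valid range; a hypothetical illustration under those assumptions, not Sity's actual code:

```java
import java.util.Random;

// Sample a Gaussian value with the given mean and standard deviation,
// then clamp it into a fixed valid range (e.g. 1m..1000m block widths).
public class ClampedGaussian {
    public static double sample(double mean, double sd,
                                double min, double max, Random rnd) {
        double v = mean + sd * rnd.nextGaussian();
        return Math.max(min, Math.min(max, v)); // truncate to the limits
    }
}
```

Note that with SD set to zero every sample is exactly the mean, and a mean outside the range simply pins the result to the nearest limit.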

Source code details

The source is in a very rough state; use it at your own peril. It is released under the WTFPL license. It needs the Java3D and jMonkey libraries, which can be found in the file (above).

The weighted straight skeleton code can be found in src/skeleton/Bones [edit: much better WSS code can be found here]. Sorry the docs aren't in better condition; it was a race against time to get this out at all.