Wednesday, February 25, 2009

a lot of windows

Some fantastic variations on the theme of windows here - Caen cathedral

broken refactor move in netbeans

Just figured out what was wrong with my NetBeans install that stopped the refactor->move class tool from working (in all versions from 6.1 to 6.5). Not working as in silent-fail: it made half the alterations, then just pretended it had done nothing.

Ready for this? The Subversion version was too old - gah! Update from here to fix.


java.io.IOException
at org.netbeans.modules.subversion.FilesystemHandler.svnMoveImplementation(FilesystemHandler.java:450)
at org.netbeans.modules.subversion.FilesystemHandler.doMove(FilesystemHandler.java:214)
at org.netbeans.modules.versioning.FilesystemInterceptor$DelegatingInterceptor.handle(FilesystemInterceptor.java:402)
at org.netbeans.modules.masterfs.ProvidedExtensionsProxy$DelegatingIOHandler$1.run(ProvidedExtensionsProxy.java:294)
at org.netbeans.modules.masterfs.ProvidedExtensionsProxy.runCheckCode(ProvidedExtensionsProxy.java:262)
at org.netbeans.modules.masterfs.ProvidedExtensionsProxy.access$300(ProvidedExtensionsProxy.java:60)
at org.netbeans.modules.masterfs.ProvidedExtensionsProxy$DelegatingIOHandler.handle(ProvidedExtensionsProxy.java:291)
at org.netbeans.modules.masterfs.filebasedfs.fileobjects.BaseFileObj.move(BaseFileObj.java:244)
at org.netbeans.modules.masterfs.filebasedfs.fileobjects.BaseFileObj.move(BaseFileObj.java:225)
at org.openide.loaders.FileEntry.move(FileEntry.java:113)
at org.openide.loaders.MultiDataObject.handleMove(MultiDataObject.java:622)
at org.openide.loaders.DataObject$2Op.run(DataObject.java:703)
at org.openide.loaders.DataObject$1WrapRun.run(DataObject.java:849)
at org.openide.loaders.DataObjectPool$1WrapAtomicAction.run(DataObjectPool.java:258)
at org.openide.filesystems.EventControl.runAtomicAction(EventControl.java:120)
at org.openide.filesystems.FileSystem.runAtomicAction(FileSystem.java:502)
at org.openide.loaders.DataObjectPool.runAtomicAction(DataObjectPool.java:270)
at org.openide.loaders.DataObject.invokeAtomicAction(DataObject.java:869)
at org.openide.loaders.DataObject.move(DataObject.java:709)
at org.netbeans.modules.refactoring.plugins.FileMovePlugin$MoveFile.performChange(FileMovePlugin.java:124)
at org.netbeans.modules.refactoring.api.RefactoringSession.doRefactoring(RefactoringSession.java:125)
at org.netbeans.modules.refactoring.spi.impl.ParametersPanel$8.run(ParametersPanel.java:375)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
Caused by: org.tigris.subversion.svnclientadapter.SVNClientException: svn: This client is too old to work with working copy; please get a newer Subversion client

Monday, February 02, 2009

some more architecture/urban techniques

This is a list of some other urban(ish) papers I've looked at. An earlier list of papers can be found here.



Interactive Example Based Layout Synthesis ( doi | pdf | acm | 2008 )
Daniel G. Aliaga, Carlos A. Vanegas, Bedřich Beneš



The idea is to be able to take an example aerial photograph of a city, along with a database of roads (GIS-like data), and extend/synthesise more of that city. The user has to specify the locations of new road intersections.

To create new areas of the city ("fragment synthesis"), the road network is filled in in a hierarchical order of road size. The direction is derived from a statistical walk between the specified points, based on information from the example data (average area of blocks in the vicinity, road type/hierarchy, number of roads at a junction...).
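Just to fix the idea, here's a toy version of such a statistical walk in Java - the jitter magnitude (0.3 here) is a made-up stand-in for whatever the example-city statistics would actually dictate:

import java.util.Random;

// Toy "statistical walk" between two intersections: each step heads toward
// the target, perturbed by noise that a real system would derive from the
// example data. All names and constants here are mine, not the paper's.
public class RoadWalk {
    public static void main(String[] args) {
        double x = 0, y = 0, tx = 10, ty = 4, step = 1.0;
        Random rnd = new Random(1);
        while (Math.hypot(tx - x, ty - y) > step) {
            double heading = Math.atan2(ty - y, tx - x)
                           + (rnd.nextDouble() - 0.5) * 0.3; // example-derived jitter
            x += step * Math.cos(heading);
            y += step * Math.sin(heading);
            System.out.printf("road vertex (%.2f, %.2f)%n", x, y);
        }
    }
}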

Styles are attributed to the new points/intersections from a classification system based on the example model. Points with similar properties (type of road hierarchy in an area, spatial frequency) are grouped together using a quadtree to maintain adjacency.

Information from the model graph-network is used to determine the most suitable topology of the network. The shape of each individual road is then modelled on similar streets in the example model (tortuosity).

The blocks ("parcels") are then split into buildings using Voronoi tessellation. A new trick they use here is how they define the points for the Voronoi -

We have observed that for most blocks this axis corresponds closely to the central segments of the medial axis of the block.

This is fun because we see an admission that scientists just pick the tool that "looks about right" for image synthesis. I'm fairly sure this is what everyone does - whether they admit it or not. Anyway, they approximate the medial axis with a shrunk OBB (similar to the idea used in CityEngine - splitting down the longest side), and position a line of blocks on either side.
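For the curious, here's a minimal Java sketch of the seed placement and nearest-seed (Voronoi) assignment, assuming the block has already been reduced to an axis-aligned OBB - all names are mine, not the paper's:

import java.util.ArrayList;
import java.util.List;

// Toy Voronoi-style parcel split of a rectangular w-by-h block: one row of
// seeds either side of the long axis, then nearest-seed classification.
public class ParcelSplit {

    record Point(double x, double y) {}

    static List<Point> seeds(double w, double h, int parcelsPerSide) {
        List<Point> s = new ArrayList<>();
        double step = w / parcelsPerSide;
        for (int i = 0; i < parcelsPerSide; i++) {
            double x = (i + 0.5) * step;
            s.add(new Point(x, h * 0.25)); // row below the long axis
            s.add(new Point(x, h * 0.75)); // row above the long axis
        }
        return s;
    }

    // Voronoi assignment: the index of the nearest seed decides the parcel.
    static int parcelOf(Point p, List<Point> seeds) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int i = 0; i < seeds.size(); i++) {
            double dx = p.x() - seeds.get(i).x(), dy = p.y() - seeds.get(i).y();
            double d = dx * dx + dy * dy;
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Point> s = seeds(100, 40, 5); // 10 parcels, 5 per side
        System.out.println(parcelOf(new Point(12, 8), s));
    }
}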

The idea of using Voronoi tessellation for block layout generation has been seen elsewhere (Sity used it - but the points were located by shrinking in the boundary and adding points along the edge, and there are several earlier examples). Still, it's good to see the idea making a comeback in the literature, as its applications go way beyond block generation. The coolest thing about it is the vast range of subtle variations in the algorithm possible to create a range of effects.

Each block is then filled in by morphing the best-matching block in the example model to fit the space in the synthesised road network. This is implemented by subdividing the quad and using an affine texture transform. Because of the large number of blocks in a typical input image, there is likely to be a good match (so not so much stretch & distortion) in the synthesised image. However it is a weakness of the system that it is less suitable for use with small quantities of input data.



A lot of the assumptions in this paper are based on the blocks being quads. For example, the above (bottom) extract from the pdf (never embed huge images into pdfs!) shows the distortion that occurs due to the quad assumption. On the other hand, the most enlightening thing is the usefulness of the dual vector/image input data - it allows much more certain (and simple) pattern analysis.

The results are a decent looking city, although the roads could do with a little attention (above, top - left = published result, right = some shoppage that exposes buildings under the roads, and unequal white-line distribution).




Generative Mesh Modeling ( pdf | web | 2003 )
Sven Havemann, Dieter W. Fellner


This paper presents a new way of representing 3d objects. By defining how they are constructed (in terms of the primitives and techniques used) it offers smaller file sizes, more detail, more intelligent LOD techniques, and access to the creation pipeline. Packages like Studio Max and Maya often have an (almost-)stack of transforms that can be applied to an object - each additional transform operates on the output of the previous transform. This paper presents a system where each transformation can access the output of any previous stage (proper stack operations), removing a Von Neumann-esque bottleneck. The presentation of this system is very similar to Adobe's pdf format - operations are pushed onto and popped off a stack, each operation working on however many arguments it wants to pop from the stack.
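A toy postfix interpreter in that PostScript-ish spirit (the op names here are mine, not GML's):

import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stack machine: values are pushed, and each operation pops however
// many arguments it wants - so any op can consume any earlier result.
public class MiniStackMachine {
    private final Deque<Double> stack = new ArrayDeque<>();

    void push(double v) { stack.push(v); }

    void op(String name) {
        switch (name) {
            case "add" -> stack.push(stack.pop() + stack.pop());
            case "mul" -> stack.push(stack.pop() * stack.pop());
            case "dup" -> stack.push(stack.peek()); // reuse a previous result
            default -> throw new IllegalArgumentException(name);
        }
    }

    public static void main(String[] args) {
        MiniStackMachine m = new MiniStackMachine();
        m.push(3); m.push(4); m.op("add"); m.op("dup"); m.op("mul"); // (3+4)^2
        System.out.println(m.stack.peek()); // 49.0
    }
}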

The primitives themselves are defined using boundary representation and Euler macros. Sharp faces are just triangulated, but smooth surfaces act as Catmull/Clark subdivision surfaces. The speed of triangulation means that geometry can be triangulated at run time, to achieve sub-object LOD - a very nice feature (that doesn't play so well with the current strategy of dumping huge meshes onto the GPU).

It makes a very convincing point for the power of generative methodologies. They can be used repeatedly, and subcomponents of generative structures can be shared between models, or put into a library.

I'm not certain this paper ever got published! It has lots of nice little ideas - like Euler macros being used for automated LOD construction, as artists often start from the coarse mesh and refine it. But it seems to jump up and down from small ideas to big ones quite a bit.



Real-time rendering and editing of vector-based terrains ( doi | pdf | web | 2008 )
Eric Bruneton, Fabrice Neyret


Covered here is how to chop up the task of rendering 3D scenes in real time, from a vector (rather than the normal raster) source. Of course this makes judicious use of the GPU and the standard large-terrain display systems. Because the input data is in vector form, it's easy to define some objects (rivers, roads) as nice smooth splines. It also allows for the input of standard GIS data.

An octree stores each patch of land, with larger octs being used for further-away areas. The maps (texture, displacement) for the leaves of the octree can be dumped onto the GPU. Therefore a technique is presented that refines an octree leaf to produce its children, by subdividing the vector data without any clipping errors.
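A minimal sketch of that leaf refinement (field and method names are my assumptions; the real system re-clips the vector data and bakes maps for each child):

// One cube of the terrain octree; a leaf splits into eight half-size children.
class OctNode {
    final double x, y, z, size; // cube origin and edge length
    OctNode[] children;         // null while this node is a leaf

    OctNode(double x, double y, double z, double size) {
        this.x = x; this.y = y; this.z = z; this.size = size;
    }

    void refine() {
        if (children != null) return;
        children = new OctNode[8];
        double h = size / 2;
        for (int i = 0; i < 8; i++) {
            children[i] = new OctNode(
                x + ((i & 1) == 0 ? 0 : h),
                y + ((i & 2) == 0 ? 0 : h),
                z + ((i & 4) == 0 ? 0 : h),
                h);
            // here the paper would subdivide the vector data to the child's
            // bounds and bake its texture/displacement maps for the GPU
        }
    }

    public static void main(String[] args) {
        OctNode root = new OctNode(0, 0, 0, 1024);
        root.refine();
        System.out.println("child edge: " + root.children[0].size);
    }
}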

A problem with raster map conversions is that coarse height-resolution maps can produce roads cambered to follow the side of a mountain, or rivers flowing uphill. Because the input is in vector format here, it's possible to filter the terrain so that it is flat and smooth for roads and rivers, and to ensure that rivers always flow downhill. It also gives very sharp edges to these features and allows context-sensitive features (road over river == bridge, two roads meet == junction) to be drawn.

It's also possible to edit vectors on the fly - by only re-processing the elements that change with each edit, spline features can be edited in real time. This is very similar to the Crysis (a game engine) editor, which seems to do everything that this paper mentions, and more!

Standard techniques like frustum culling and LOD also help speed things up nicely (and allow them to plant trees down the side of every road without much overhead).

The website is really worth visiting to watch the videos.




An Example-based Procedural System for Element Arrangement ( doi | pdf | web | 2008 )
Takashi Ijiri, Radomir Mech, Takeo Igarashi, Gavin Miller


This paper presents an element-synthesis technique. Somewhere between model synthesis and texture synthesis, element synthesis is concerned with positioning discrete elements in space. So features such as distance between elements and their orientation are considered important. This paper details a technique for arranging features in space based on their relative locations and rotations in an example arrangement.

The algorithm triangulates the space between the example elements, creating a graph with an element (or space) at each node. The synthesis begins with empty space and a seed element, then additional elements are chosen based on the best-fit in the example.

This best-fit is primarily concerned with adjacency and distance in the example graph, but can also take into account rotation. There are three rotation modes defined - searching for a matching rotation in the example, rotating the example to achieve a better match, or flow-field mode. This final mode alters the data so that the more important elements are arranged relative to a given field. This can be painted on to change the resulting arrangement.

The elements have a priority ordering, and those with the highest priority are positioned first. This allows systems that use some elements as 'filler' between larger elements. After the new arrangement has been created, a process of relaxation allows the edges of the graph to revert to their desired proportions.
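A quick Java caricature of that priority-first placement loop (Element and the palette are hypothetical; the example-graph matching and the relaxation pass are elided):

import java.util.Comparator;
import java.util.List;

// Highest-priority elements are positioned first; low-priority ones fill gaps.
public class ArrangementSketch {
    record Element(String name, int priority) {}

    static void synthesise(List<Element> palette) {
        palette.stream()
               .sorted(Comparator.comparingInt(Element::priority).reversed())
               .forEach(e -> System.out.println("placing " + e.name()));
        // ...followed by a relaxation pass nudging graph edges back towards
        // their example-model lengths (omitted here).
    }

    public static void main(String[] args) {
        synthesise(List.of(new Element("pebble", 1), new Element("boulder", 5)));
    }
}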



Continuous Model Synthesis ( doi | pdf | acm | 2008 )
Paul Merrell, Dinesh Manocha



This paper extends the ideas in the previous paper - synthesis of an original model from an example model via local similarities. The microscopic features of the surface near a point in the synthesised model are the same as in the example model. On a macroscopic level, however, different gross layouts are possible.

For every vertex, its absolute orientation (in world space) is used to match inbound and outbound edges to those in the example model. This gives a list of possible matches for all locations. Small partitions of space in the synthesised model are then taken one at a time, and a backtracking search takes place to ensure that each vertex has the same microscopic environment as in the example model. Apparently it's always possible to find at least one solution.
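The backtracking search boils down to the usual assign-check-recurse pattern; here's a tiny stand-in with a placeholder compatibility rule (the real test compares microscopic vertex environments):

import java.util.List;

// Assign each vertex a candidate from its match list so that neighbours agree;
// on a dead end, undo and try the next candidate.
public class SynthesisSearch {
    static boolean solve(int[] choice, List<List<Integer>> candidates, int v) {
        if (v == choice.length) return true; // all vertices assigned
        for (int c : candidates.get(v)) {
            choice[v] = c;
            if (compatible(choice, v) && solve(choice, candidates, v + 1))
                return true;
        }
        return false; // backtrack
    }

    // Placeholder constraint - the paper checks the local surface environment.
    static boolean compatible(int[] choice, int v) {
        return v == 0 || choice[v] != choice[v - 1];
    }

    public static void main(String[] args) {
        var cands = List.of(List.of(1, 2), List.of(1, 2), List.of(2));
        int[] pick = new int[3];
        System.out.println(solve(pick, cands, 0) + " " + java.util.Arrays.toString(pick));
    }
}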

The random element here is hard to control, but could be guided by the likes of a mass model. It can construct unexpected and new features, such as the arches-within-arches in the above image. Matching on the global orientation isn't as nice as it could be - you have to do things like have the model in a couple of orientations (as above) to ensure features aren't biased to one side of the object.

This is more like the image and motion synthesis work than the other procedural work I've been writing about. It's very Markovian, and the work shows the promise of moving in a Bayesian direction.



Interactive Geometric Simulation of 4D Cities ( doi..? | pdf | web | eurograph | 2009 )
Basil Weber, Pascal Mueller, Peter Wonka and Markus Gross



This is a presentation of a system to build a city that grows through time. It is based on simulation for the numerical aspects, rather than the pure estimation & guesswork used for the geometrical aspects (this echoes the long-standing need for evaluation techniques in procedural geometry) -

Therefore, we found sources with many illustrations ultimately the most helpful...as the synthesis of geometric rules in the simulation had to be mainly our own work.

Similar to Mueller's earlier work, the input to this system is a range of maps (height, water, forest...) and an initial starting state (eg a single street). Subdivision occurs in the following manner:
  • Major Streets
  • Quarter
  • Minor Streets
  • Block
  • Lots
  • Building Envelopes
As traffic builds up in areas, new major streets are built. These quarters are then filled with additional smaller streets. As new blocks are generated their land uses are modelled and assigned. The simulation repeats for each discrete time step.
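A caricature of one tick of that loop (all names here are mine, not the paper's):

// One discrete time step, refining each level of the subdivision in order.
public class CityGrowthSketch {
    enum Level { MAJOR_STREETS, QUARTERS, MINOR_STREETS, BLOCKS, LOTS, ENVELOPES }

    static void step(int year, double trafficDemand) {
        // new major streets appear only where simulated traffic demands them
        if (trafficDemand > 0.8) System.out.println(year + ": grow major street");
        for (Level l : Level.values())
            System.out.println(year + ": refine " + l);
    }

    public static void main(String[] args) {
        step(1950, 0.9); // one tick; the paper targets about one update a second
    }
}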

In contrast to the earlier work, the streets are grown at each time step, and must be generated in chronological order, thus L-systems aren't used :) This allows the user to influence the street pattern directly. These facts, and the following quote, support my increasingly cynical view of L-systems -

We chose not to embed the expansion in an L-system framework to make the implementation more efficient.

The self-sensitivity in Mueller's previous work is present here - with streets looking ahead of themselves to find sensible other roads to join with.

Traffic modelling occurs by stochastic sampling of a set number of trips. The number of trips is limited by the fact that this model is designed to run at about 1 update a second. This is very much the standard technique in land use modelling (or it was when I was in the business). Trip simulation is one of the key areas where urban modelling has solid mathematical foundations to fall back on, unlike botanic modelling, which has a huge corpus of biological data and examples.
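A minimal sketch of the trip sampling, assuming (purely for illustration) uniform origin/destination choice:

import java.util.Random;

// Estimate origin/destination demand by sampling a capped number of trips.
public class TripSampler {
    public static void main(String[] args) {
        int zones = 10, trips = 1000; // capped so one tick stays ~1 second
        int[][] load = new int[zones][zones];
        Random rnd = new Random(42);
        for (int t = 0; t < trips; t++) {
            int origin = rnd.nextInt(zones);
            int dest = rnd.nextInt(zones);
            load[origin][dest]++; // a real model would also route the trip
        }
        System.out.println("trips from zone 0 to zone 1: " + load[0][1]);
    }
}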

The land use categorisation also uses a hefty model. This mixes the suitability of any given plot (elevation, traffic, distance to resources) with global constraints (quantities of each land use city-wide).

Lot subdivision uses a technique very similar to the earlier papers'. Based on the land use and density, these lots can then be inset from the streets and extruded upwards into buildings of representative dimensions.

The output of the model is compared with the real-world growth of Las Vegas. Because the model is interactive, subtle changes can be made at different points in time to keep the growth on track. The final result is quite similar to present-day Vegas after only a few such changes - impressive, given that so much theory is involved!




Compressed Facade Displacement Maps ( doi | pdf | ieee | 2009 )
Saif Ali, Jieping Ye, Anshuman Razdan, Peter Wonka


This paper addresses the problem of rendering these massive cities, once we've procedurally created them. The basic approach is to use ray casting techniques against a displacement mapped surface for each wall of each house.

To create a fast displacement-map intersection mechanism, the orthogonality (rectangularness) of the generated geometry is exploited. Cubes can be matched to empty volumes of space to allow fewer samples when casting each ray. This is a nice technique that I can't help thinking would also work for blocks of buildings as a whole, since they are also primarily orthogonal.
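Here's a 1D toy of that empty-space skipping - the safeStep values stand in for the paper's fitted cubes (a real version intersects the ray with 3D boxes):

// March a ray over a height profile, jumping across known-empty spans.
public class DisplacementRayMarch {
    // height[i]: displacement at texel i; safeStep[i]: texels safely skippable
    static int march(double[] height, int[] safeStep, double rayY, double slope) {
        int x = 0;
        while (x < height.length) {
            double y = rayY + slope * x;   // ray height at this texel
            if (y <= height[x]) return x;  // hit the displaced surface
            x += Math.max(1, safeStep[x]); // jump over empty space
        }
        return -1; // ray left the facade without hitting anything
    }

    public static void main(String[] args) {
        double[] h = {0, 0, 0, 3, 3, 0, 0};
        int[] skip = {3, 2, 1, 1, 1, 2, 1};
        System.out.println(march(h, skip, 5.0, -0.5)); // descending ray hits at 4
    }
}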

The other technique mentioned is the compression of these displacement maps and associated colour maps. Again, by exploiting the expected orthogonality, a lossless matrix-based algorithm is presented. I am surprised that repeated sub-sections of textures (eg same windows in different houses) can't be used to further accelerate the algorithm.

The results of these techniques are shown to be implementable on a GPU. When combined with existing techniques such as level of detail and mip mapping, they should allow even more realism.



Generating 3D Building Models from Architectural Drawings: A Survey ( doi | ieee | 2009 )
Xuetao Yin, Peter Wonka, Anshuman Razdan

This paper is a fair old way away from procedurals. It studies the problem of creating 3D models from a 2D image of the architect's floor plan. This is a problem being tackled in academia and the commercial world. So, simply, the image is cleaned up and processed into a clean 2D polygon floorplan, identifying symbols (windows, doors) and other metadata.

Interesting quote (in the context of model extrusion from the floorplans):

Creating details of architectural entities relies heavily on empirical assumptions. Any template library that tries to cover all styles and designs will inevitably be huge and have potentially conflicting architectural styles.

I won't say anything more about this - it's mainly a list of image processing and vector recognition problems.



Interactive Visual Editing of Grammars for Procedural Architecture ( doi | pdf | acm | 2008 )
Markus Lipp, Peter Wonka, Michael Wimmer


The idea behind this paper is to take CGA Shape (an existing shape grammar) and allow visual editing of individual buildings. For example, you may want to change a given window's style (above, right). This can't be done in the grammar (it would either be too gross - changing many buildings in the city - or too fine, creating a horribly complex grammar).

The solution is a series of patches, one for each edit made to the base grammar. These patches change the decisions or evaluation of a rule (I wanted to say expression of a rule, as in gene systems...). Because the base grammar might change, or edits higher up the tree might knock on to an earlier edit, this is difficult. Furthermore, it's likely that features that are visible to the user aren't immediately visible in the grammar - for example the grammar in question defines a building one floor at a time, and hence has no concept of columns of windows.

So two concepts are defined to help an edit find the feature desired -
  • exact instance locator - the decision path through the underlying grammar is stored. By remembering the element number at each level in the hierarchy (eg the 34th house, 3rd floor, 2nd window) it is fairly robust to changes.
  • semantic instance locator - this is the technique for selecting more abstract constructions, eg columns of windows. Additional meta-data is added to the grammar, so that split or repeat operations tag each production with the given semantic property.
An algorithm is provided to robustify identification of these instances against changes in the grammatical structure. By redrawing the parse tree of the grammar wrt a semantic feature, it is easier to identify suitable instances.
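A sketch of how an exact instance locator might resolve - a stored list of child indices walked down the freshly re-derived tree (Node and resolve are my own names, not the paper's):

import java.util.List;

// The locator is the decision path through the derivation, eg house 34 ->
// floor 3 -> window 2, stored as the child index chosen at each level.
public class InstanceLocator {
    record Node(String label, List<Node> children) {}

    static Node resolve(Node root, List<Integer> path) {
        Node n = root;
        for (int i : path) {
            if (i >= n.children().size()) return null; // structure changed too much
            n = n.children().get(i);
        }
        return n;
    }

    public static void main(String[] args) {
        Node win = new Node("window", List.of());
        Node floor = new Node("floor", List.of(win, win, win));
        Node house = new Node("house", List.of(floor));
        System.out.println(resolve(house, List.of(0, 2)).label()); // window
    }
}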

The end of the paper deals with adding graphic controls to CGA shape (above, left). This amounts to optimizing the algorithm to run in real time and adding handles for the different features + a bunch of UI controls. The paper mentions building a general procedural modelling system, but that's quite a goal!