Tuesday, March 31, 2009

geometric robustness papers

I've suddenly lost faith in floating point (FP) arithmetic for geometry. This is best explained by the CGAL people. Here are my notes on papers I've read:



Classroom Examples of Robustness Problems in Geometric Computations ( web | Lutz Kettner, Kurt Mehlhorn, Sylvain Pion, Stefan Schirra & Chee Yap | 2006 )

This is a great introduction to the properties of floating point arithmetic in a geometric setting. The authors set out to describe a complete set of situations where floating point upsets a few simple algorithms.

The authors describe this with respect to calculating the convex hull (the gift-wrapping algorithm) and finding the 3D Delaunay triangulation of a set of points. They give the facts that the algorithms rely on, and show how these often fail to hold in floating point.

The most common cause of error is that when working with numbers of different magnitudes, the smaller one is always rounded to fit the precision of the larger one. When working with points, this means that the precision of a calculation is limited by the largest coordinate involved; in many situations these numbers can be spatially remote. The image below shows the results of trying to decide which side of the (black) line a point lies on. The point is moved over the (256x256) grid in the smallest steps allowed by double precision. The grid is then colour coded based on which side of the line the algorithm reports the point to be on (red, yellow and blue for right, on-the-line and left respectively).
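To make the failure concrete, here is a minimal sketch of the kind of predicate the figures are probing - my own Java, not the paper's code, with coordinates picked for illustration only:

// A minimal sketch (my code, not the paper's) of the orientation predicate behind the
// figures: which side of the line through a and b does c lie on?
public class Orient2d {
    // > 0 means c is left of the directed line a->b, < 0 right, 0 apparently on it.
    // Every subtraction and multiplication rounds to the nearest double, so for points
    // that are nearly collinear, or far from the origin, the reported sign can be wrong.
    static double orient(double ax, double ay, double bx, double by, double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    public static void main(String[] args) {
        // Walk c over a tiny grid in the smallest steps doubles allow, as in the paper's figures.
        // The line runs through (12,12) and (24,24); most of these c's are NOT on it, yet the
        // tiny offsets are swallowed when 12 is subtracted, so every point reports 0 ("on the line").
        double cy = 0.5;
        for (int i = 0; i < 5; i++) {
            double cx = 0.5;
            for (int j = 0; j < 5; j++) {
                System.out.print(Math.signum(orient(12.0, 12.0, 24.0, 24.0, cx, cy)) + " ");
                cx = Math.nextUp(cx);   // one unit-in-the-last-place step to the right
            }
            System.out.println();
            cy = Math.nextUp(cy);       // and one step up per row
        }
    }
}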


Different lines are examined in each of the three images - the patterns of the errors are quite non-intuitive. We often see points classified on the wrong side of the line.

The patterns get messier if we look at higher precision floating point types. The paper does a very good job at explaining how these properties of floating point can lead to gross errors in the algorithms.

It briefly looks at techniques that don't solve the problem - adding an epsilon value (video below) and perturbing the input.

Here is my video of animating the epsilon value in a geometric floating point context:


We're evaluating the same function to determine which side of a line a point is on (except the horizontal is flipped). Over each frame we move the point around. Blue is one side, red the other, yellow on the line. We move the point by the smallest quantity allowed by the double precision floating point system.

The animation shows an increasing "fudge factor" or epsilon added to try and make the computation more robust. The boundary stays just as strange!

Animating the epsilon value of the first example in the paper from 0 in steps of 2^-44 @ 3 frames per second. Java code.
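For reference, the epsilon tweak the animation explores amounts to nothing more than this (my own illustration, not the code behind the video):

// The epsilon heuristic only moves the decision thresholds; the ragged boundary between
// the three answers survives, it just becomes (roughly) epsilon wider.
public class EpsilonSide {
    static int side(double det, double eps) {
        if (det > eps)  return 1;    // "left"
        if (det < -eps) return -1;   // "right"
        return 0;                    // treated as "on the line"
    }

    public static void main(String[] args) {
        double eps = Math.pow(2, -44);            // the step size used in the animation
        System.out.println(side(1e-15, eps));     // a genuinely positive determinant is swallowed: prints 0
    }
}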




Construction of the Voronoi diagram for `one million' generators in single-precision arithmetic ( pdf | ieee | doi | Sugihara, Iri | 1992 )



Hence, we change our goal; we try to construct a diagram which shares some topological properties with the true Voronoi diagram....the output "converges" to the true Voronoi diagram as the precision in computation becomes higher.

To do this the paper's authors present a version of the incremental-insertion Voronoi algorithm that modifies the topology from one consistent state to the next. The basic step is to query the topological model before each cell is included in the insertion of a new point.

Given a new point to insert (i, below), the topological model has three rules relating to finding the set of cell-corners to modify:
  • The set of cell corners is not empty
  • The set of cell corners form a tree
  • Every corner is reachable from every other corner by traversing existing Voronoi edges that lie between cells in the set.
These properties are not shown to be necessary and sufficient to derive the Voronoi diagram (I think they are, elsewhere - see the references in the next paper, Topology-Oriented Implementation), but they get close, selecting topologically consistent cells. The algorithm "flood-fills" a region of cell-corners, starting with the one nearest to the new point (blue, i, below), collecting cell-corners (ii, green dots) that satisfy these conditions. Once all adjacent corners have been shown to be inadmissible (red dots), all the admissible ones can be proved to have been found. Then (iii) the Voronoi graph and the topological map are updated, and we move on to the next new point:
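My reading of that insertion step, as a hedged sketch - the interface, the tree test and the numerical query are my own placeholders, not the paper's data structures:

import java.util.*;

// Sketch of the topology-guided collection step: grow a connected set of Voronoi
// vertices ("cell corners") to delete, starting from the corner nearest the new
// generator, and only admit a corner if the set stays a tree. VoronoiVertex,
// neighbours() and closerToNewPoint() are placeholders of mine.
class TopologyOrientedCollect {

    interface VoronoiVertex {
        List<VoronoiVertex> neighbours();                 // adjacent Voronoi vertices
        boolean closerToNewPoint(double px, double py);   // the inexact FP test: should this corner die?
    }

    static Set<VoronoiVertex> collect(VoronoiVertex start, double px, double py) {
        Set<VoronoiVertex> tree = new HashSet<>();
        Deque<VoronoiVertex> frontier = new ArrayDeque<>();
        tree.add(start);        // rule 1: the set is never empty
        frontier.add(start);

        while (!frontier.isEmpty()) {
            VoronoiVertex v = frontier.poll();
            for (VoronoiVertex n : v.neighbours()) {
                if (tree.contains(n)) continue;
                // topological guard (a simplified stand-in for rule 2): adding n must keep
                // the set a tree, i.e. n may touch the current set through exactly one edge
                long links = n.neighbours().stream().filter(tree::contains).count();
                // numerical guidance: the floating point test only gets a vote when topology allows it
                if (links == 1 && n.closerToNewPoint(px, py)) {
                    tree.add(n);
                    frontier.add(n);
                }
            }
        }
        return tree;   // the corners to delete before stitching in the new cell
    }
}

However badly the numbers misbehave, the collected set stays connected and non-empty, so the topological update that follows is always well defined.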

vor

There is also a section on how to compute with the least floating point inaccuracy, but it isn't as interesting. The cunning work here was breaking the algorithm down into a set of topological manipulations that are always valid.

Sometimes the floating point output isn't valid, that is, cells and lines overlap. But that happens in areas proportional in size to the error margin (FP accuracy) - they could easily be post-filtered out if required. The robustness of the algorithm means it can produce output such as the million points in the above (black and white) figure, which would otherwise cause visible trouble.

The approach can be seen as using the topology as the central structure that is guided by the inaccurate floating point calculations.



Topology-Oriented Implementation—An Approach to Robust Geometric Algorithms ( doi | pdf | K. Sugihara, M. Iri, H. Inagaki & T. Imai | 2000 )


This is another presentation of the previous material with a couple of different examples. Logical and combinatorial computations can be done correctly while geometric ones are unreliable.

The overview is that geometric algorithms have a set of topological solutions that branch from each other. One of these is the correct solution, but as long as we end up somewhere on the tree of valid solutions we're doing OK. The floating point gods (inaccuracies) decide which branch the algorithm will follow. More formally (quoted):
  • Collect purely topological properties that should be satisfied by the solutions of the problem and can be checked efficiently
  • Describe the basic part of the algorithm only in terms of combinatorial and topological computation in such a way that the properties are guaranteed
  • Conduct numerical computations at each node of the tree in order to choose the branch that is most likely to lead to the correct solution
Since we can't even detect degenerate geometric cases consistently, the output might be degenerate, but if well designed, the degeneracies will be at around the floating-point-accuracy scale. As the precision of the FP calculations increases, the solution converges on the desired one.

The paper includes examples for surface-slicing and Voronoi tessellation.



Lazy Arithmetic ( doi | pdf | ieee | Dominique Michelucci & Jean-Michel Moreau | 1997 )

This is a subject overview paper.

It outlines several solid geometric examples that are broken by floating point arithmetic -
  • The point where two lines cross is on a different side of each line
  • Zero determinants of non-singular matrices (the result changes depending on how the calculation is done)
  • Lexicographical ordering of line crossings (comparisons between the nearly-equal coordinates of crossing points give inconsistent answers)
  • Simple computations expressed in different ways suffer different truncation limits, so one equation has two answers
  • Pappus' theorem doesn't generally hold
  • Borderline cases when using matrix determinants to identify angles less than or greater than a right angle
Two main classes of ways of dealing with these problems are presented - inexact and exact algorithms.

Inexact algorithms do not produce the exact mathematical result, but approximate it. Since FP is only an approximation in the first place, these can be viewed as a set of tools for ensuring that the algorithms do not end up in an inconsistent state (they don't loop infinitely or terminate abnormally).
  • Warned programming involves coding tricks using limited reasoning (topological data or exact number representation), applied as each borderline case is found.
  • Epsilon Heuristics - assume that two items are identical if they are within a given range (epsilon). There is no way to guarantee that this approach will produce a consistent internal state - we are effectively shuffling points in one region of space.
  • Symbolic Programming - by ensuring that the algorithm's choices are guided by topological or combinatorial reasoning, floating point errors will only change the path taken, not create invalid results. This is the idea presented in the Topology-Oriented Implementation paper by Sugihara.
  • Historical Book-keeping - by keeping a list of decisions and ensuring that all following decisions are consistent with previous ones, the world stays consistent from the algorithm's perspective. This approach is described as being cumbersome and using lots of memory.
  • Confidence Intervals - formalised epsilon heuristics - the result of a floating point calculation can be "I don't know" when FP is known to be unreliable (a sketch follows this list).
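The confidence-interval idea in particular is easy to sketch - this is my own illustration, with a crude hand-waved error bound rather than a carefully derived one (Shewchuk-style filters do the bound properly):

// Orientation test that is allowed to say "don't know": compute the determinant in
// doubles, keep a (crude) bound on the rounding error, and refuse to decide when the
// value falls inside the bound.
public class GuardedOrient {
    enum Side { LEFT, RIGHT, DONT_KNOW }

    static Side side(double ax, double ay, double bx, double by, double cx, double cy) {
        double detLeft  = (bx - ax) * (cy - ay);
        double detRight = (by - ay) * (cx - ax);
        double det = detLeft - detRight;
        // Placeholder bound: a small multiple of the ulp of the magnitudes involved.
        double errBound = 4.0 * Math.ulp(Math.abs(detLeft) + Math.abs(detRight));
        if (det >  errBound) return Side.LEFT;
        if (det < -errBound) return Side.RIGHT;
        return Side.DONT_KNOW;   // caller must fall back to something exact, or refuse to decide
    }

    public static void main(String[] args) {
        // A point one ulp above (0.5, 0.5), tested against the line through (12,12) and (24,24):
        // it is genuinely off the line, but the doubles can't tell, so we get DONT_KNOW.
        System.out.println(side(12.0, 12.0, 24.0, 24.0, 0.5, Math.nextUp(0.5)));
    }
}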
Entirely exact methods, on the other hand, always calculate the true value of every expression. Since we're often interested in boolean answers (is this point on the left or right of that line?), this means a lot of additional work.

Ten random points with double-precision coordinates were triangulated using floating-point arithmetic in .1 seconds; 10 random points with rational coordinates (two-digit numerator, three-digit denominator, in base 2) were triangulated using rational arithmetic in 1,200 seconds, generating intermediate values with as many as 81 digits. [Efficient Delaunay Triangulation Using Rational Arithmetic: Karasick et.al]

The additional processing makes exact methods prohibitively expensive, so mixed methods are introduced. These first perform the calculations at a given precision, but track the error bounds to ensure the result is correct. Classes of language such as XSC do this automatically. If the result is uncertain around the bounds, the algorithm reverts to exact arithmetic.

One approach (Karasick/Yamaguchi) converts all calculations to computing the sign of a 4x4 determinant. This is calculated using the most significant bits first, adding additional bits until the result is unambiguous. While I haven't seen enough geometry to say whether all problems can be reduced to calculating the determinant of a matrix, I am sure many of the important ones (which side of a line does a point lie on?) can be. Fortune and van Wyk implement an automatic system along these lines that compiles down to C++ source.
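A toy of that filter-then-refine pattern, with BigDecimal standing in as the exact fallback instead of the bit-by-bit refinement the paper describes - so this is my sketch of the flavour, not of their algorithm:

import java.math.BigDecimal;

// Sign of a 2x2 determinant: try cheap doubles first, and only when the result is
// ambiguous redo the computation exactly (every double converts exactly to a BigDecimal,
// so products and differences of doubles can be evaluated without error).
public class StagedDetSign {

    static int sign(double a, double b, double c, double d) {
        // Stage 1: floating point with a crude error bound.
        double det = a * d - b * c;
        double err = 4.0 * Math.ulp(Math.abs(a * d) + Math.abs(b * c));
        if (det >  err) return 1;
        if (det < -err) return -1;

        // Stage 2: too close to call, so go exact.
        BigDecimal exact = new BigDecimal(a).multiply(new BigDecimal(d))
                .subtract(new BigDecimal(b).multiply(new BigDecimal(c)));
        return exact.signum();
    }

    public static void main(String[] args) {
        double x = Math.nextUp(1.0);   // 1 + 2^-52
        // In doubles the product x*x rounds to exactly the value passed in as c, so stage 1
        // sees zero; exact arithmetic recovers the true positive sign and prints 1.
        System.out.println(sign(x, 1.0, x * x, x));
    }
}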

Several of the exact methods pre-process a specific problem to calculate how much precision is required, then calculate using these figures.

At this point the authors point out that using a fixed-precision representation ("long integer libraries", e.g. BigNum) means that division often has to be barred for fear of recurring decimals. All of the presented exact methods need lots of additional work from the programmer.

The paper goes on to introduce Lazy Evaluation - floating point calculations are performed until the error is large enough to introduce uncertainty, at which point the system reverts to exact evaluation.

The operations performed on the values ("Lazy Numbers") are tracked by a parse-tree-like graph (as in symbolic computing) specifying operators and operands. When an operation is ambiguous (within the error bounds), the graph is evaluated using exact arithmetic.
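A very stripped-down sketch of the idea - nothing like a real lazy library, and the error propagation is deliberately crude - just to show the shape of it:

import java.math.BigDecimal;

// A "lazy number": each value remembers how it was built (the expression DAG), carries a
// cheap double approximation with a rough error bound, and only evaluates itself exactly
// when a sign query is too close to call. Names and bounds are mine, not the paper's.
abstract class Lazy {
    final double approx;   // floating point guess
    final double err;      // (hand-waved) bound on |approx - exact value|

    Lazy(double approx, double err) { this.approx = approx; this.err = err; }

    abstract BigDecimal exact();   // walk the DAG and evaluate exactly, on demand

    static Lazy of(double d) {
        return new Lazy(d, 0.0) { BigDecimal exact() { return new BigDecimal(d); } };
    }

    Lazy sub(Lazy o) {
        Lazy self = this;
        double a = approx - o.approx;
        return new Lazy(a, err + o.err + Math.ulp(a)) {
            BigDecimal exact() { return self.exact().subtract(o.exact()); }
        };
    }

    Lazy mul(Lazy o) {
        Lazy self = this;
        double a = approx * o.approx;
        // first-order error propagation plus a rounding term; a serious implementation
        // would use proper interval arithmetic here
        return new Lazy(a, Math.abs(approx) * o.err + Math.abs(o.approx) * err + Math.ulp(a)) {
            BigDecimal exact() { return self.exact().multiply(o.exact()); }
        };
    }

    int sign() {
        if (approx - err > 0) return 1;    // the cheap answer is trustworthy
        if (approx + err < 0) return -1;
        return exact().signum();           // too close to call: pay for exactness
    }
}

An orientation predicate built from of/sub/mul then just calls sign(), and only the near-degenerate cases ever touch the exact evaluator.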

This seems like a dream answer for Declarative programming enthusiasts (Haskell!).

The cost of Lazy Arithmetic is given as a 4-10x decrease in speed, which, considering it's a big leap away from the computer's architecture, is very pleasing. Even better, it's claimed to be 150 times faster than purely exact methods. In systems where this method is used (CGAL) the cost appears to be much larger for more complicated algorithms.

The paper describes a method for hashing lazy numbers, and shows that non-trivial equality tests can be expensive. The system implemented only does very simple algebraic/symbolic equivalences; otherwise expensive tests are needed. (Probabilistic hashing is given as a possible solution.)

Problems with Lazy Numbers include
  • lack of hardware support,
  • need for the programming language to allow new number types, along with operators on those numbers.
  • conversion back to FP for output is problematic - polygons may self-intersect after processing. (But at least the algorithm will run, and at least the errors will be close to the size of FP accuracy)
  • numbers such as pi or e can still cause lots of problems

Monday, March 30, 2009

copyright vs fun [rant]***

Copyright is a privilege that the population has given to creators. Governments allow this copyright deal - that the contents of a published work can only be published by the author for a given time - in the hope that creators create more. The child's argument goes "I made it, so it's mine"; the grown-up's, "creation of a work takes time, the author deserves to be repaid". But I think these arguments are getting less persuasive as technology advances.

My main problems with copyright are that it is a false economy, and that for a lot of media it is not in the interest of people. I'm a student & there's little reason for me to support copyright* - I'm not invested in the system. I own little valuable IP, I expect to create very little, but every day my activities are constrained by copyright law.

"The American people would just have to get over the fact that software no longer had any economic values. It wasn't fair, it wasn't just, but it was a fait accompli. In many ways, Oscar had to give the Chinese credit for their cleverness in making all English language intellectual property available on their nets at no charge. The Chinese hadn't even needed to leave their own borders in order to kick the blocks out from under the American economy." - Bruce Sterling's Distraction.

But it doesn't need to be China to expose the deceptive value of copyright; the twak consortium (or the seasteaders) might declare moonbase alpha its very own sovereign territory and remove copyright. Moonbase alpha can then listen to all of Earth's music without charge - it's a no brainer for those uninvested in copyright.

Massive piracy means that copyright is making most consumers illegal. Piracy is commonplace, trivial and increasingly accidental (copyrighted works in the background of hi res photos, music in the background of a youtube video, buying used software often breaks the EULA). It is becoming very difficult to live a life in which you do not break copyright laws. Perhaps this should be taken as a hint that they aren't compatible with today's technologies.

Industries that rely on copyright have started to look dubious. Print newspapers' revenue is in free fall, the music industry is trying to sue its customers and book publishers are trying to stop computers reading their books aloud. These are not activities that secure, upcoming companies engage in. The companies in these sectors that are doing well are doing so because they are moving away from a copyright-dependent model. Perhaps they're worried that their wealth is based on a bubble? (In the light of current financial problems, it might be unsporting to point out that we're building a bubble around IP that could burst). It could be burst by another country as an act of war or it could be burst by new technology (undetectable music copying?). Overnight our copyrighted material, our copyrighted assets, might be worth as much as the canvas, paper or disk that they're held on. We've added a lot of value to something that isn't tangible, and is making less and less sense every day. If that bubble bursts, lots of value will be dumped from the big companies - it's best we deflate the bubble slowly, rather than wait for the time bomb to explode.

So the main question is how much content wouldn't get published without copyright? And how much broader content would each person experience without copyright? Here's a fun gedankenexperiment:

We only need copyright on things that are expensive to create. On things that wouldn't be created without it. Music is fun to make, people can do it very cheaply in the evenings, and they enjoy it. What about abolishing music copyright? Would the population as a whole suffer? All currently available music would be free to all people - I think the value of this to people is more than the value of the record companies. Distribution by the record companies isn't needed, the net is a suitable medium - music is quick and easy to download and many people who like music use portable mp3 players and never see CDs. However the very popular artists with all the money would lose most of their incomes, as would the record companies. But would music continue to exist? Yes - we have huge backlogs of material that we mostly don't get access to because of copyright. Would music continue to be made? Yes - it's very cheap to create music (you can do it in your basement), and more importantly it's so much fun you couldn't stop people creating it if you wanted to.

This is very much a socialist argument. Its benefits wouldn't be measurable in market capitalizations or economic figures; its benefits would be in the breadth and quality of music people end up listening to every day.

In which other arenas is copyright unnecessary? Photography maybe, and there aren't too many other media for which the above argument holds today, but films and video games are starting to get to the point where they can be made to a comparable quality by people in their bedrooms, for fun. Not-for-profit feature films include Bloodspell and Star Wreck; their quality isn't up to it yet, but it will be one day.

We're just waiting, in all these media, for the quality of the work by people-who-do-it-for-fun to get good enough. It doesn't have to be perfect, it just has to be sufficient, such that 90% of the people don't notice it 90% of the time. As more people get involved in doing it for fun, the quality gets better. You can look at the "most interesting" photos in flickr and the quality is immaculate (a million monkeys with a million Canon 400d's). This is because of the number of people involved, the sophistication of the technology they use, and because it's fun!

Copyright protects the establishment. In a certain way I think it stops people being able to publish their own work. People have been groomed into accepting high production values in their media. What if the established publishing houses disappeared - if the Hollywood studios vaporized?** I might start reading more interesting books and watching a broader range of movies. So much is spent on polish and advertisements that smaller artists are drowned out (do you really think that the profits of an artist correlate with how good their music is?). Social media techniques like digg.com would allow you find what other people think is good, rather than who's paid most for advertising.

Of course this doesn't reach to all areas, copyright in more hi-tech industries still seems to be needed (such as software engineering), until they too can be done entirely in the bedroom. When I can ask a computer "computer, write me a word processing application!" I think the copyright on software should be abolished (and probably patents too, but that's an argument for another day).

modder
modders: ye be warned

But today we have to ask if copyright is still a benefit to the people? It has got to the point where copyright is holding back a tide (academic paper on the subject) of new exciting & valuable content. Copyright prevents derivative works - new content from that which exists. The Grey Album or American Edit are great bits of music that the world will not hear because of copyright. This work cannot take place under copyright law - the people who do it for fun can't afford to use copyrighted materials. Licensing copyrighted media is expensive, time consuming and not much fun at all.

My final point is that we may soon find ourselves in a world where computers can create desirable content unassisted (this is part of my work, finding tools to help (and maybe replace) 3D artists). When a computer can write you a pop song, does copyright make sense?

Copying can't be stopped, as long as I can point a camcorder at the screen of my telly; As long as I view the movie with my eyes and store it in my brain. Copyright just feels broken.

* A land of copyright-haves and have-nots. I own little copyright, I consume huge quantities of copyrighted material. So how much of it would have been written without copyright? What incentive do I have to welcome stronger IP laws?
** You can tell I'm trying to be persuasive - I keep using question marks?
*** sorry for the rant. had to be said. yes the blog is copyright. yes I'll put it under CC, but not until next year...


[edit] this is another summary of the issues surrounding copyright:

View more presentations from Rob Mcminn.

Sunday, March 29, 2009

bike review giant scr 2

So I've had this bike since June last year (9 months). I've given it some real punishment, and thought I'd jot down some stats and figures. It's probably equivalent to Giant's 2009 Defy 3 or 2.5.

mr bike on holiday

Now I've been careful with it, but it's been ridden from west Spain to east Denmark, been on a couple of planes, carted around on more trains than I can remember, and left to fend for itself in the back of a transit with 10 mtb's (gang bang!). It's also just gone through a Glasgow winter, which means freezing nights, being drenched in salt water (road grit) daily and potholes like nothing I've seen in the western world. I think I've done about 5000km over about 8 months.

you don't have to go home...

The only changes I made to the bike before touring were a pannier rack (yes, the Defy and SCR ranges are drilled for pannier racks) and SPD cleats.

This was the first bike I've had with cartridge brakes. Me no like. Stopping power is pants, and they need replacing quite frequently - and it costs 5 quid for a new pair. Will be looking to replace these with some v-brakes. Some of the problem was that the front rim became so scratched (by crap stuck in the brake pads) that it slices through pads at a rate. This means a whole new front wheel to replace, as well as the back with its bust hub...

bike box

When I was touring (15-20kg of gear, but I'm only 65kg) the back wheel lost 2 spokes every 3 days for a fortnight before it was replaced (with a Mavic Aksium). Bit annoyed with this - it might have been shunted while in the airplane hold (above)... but there was nothing visible and it was really well packed up. The original front wheel has held up with minimum hassle tho.

and so it starts

I was really impressed with the Mavic Aksium replacement back wheel. No dents or creases, and it has straight-pull spokes (not the j-ones) which are awesomely hard to break. So hard, in fact, that the hub gave out last weekend (after 6 months) when one of the steel spokes was pulled through the aluminium fastening - the spoke still looks like new, the hub is a write-off (seems to happen to other people as well). Still, I rode 80 miles one spoke short (after some spoke-key persuasion, and disconnecting the back brakes to allow for 1.5cm of wobble) and it didn't get any worse. I'm going to try and repair the wheel, so that says something about how much I like it.

mavic aksium fail

Was totally shocked when the rear mech hanger gave out on me for no good reason 7 months in, leaving me stranded on the edge of town. It was torn in two at what looks like a weak point where one of the nuts goes through. Thing is, you never check the condition of the hanger, because you can't see it and, let's face it, it's an inanimate lump of metal. Was cheesed off tho, because when it went it snapped the chain, meaning a new chain and rear cassette. Plus my back wheel locked up in the middle of a busy junction... but that's another story. There aren't many Giant dealers around, and normal bike shops don't carry mech hangers for all bikes, so in the end it was a mail order job. Because I'm paranoid, I now carry a spare hanger.

broken part

The bundled tyres weren't up to much either. They ate tread quickly and punctured easily. Ended up replacing them with some kevlar ones (Vittoria Rubino Kevlar) that are holding up well, and their yellow nicely matches the panniers.

Running costs for 8 months (pounds sterling):

20 misc spokes
30 Replacement pads (front and back x 3 at five quid each)
20 Chain
25 Rear cassette
15 Rear mech hanger
75 back wheel - Mavic Aksium
130 for touring wheel pair - Deore hubs and Mavic A719 rims (300g heavier, but am fed up with going through rear wheels so fast)
25 tyres

So that's a non-trivial 340 to keep the bike on the road for 8 months, about 3/4 the cost of the bike. Then again it's about 7 pence per km, or 7 pounds for every day out/100km. That's about the cost of fuel for a car doing the same distance. Other (optional) stuff I've bought includes:

[edit:]
Last week my left shifter barfed out in the middle of a tour (10 months after purchase) stranding me in bottom gear for two days before I botched on a temp thumb shifter. It would have been another £150 on that list, if it wasn't for the fact that I managed to get home to the dealer the bike came from, and got him to get giant to buy me a new shifter.
[end edit]

40 Tools/oil
25 Saddle bag
60 SPD pedals and shoes
20 Pannier (+90 for Panniers, indestructible and fantastic Ortliebs)
60 Lights (already had a helmet)
30 Lock (that was certified by the insurance company, and is too heavy to really carry around)

235 in accessories and vanity goods... but these are by choice.

Stuff I still want
  • V-brakes - Glasgow eats cartridge brakes for breakfast and they're fuppin expensive.
  • A saddle bag not made by Topeak (both of mine have crapped out on me very quickly - if you buy anything by Topeak, expect all the plastic catches on it to crap out quickly (two saddle bags and a multi tool have failed this way))
  • New front light (Cateyes turn out not to be waterproof)

IMG_8858

I guess the long and short of it is that a road racer isn't a tourer. Racers are for 2 hours every weekend, instead of going to the gym. Touring bikes are heavier and can take extended punishment. I want both, but am poor; generally I'm leaning towards a tourer now. For all its problems I've liked the bike, it's been a great introduction to serious biking etc... & it's been a whole lotta fun :)


Saturday, March 28, 2009

at least we're good at being told what to do

Here's something that passed me by last year: the country that spends most on online advertising, per person, is the UK - 214 dollars for every netizen. If you're going to do an online start-up with an advertising-based business model, there's some serious thinking to do about where it's going to be easiest to get going - the US is down in 4th place. (via 'crunch and PWC)



Country | Unique visitors / internet users, Apr '08 (000s) | Interactive ad spend '08 ($M) | Per capita spend ($)
United Kingdom | 34124 | 7302 | 213.98
Australia | 11229 | 1673 | 148.99
Denmark | 3438 | 496 | 144.27
United States | 190728 | 25200 | 132.13
Germany | 34539 | 4105 | 118.85
Sweden | 5785 | 651 | 112.53
Netherlands | 11029 | 1047 | 94.93
Japan | 55260 | 4965 | 89.85
France | 26263 | 2326 | 88.57
Italy | 18997 | 1450 | 76.33
Canada | 23947 | 1582 | 66.06
Finland | 2984 | 192 | 64.34
Norway | 2802 | 179 | 63.88
Spain | 15705 | 906 | 57.69
Rest of Europe | 82441 | 2058 | 24.96
Brazil | 19320 | 402 | 20.81
Mexico | 11773 | 119 | 10.11
India | 28375 | 102 | 3.59
Rest of Latin America | 30516 | 60 | 1.97

Friday, March 27, 2009

robust straight skeleton



Have been trying to figure out how robust you can make the skeleton to floating point errors. I suspect it's impossible to do entirely, without it reducing to a very expensive operation, but am not really sure. (edit: source code here)

Wednesday, March 25, 2009

marvin the robot

I realised that some of my old projects never got put back onto the web, like they deserve. So presenting my UCSC CMPS160 final project marvin-the-robot

marvin the robot

It was made in C++, with a meg/sec memory leak to show for its troubles (this was a work of graphics, not engineering). The windowing code came from the fltk toolkit. The only bit of code that might be useful to someone is the downhill iterative IK solver that's in there, somewhere.
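The original solver is long gone as far as I know, so here is a generic sketch of the downhill/iterative flavour of IK (cyclic coordinate descent on a 2D chain) - entirely my own illustration, not the project's code:

// Cyclic coordinate descent IK on a 3-bone planar chain: repeatedly rotate each joint so
// the end effector swings towards the target. A stand-in for the "downhill" solver idea.
public class SimpleIK {
    static double[] angles  = { 0.3, 0.3, 0.3 };   // joint angles along the chain
    static double[] lengths = { 1.0, 1.0, 1.0 };

    // forward kinematics: position of joint i (i == lengths.length gives the end effector)
    static double[] jointPos(int i) {
        double x = 0, y = 0, a = 0;
        for (int j = 0; j < i; j++) {
            a += angles[j];
            x += lengths[j] * Math.cos(a);
            y += lengths[j] * Math.sin(a);
        }
        return new double[] { x, y };
    }

    static void solve(double tx, double ty, int iterations) {
        for (int it = 0; it < iterations; it++) {
            for (int j = angles.length - 1; j >= 0; j--) {   // from the tip back to the root
                double[] joint = jointPos(j);
                double[] end   = jointPos(angles.length);
                // rotate joint j so its joint->effector direction lines up with joint->target
                double toEnd    = Math.atan2(end[1] - joint[1], end[0] - joint[0]);
                double toTarget = Math.atan2(ty - joint[1], tx - joint[0]);
                angles[j] += toTarget - toEnd;
            }
        }
    }

    public static void main(String[] args) {
        solve(1.5, 1.5, 20);
        double[] e = jointPos(angles.length);
        System.out.printf("end effector: %.3f, %.3f%n", e[0], e[1]);   // should land near (1.5, 1.5)
    }
}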

Controls:
  • R - change camera mode
  • 1..6 - ask marvin to do some actions
  • < > - move camera forward / back
  • [ ] - move camera in / out

It's an IK example of a robot riding around a small moon endlessly. It was written in C++; the source is here - use it under the normal WTFPL.

The binaries for OS X are here. If anyone builds it, it'd be great to have some binaries for other OSes; also, if you've got a mac powerful enough to run the app and do a screen capture at the same time, a youtube video would make me very happy :)

    Tuesday, March 24, 2009

    Eurographics 09 papers

A couple of procedural-ish papers from this year's Eurographics:



    Arches: a Framework for Modeling Complex Terrains ( [tba:eurographics] | pdf | web | 2009 )
    A. Peytavie, E. Galin, J. Grosjean, S. Merillou


    This paper presents a method for generating and editing terrains that include overhangs.

A discrete block model describes columns of materials. This is covered by a simple implicit surface (using a cubic kernel giving C0 continuity) to smooth it. The block model describes which material is present between which heights, and is presented as a lower-memory alternative to a voxel system.

These techniques are intended for use in real-time editing, so instead of complex physical models that can only be used offline, Arches uses a simplistic physical simulation in each column. Where sand and loose rock are clustered above bedrock, the materials are adjusted to form contiguous columns. Then adjacent columns with the same materials are allowed to settle (with different viscosities etc.).
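A hedged guess at the flavour of that representation - the names and layout below are mine, inferred from the paper's description, not their data structure:

import java.util.ArrayList;
import java.util.List;

// Each terrain column stores a stack of (material, thickness) layers rather than a full
// voxel column, so memory scales with the number of material changes, not with height
// resolution. The "settling" here is a deliberately crude placeholder.
class TerrainColumn {
    enum Material { BEDROCK, ROCKS, SAND, WATER, AIR }   // ordered roughly densest-first

    static class Layer {
        Material material;
        double thickness;   // vertical extent of this layer
        Layer(Material m, double t) { material = m; thickness = t; }
    }

    final List<Layer> layers = new ArrayList<>();   // bottom to top

    // Which material is present at a given height above the column base?
    Material materialAt(double height) {
        double top = 0;
        for (Layer l : layers) {
            top += l.thickness;
            if (height < top) return l.material;
        }
        return Material.AIR;
    }

    // Crude per-column "physics": let denser materials sink so the column becomes contiguous.
    void settleLooseMaterial() {
        layers.sort((a, b) -> Integer.compare(a.material.ordinal(), b.material.ordinal()));
    }
}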

The bedrock layer can be edited using a tool that adds or removes material from the columns, taking into account the fact that the user is viewing the implicit surface. This just adds a sphere of material at a time, and isn't particularly useful when building up large cliffs - so some sweep tools were also introduced.

An erosion tool takes a volume of bedrock and converts it to loose rock and sand. These are then allowed to settle to give the appearance of decay. Similarly, loose rocks are made by tessellating a volume of bedrock using Voronoi cells, then eroding the outlines, leaving contact points unchanged. As these rock piles are not the result of physical simulation they might not be physically plausible, but the results are good enough.

To render the scene, the implicit surfaces of the bedrock and sand are polygonized. The loose rock's implicit surface is polygonized separately. To avoid computing UVs, a 3D volumetric texture is used that includes the normals of the local surface.

    This paper contains quantities of unnecessary equations. There's something inside every computer scientist that believes a really good paper has to have lots of equations! It's also an offline system - the textures and meshes are generated from the column model at design time and we only see "normal" meshes and textures at run time - the potential is almost there for real time generation, but it just falls short.

    The results are very nicely presented and represent a system that is an obvious improvement on current games-industry practice. I've been meaning to review more terrain generation techniques, because it's one of the best sources of mesh generation literature.

Cheeky note: this system of columns for representing rock was used in an undergrad project ("mudslingin'") to represent a 2D landscape of mud and water - it was even fast enough to calculate 2D "water" (aka jelly) simulations in real time :)




    Structure Preserving Reshape for Textured Architectural Scenes ( [tba:eurographics] | pdf | web | 2009 )
    Marcio Cabral, Sylvain Lefebvre, Carsten Dachsbacher, George Drettakis




This paper presents a method to intelligently reshape existing 3D models. By using context-sensitive image resizing and attempting to keep angles constant, an artist's models can be manipulated into a new shape by dragging a few handles around.

    An input model is pre-processed into a set of constraints and a solution. Then the model can be reshaped using a set of point handles, while the constraints are satisfied to position each vertex in real time.

There are a few different constraints: edge direction constraints attempt to keep the angle between two consecutive polygons close to the original; edge length constraints attempt to preserve the length of shorter edges more than longer ones, with a minimum edge length to stop degeneration. When two parts of the mesh touch but are not connected, a contact constraint enforces continuity - it triangulates one of the touching faces and constrains the other part of the mesh to this position. To capture the requirement that ornaments (windows etc.) stay the same size, similar sets of faces are detected and (optionally) constrained.

This set of hard and soft constraints is dumped into a matrix and solved using least squares. At run time the user's input is projected using the found solution and analyzed - if it is degenerate, the user's edit is rejected.
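For the flavour of dumping soft constraints into a matrix and least-squaring it, here is a toy with two unknowns (entirely my own illustration, nothing to do with the paper's solver):

// Soft constraints as weighted least squares: each constraint is a row a.x = b with a
// weight, and with only two unknowns the normal equations (A^T W A) x = A^T W b can be
// solved with an explicit 2x2 inverse.
public class SoftConstraints {

    // rows of {a0, a1, b, weight}, each penalising weight * (a0*x0 + a1*x1 - b)^2
    static double[] solve(double[][] rows) {
        double m00 = 0, m01 = 0, m11 = 0, r0 = 0, r1 = 0;
        for (double[] c : rows) {
            double w = c[3];
            m00 += w * c[0] * c[0];
            m01 += w * c[0] * c[1];
            m11 += w * c[1] * c[1];
            r0  += w * c[0] * c[2];
            r1  += w * c[1] * c[2];
        }
        double det = m00 * m11 - m01 * m01;   // invert the 2x2 normal matrix
        return new double[] {
            ( m11 * r0 - m01 * r1) / det,
            (-m01 * r0 + m00 * r1) / det
        };
    }

    public static void main(String[] args) {
        // x0: length of a short edge, x1: length of a long edge.
        double[][] rows = {
            { 1, 0, 1.0, 10.0 },   // keep the short edge near 1.0 (strong weight)
            { 0, 1, 4.0,  1.0 },   // keep the long edge near 4.0 (weak weight)
            { 1, 1, 6.0,  1.0 },   // a user handle asks the total length to stretch to 6.0
        };
        double[] x = solve(rows);
        System.out.printf("short %.2f  long %.2f%n", x[0], x[1]);
        // the weakly weighted long edge absorbs most of the stretch, as intended
    }
}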

Unlike the content-aware image resizing that made the front page of digg last year, the texture resizing algorithm is specialized for rectilinear architecture. The input texture is analyzed for self-similarity, and only those parts that are self-similar are stretched on a rectangular grid. Sample patches are used to reduce blurring.

    This paper is one in a growing trend towards manipulating existing meshes into new forms. I have always felt that this approach isn't competitive with outright procedural generation for the pro-artist due to over constraining the user:

    "To enable interactivity, we rely on a simple solver and allow it to fail or reuse user input if this leads to degeneracies (ie collapsing edges or collisions). We argue this behavior is reasonable since the user has full freedom to assign new handles and guide the solver in avoiding degeneracies)"

However, as a method for quickly splicing together geometry for an untrained user, this looks really effective. The video is really worth checking out.

    Thursday, March 19, 2009

    when an irresistible force meets...outlook!

    [edit] apparently this problem's been fixed in the latest version of Elgg.

    So after signing up to SICSA's Elgg (social networking hoodicky) I had a bit of a mail fail:

    mail fail!

    Found some great new toys debugging this one:
    • Export Cookies allows you to de-sql-arize the Firefox cookies file and use the output in wget's --load-cookies argument.
    • Live HTTP headers allows you to view the headers for all the page's content as it streams by.
    Combining the power of these two showed what happened - Outlook's web access (gobshite!) uses javascript forwarding on all email links...

    window.location.href = foo.html

...but Elgg's email confirmation script forwards to the referrer (for some unknown reason; perhaps the idea is that you never see it). So we have the irresistible force of Outlook vs the immovable object of Elgg, giving a redirect loop that sends me an email every time round. The following downloads google's homepage (yay for cheap HTTPS pages without real certs):

    wget --no-check-certificate --referer "http://google.com" "https://projectnets.cs.st-andrews.ac.uk/sicsa/action/email/confirm"

The thousand little edge cases like this (browser #342.b with web app #564.j) are the reason we all use commercial networking sites. They have the time and willingness to fix these before release (so I don't have to find them). They also have enough existing users to make it worthwhile filling in the sign-up form.

    Still it's one more tool to use when rick-rolling academic types eh?

    [edit] Here's Elgg's source - looks like someone wasn't quite thinking straight. Bugged.

    00043 forward($_SERVER['HTTP_REFERER']);

    There's no way this was ever a good idea...

    Tuesday, March 17, 2009

    netbeans combobox

The simple solution to the broken combo-box (drop-down box) rendering in netbeans under linux (Ubuntu hardy/8.04) is to update Java. Grab the JDK 6u12 installer, install it, then update the symlink in /usr/lib/jvm:


    twak@xxx:~$ cd /usr/lib/jvm
    twak@xxx:/usr/lib/jvm$ sudo rm java-6-sun
    twak@xxx:/usr/lib/jvm$ sudo ln -s jdk1.6.0_12/ java-6-sun


    After and before:



    I'm not sure this is the official way to switch jdk, but it works. Am getting fed up with netbeans niggles now....

    Sunday, March 15, 2009

    gizmodo'd

    Thanks to Sean for pointing out that Gizmodo used a self portrait of me in a set of images. Must have been a slow news day, so they scraped the flickr group for creative commons content....

    2009-03-15_2131

    from here:

    tourist3

    Thursday, March 12, 2009

    this is the hardest thing I've done

    hardest thing

    and this is the second time I've done it. Sorry this is another rant about PhDs!

I've spent a lot of this week working on the problem of 'horizontal bisectors in the straight skeleton'. The solution (I'll publish it here when I'm sure it works) was stupidly simple, but it's just one of a thousand snarls hit while implementing what I thought was a fairly simple routine. It means that this term has effectively been a crash course in implementing computational geometry algorithms. The thing I feel guilty about is the small amount of output - the total code is less than a day's output from when I was employed as a code monkey last year.

I've built this algorithm before, for Sity, but a lot of the theory was lost on me. It's a bit of a legendary quest that I want to finish. I should understand this implementation well enough to really fix (rather than patch) any problems that occur, and extend it in bizarre and interesting ways. But the best thing is - I haven't been bored since I started.

I chose my own subject, so it's up to me to get my supervisors interested in it rather than vice versa. There's little related work going on at Glasgow (or anywhere in the UK, really). The thing is, this is what I secretly always wanted (being a bit of a loner at heart), but keeping motivated is going to be a challenge.

I guess you build up tools that keep you going - conferences, daily/weekly routines and presentations. But I think there is very much another skill in keeping yourself interested. I've noticed that a lot of the older generation, who studied for PhDs years ago, will motivate themselves to thoroughly research an area with (what looks like) zero effort. I guess it's an acquired skill. It means paying great attention to your state of mind: when you're energised you're ready to tackle hard work, or break into pastures new; if you're tired it's important to have a juicy area to work on that's very involving and provides effortless motivation (for me this is programming).

It's much like tending a flame... just add wood to keep it going when there's a good bed of ashes, and pour on the petrol when it's about to go out. Perhaps silly metaphors are best not dwelt on, and attributed to the bleak Glaswegian winter.

    Wednesday, March 04, 2009

    blender animated obj import

I needed animated mesh import for blender, and .obj is always an easy format to work with, so here's a modification to the blender import script that loads a sequence of .obj's from a directory and animates them together in a new scene in the current blender file. All the meshes get dumped onto layer 1, and each is moved onto layer 2 for one frame only. Result!:

    that straight skeleton again



The straight skeleton is an algorithm that takes the floorplan (bird's eye view) of a house and produces a roof shape. Each edge of the polygon is moved inwards in the direction of its normal, tracing the intersections between the edges (gutters and crests) as they shrink in. Raising Roofs, Crashing Cycles, and Playing Pool (doi | web | Eppstein, Erickson | 1999) is an excellent introduction to the skeleton, even if the O(complexity) bits can be ignored on a first reading. For those looking for a decent implementation of the unweighted skeleton, I point y'all to the masters of robust implementations at CGAL. [edit:] My floating point implementation of the weighted skeleton is here.
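For anyone who hasn't met it, one shrink step boils down to pushing each edge inwards along its (possibly weighted) normal and re-intersecting neighbouring offset lines. A hedged sketch of just that step - mine, not the linked implementation, and with all the hard event handling (edges vanishing, splits) deliberately missing:

// One shrink step of the (weighted) straight skeleton: offset each edge's support line
// inwards by weight * dt, then each new vertex is the intersection of the two offset
// lines meeting at it. Events are not handled, so this is only valid for a small dt.
public class ShrinkStep {

    // polygon as x[i], y[i] in counter-clockwise order; w[i] scales the speed of the
    // edge running from vertex i to vertex i+1
    static double[][] shrink(double[] x, double[] y, double[] w, double dt) {
        int n = x.length;
        double[][] out = new double[n][2];
        for (int i = 0; i < n; i++) {
            int prev = (i + n - 1) % n;
            double[] before = offsetLine(x[prev], y[prev], x[i], y[i], w[prev] * dt);
            double[] after  = offsetLine(x[i], y[i], x[(i + 1) % n], y[(i + 1) % n], w[i] * dt);
            out[i] = intersect(before, after);
        }
        return out;
    }

    // the line nx*X + ny*Y = c of the edge (x0,y0)->(x1,y1) pushed inwards by d
    static double[] offsetLine(double x0, double y0, double x1, double y1, double d) {
        double dx = x1 - x0, dy = y1 - y0;
        double len = Math.hypot(dx, dy);
        double nx = -dy / len, ny = dx / len;   // inward normal for a CCW polygon
        return new double[] { nx, ny, nx * x0 + ny * y0 + d };
    }

    static double[] intersect(double[] p, double[] q) {
        double det = p[0] * q[1] - p[1] * q[0];   // near zero for near-parallel edges:
        return new double[] {                      // exactly where the FP trouble starts
            (p[2] * q[1] - p[1] * q[2]) / det,
            (p[0] * q[2] - p[2] * q[0]) / det
        };
    }

    public static void main(String[] args) {
        double[] x = { 0, 4, 4, 0 }, y = { 0, 0, 4, 4 }, w = { 1, 1, 1, 1 };
        for (double[] v : shrink(x, y, w, 0.5))
            System.out.printf("%.2f %.2f%n", v[0], v[1]);   // the unit-weight 4x4 square shrinks to 3x3
    }
}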

    bisectors

This data structure has always haunted my work in urban modelling. As an undergrad it took me a few weeks to understand fully and implement, and even then I kept running into nasty numerical issues. The data structure was first described in 1995, but there's been a fair bit of literature since on its implementation and computational complexity.

The straight skeleton was first described quite recently, in A Novel Type of Skeleton for Polygons (doi | pdf | Aichholzer, Alberts, Aurenhammer, Gärtner | 1995), as an alternative to the medial axis but with straight edges. It is introduced as "that roof shape". The paper also points out that straight skeletons are unique, but as Eppstein's paper shows they are quite chaotic - small changes in the input or in the decisions made can lead to large changes in the output. There are assorted uses for it beyond the obvious roofs in procedural urban geometry - you can even morph polygons by morphing their skeletons!


This algorithm took me a long time to implement (just getting back into the swing of coding, and picking up the equivalent of a couple of graphics courses in linear algebra along the way). It was implemented to compute the weighted straight skeleton (see Eppstein's paper again) rather than just the straight skeleton. This was basically done using the method given in Straight Skeleton Implementation (pdf | Felkel, Obdrzalek | 1998); although I have several issues with that paper, I'll write about them elsewhere (maybe a bathroom wall).

My first attempt at this was for Sity (the results are here). It was a long time ago, but the algorithm gave me real trouble. I was basically using a Jordan curve algorithm to detect line intersections, and tweaking it a little so that weighted skeletons could be created. I had endless problems with collision rays escaping from their bounding boxes (I was up to four rewrites). In an orthogonal environment you very often get 45 degree bisectors targeted against square corners, which means it's a disaster if neither of the edges adjacent to the corner catches the ray. Because of the nature of floating point arithmetic, this is fairly likely when repeated a few hundred thousand times.

    wrong

The next, more recent attempt at implementation shrinks the outline (moving each vertex in the direction of its bisector) a number of times, until termination. Figuring out when to terminate is a bit of a problem - my first reaction was to union the resulting outline with itself and remove any negative areas (if the outline runs in the opposite direction to the original, its area is, by convention, considered negative). The problem with this is that several intersections can happen within one shrinkage (question mark, above), making a mess of the mesh (and so of its union). In fact, since an arbitrary number of these events can happen in one shrink step, and we need to sort them out in order, this problem is just as complicated as the original! (doh) So not such a good approach - except maybe as a time saver for the concave case:



So I went back to Felkel's bisector-collides-with-edges approach. My previous problem of escaping bisectors could be dealt with by converting the n-sided outline shape into n/2 planar slices through the completed skeleton. The following animations show a polygon with (dark) rainbow coloured edges, and the location of intersections with a slice collinear with a given edge (the one with a thick red edge). The intersections are in light rainbow colours:





    I thought this approach would be successful, so much so that I made a nice series of images explaining the details.

    don't do this


But this approach turns out to be very fiddly. As the skeleton is constructed from the floor upwards we only ever remove edges (never add them). Each time an edge is removed, two other edges come together (unless we're at a peak). This means removing an edge from each 2D projection and re-running many intersections, which is a lot of book-keeping (and it makes the already bad complexity even worse). Another complication is the need for a 2D representation of 3D faces (to represent parallel edges); this was done, but wasn't very tidy. In the end I learnt a lot about computational geometry, but it took ages (my current version took less than a day to rewrite). And the benefits of doing it this way - more watertight boundaries - are more simply dealt with in 3D.

So... next time, children, you'll need an empty washing-up liquid bottle and I'll show you how not to shoot both your feet at the same time while constructing the weighted straight skeleton.

    Tuesday, March 03, 2009

    the problem with package managers is...

...abandoned packages don't get updated very often. Hashing Tuple3d's didn't give the expected results. I was using Ubuntu's vecmath package. The hashCode() values are the same, but the equals method falls back to Object.equals, because Tuple3d only defines Tuple3d.equals(Tuple3d). So you get very strange results if you're hashing with Tuples.

    Bugged!

    The vecmath Tuple3d class doesn't define equals correctly.

The method missing (from the GPL2'd java3d vecmath) is:

public boolean equals(Object t1)
{
    try {
        Tuple3d t2 = (Tuple3d) t1;
        return (this.x == t2.x && this.y == t2.y && this.z == t2.z);
    }
    catch (ClassCastException e1)   { return false; }
    catch (NullPointerException e2) { return false; }
}

    boolean equals (Tuple3d) isn't enough!

public class NewClass
{
    public static void main(String[] args)
    {
        Vector3d v = new Vector3d(1, 2, 3);
        Point3d p = new Point3d(1, 2, 3);

        System.out.println(v.equals((Object) p));
        System.out.println(
            v.equals((Object) new Point3d(1, 2, 3)) == v.equals(p));
    }
}
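Until the package is fixed, one workaround is to never use the tuples directly as hash keys - a minimal wrapper (my own, nothing official) does the job:

import javax.vecmath.Point3d;
import javax.vecmath.Tuple3d;
import java.util.HashMap;
import java.util.Map;

// Wrap the tuple in a key class that really overrides equals(Object), instead of trusting
// the broken vecmath build.
class TupleKey {
    final double x, y, z;
    TupleKey(Tuple3d t) { x = t.x; y = t.y; z = t.z; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof TupleKey)) return false;
        TupleKey k = (TupleKey) o;
        return x == k.x && y == k.y && z == k.z;
    }

    @Override public int hashCode() {
        long h = Double.doubleToLongBits(x);
        h = 31 * h + Double.doubleToLongBits(y);
        h = 31 * h + Double.doubleToLongBits(z);
        return (int) (h ^ (h >>> 32));
    }

    public static void main(String[] args) {
        Map<TupleKey, String> names = new HashMap<>();
        names.put(new TupleKey(new Point3d(1, 2, 3)), "corner");
        // the lookup now works even though the two Point3d instances are different objects
        System.out.println(names.get(new TupleKey(new Point3d(1, 2, 3))));
    }
}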