Friday, December 28, 2007

Test of Picasa - sunset on the Med

Tuesday, December 25, 2007

Here's an idea - agile movie production.

To make a film you go through a number of steps, like the waterfall model of software development. Once each step is complete you move on to the next one. You never really go back a step - once the script is written, you don't change it much; once filming is done, you don't redo it. This means that if you decide to change something (say, that the film takes place on a haunted spaceship instead of an infested submarine) you have to abandon loads of work (scrap a lot of filming), swimming upstream to make the changes.

If we apply the agile methodology to this, you would start with a simple yet complete story, get something releasable as soon as possible, and then continue to fill in the gaps. By sketching out concepts and filling in the details when we have a better idea of the context, we don't waste effort. If everything is low effort, we're not afraid to make the sweeping changes we need to make a better film.

So we could write a simple story, sketch it out, decide what we like and what sucks, fill in some details, change the story, change the lighting in the final scene, then change whodunnit before we show it to a focus group, and see what they think sucks.

Some tools that'll come along in the next few years that could make this possible:
  • Face mapping - your lead actor quits? Use a clever computer doohickey to map the original face onto another actor.
  • Fat-pipe broadband - sharing movies is time-consuming today; this'll be fixed soon.
  • Good previs software - tools like antics3d and iclone make it really fast to get some footage in the can. They still have a long way to go though; be sure to pick one that is still improving. (And yes, I work for the 'storm team.)
  • Other refactoring tools - make it easy to carry through sweeping changes: you can change the gender of a character in moviestorm in three clicks.
  • Collaborative film creation - anyone can make any edit they want as long as it is consistent. Lots of material would get left behind, but it would all add back-story and character to the piece.

Friday, November 23, 2007

Parker's Piece, 10 am today:

The ice rink runs on emos!
How long has antics3d had a forum? How long have they had a flash banner on their homepage and been hosting movies? Sounds like someone's heard about "user generated content"!

Both rivals are comparing themselves to Photoshop:

"This (Antics) looks like it could be the next Photoshop"
Lori Furie – Snr. VP Vfx Sony/Columbia
[quote from machinima for dummies, via Moviestorm's Ann]

Suddenly antics have discovered this "machinima" thing as they "develop and deliver 3D animation software that brings your ideas to life, giving your stories a new dimension through easy to use “point and click” software." - that's a change of focus from a few months ago, when ease of use wasn't on the agenda?! What do they have in store for the 3.0 release - tutorials? An interface overhaul? A jump to another OS?

Still, the two aren't in competition, but there is a narrowing gap. Antics is becoming more low-brow and moviestorm has always avoided the loftier markets. There are a lot of other fish to fry too! The previz market in Cambridge better absorb a lot of orange ;)

Not that I'm biased, but I'm sitting with two ex-antics employees (OK, a senior software monkey and a founder) at 'storm HQ, and they both like spewing war stories that make the antics people sound like something between an army of PHBs and a WTF nightmare. I almost want to go work for them to find out what it's like there! Seems like they've over- (or mis-)administrated staff ingenuity, productivity and morale into the floor. Still, antics seem to have better raw tech, even if they don't seem to know what to do with all their PhDs!

edit: Good to know that antics are in no way comparing moviestorm and antics on a regular basis:

(for the URL-impaired: someone from there was googling moviestorm antics on 23 Nov!)

Although I will never post identifying info on individuals who view this blog, that screenshot from my logs doesn't give away anything that isn't already public. Good to know that they're running IE7. Interesting point: if they didn't have their own domain range I wouldn't have known who it was.

Wednesday, November 21, 2007

Install4J, the Java installer for moviestorm, is just plain shoddy. If the file is truncated (say, due to an incomplete download caused by the wrong type of leaves on the line) one of two things will happen. If the file is truncated near the start, it gives a "corrupt download" message. If the file is truncated elsewhere (~>10Mb in), the installer starts up, displays a nice bar while it prepares its files, and looks great until it throws a "Failed to install AddOn unexpected end of ZLIB input stream." message box. Really - no check for a corrupt download in this day and age?

So I wrote a doofit to verify / validate the install4j binary before it starts copying files. Will punt it up onto sourceforge, but for now here it is.
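For flavour, here's a minimal sketch of the validate-before-install idea - not the actual doofit, just a hypothetical checksum check in Java (class and method names are mine):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

// Hypothetical sketch: verify a download against a published checksum
// before letting an installer near it. The real doofit inspects the
// install4j binary itself; this just shows the general principle.
public class DownloadVerifier {

    // Hash any stream of bytes; wraps checked exceptions for brevity.
    public static String sha256(InputStream in) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);   // hash the stream incrementally
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // True if the downloaded file matches the checksum published alongside it.
    public static boolean verify(String path, String expected) {
        try (FileInputStream in = new FileInputStream(path)) {
            return sha256(in).equalsIgnoreCase(expected);
        } catch (Exception e) {
            return false;   // unreadable or missing file = failed verification
        }
    }
}
```

A truncated download changes the hash, so the check fails before the installer even starts unpacking.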

edit: 'tis up on sourceforge!

Tuesday, November 06, 2007

First take of a mini-sketch I'm pondering

Thursday, November 01, 2007

Woo, finally !ill, and the new webcam (Logitech 9000) has arrived. After figuring out that it needed USB 2 to work without big compression artifacts I ran it through its paces. It was giving nice hi-res images (with quite a lot of lag - 250ms compared to the quickcam-messenger's 100ms), but most excitingly, one calibration shot came back with exaggerated normals and really showed the skin's texture fantastically (not to mention my natty knitted jumper):

Next step is to sort out some motion compensation to improve image quality over several frames. Logitech's included app that lets you replace your face with a CGI creature is cool, if flaky. This is the kind of thing that I want to reproduce (with less error) to drive puppets. An older video that is just plain cool:

Tuesday, October 30, 2007

Wow, I crashed javac. Now if only I were in a position to submit a bug report that wasn't a company secret...

An exception has occurred in the compiler (1.5.0_13). Please file a bug at the Java Developer Connection ( after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you.

BUILD FAILED (total time: 22 seconds)

I tried to reproduce this the other day with no luck. Will try again!

Wednesday, October 10, 2007

Now this is a better idea - webcams be damned, milk baths are the order of the day for object imaging on a budget.

Hmmm... Went to a test-driven development talk this evening. Quite a hostile audience and a good presentation, but, as predicted, very academic:
  • Shooting fish in a barrel - if I fire 20 tests at this barrel and don't hit a fish, then there are no fish in the barrel? A test suite is much more likely to prove that the code does what the programmer thinks it does.
  • You only test for the errors that you anticipate. These are the same ones you program for. Fair enough, agile says you add tests for the ones you don't anticipate, but those have already become bugs, so TDD's "test before code" principle has failed.
  • It doubles code size. Complexity is super-linear in code size. Big minus for agile workflows.
  • Brain stack space. Adding one level of complexity pushes out other things that I'm thinking about - testing uses up stack space. When I go to someone else's function I need to understand their code and their tests before working on it.
  • Fucks encapsulation, exposes internals.
  • Good for academic, deterministic, small libraries (those that aren't hard to debug). Sucks for large problem spaces (games) and non-determinism (games, image processing, UI).
One advantage is that it is the camping sniper of defensive coding... if someone breaks your tests, blame them! Perhaps the best thing about TDD is that you spend a lot of time thinking about your code. But perhaps then you're not hiring the right people.

I don't think it's a coincidence that TDD's platform is Java. Java is designed to make it harder for programmers to write bad code. TDD makes it even harder... but perhaps you want to write good code?

One thing that did come out of the evening was mock classes - super test-friendly prototypes. Makes it even harder to write bad code.
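A hand-rolled sketch of the idea (hypothetical names throughout): the mock stands in for an expensive collaborator and records what the code under test did to it, so the test can check behaviour without a real dependency:

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical collaborator we don't want real in tests.
interface Mailer {
    void send(String to, String body);
}

// The mock: a test-friendly stand-in that records calls
// instead of actually hitting a mail server.
class MockMailer implements Mailer {
    final List<String> sent = new ArrayList<String>();

    public void send(String to, String body) {
        sent.add(to + ": " + body);   // record the call for later inspection
    }
}

// The code under test, which only knows the interface.
class Greeter {
    private final Mailer mailer;
    Greeter(Mailer mailer) { this.mailer = mailer; }

    void greet(String who) {
        mailer.send(who, "hello " + who);
    }
}
```

The test then constructs a `Greeter` around a `MockMailer` and asserts on what landed in `sent`.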

Monday, October 08, 2007

Something really strange going on here. Using half-screen-sized chessboard squares and still taking the difference I get this - some highlights, but in other places where it should be specular, such as the end of the nose - nothing! I think it's because the screen isn't being used 50/50 black and white.

The main problem is that the roughness of the surface of some bits, like the end of the nose, is enough to give no difference between black-on-white and white-on-black screens. The highlights around the nose are caused by the difference between shadows and non-shadows. I think this can be improved by using only one square of black against a white screen, then checking for only positive differences.

But a 10 minute average shows that the idea is right (striping still there - something wrong with the randomization).
Why is it picking up the edge of the disk label and the outline of the (solid) text on the bottle? This is a high-contrast area (white label on black disk), so perhaps it's natural given the difference algorithm used. Perhaps the next stage is to use an edge-detection algorithm on the fully-lit image to identify high-contrast areas and subtract these from the results.
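That next stage might look something like this (a hypothetical sketch, not something I've tried): a crude gradient check on the fully-lit image masks out high-contrast pixels from the specular result:

```java
// Hypothetical sketch of the masking idea: find high-contrast areas in
// the fully-lit image with a simple neighbour-difference gradient check,
// then zero those pixels in the specular result so label edges don't
// masquerade as highlights.
public class EdgeMask {
    // lit and specular are greyscale images as [row][col] brightness 0..255.
    public static int[][] maskEdges(int[][] lit, int[][] specular, int threshold) {
        int h = lit.length, w = lit[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int gx = x + 1 < w ? Math.abs(lit[y][x + 1] - lit[y][x]) : 0;
                int gy = y + 1 < h ? Math.abs(lit[y + 1][x] - lit[y][x]) : 0;
                // keep the specular value only where the lit image is smooth
                out[y][x] = (gx + gy > threshold) ? 0 : specular[y][x];
            }
        }
        return out;
    }
}
```

A proper Sobel filter would be less jaggy, but even this crude version should knock out the disk-label edges.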

Am really glad that the background came out properly black.

Also note the limited angles this works for on the deodorant bottle (~PI/4), but this should be plenty for a shiny nose and reflective eyes when imaging faces.

Repeating lots of times seems to be a sufficient substitute for a quality webcam. What if I got a really good webcam and repeated it lots...?
Right, new day, new mad idea. Using a naff webcam and a big ol' TFT to capture normal maps and specular maps for a face. Some academic people have tried this before using big illumination rigs and monitors (but they seemed to be working at it too hard).

First up: left/right normals. By strobing a white line across the screen and, for each pixel, keeping the stripe position that lit it up the most, you can build up a convincing normal map (black -> white = left to right):

The graininess down the left-hand side of the image is the light reflecting from a white wall next to the monitor :(. This took several passes, blending the results.
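The strobe-and-keep-the-brightest step could be sketched like this (hypothetical code, not what I actually ran): frames[f] is the camera image captured while stripe f was on screen, and each pixel remembers which stripe lit it brightest:

```java
// Hypothetical sketch of the stripe-strobing idea: for each pixel, track
// which frame (i.e. which horizontal stripe position) produced the
// brightest response. The winning position, scaled to 0..255, becomes the
// left-to-right component of the normal map.
public class StripeNormals {
    // frames[f][y][x] = brightness of pixel (x,y) while stripe f was shown
    public static int[][] normalMap(int[][][] frames) {
        int h = frames[0].length, w = frames[0][0].length;
        int[][] best = new int[h][w];   // brightest value seen so far
        int[][] map = new int[h][w];    // grey value for the winning stripe
        for (int f = 0; f < frames.length; f++) {
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    if (frames[f][y][x] > best[y][x]) {
                        best[y][x] = frames[f][y][x];
                        map[y][x] = f * 255 / (frames.length - 1); // scale to grey
                    }
                }
            }
        }
        return map;
    }
}
```

Running the same thing with a horizontal stripe would give the up/down component, and the two together a full tangent-space map.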

My next attempt was at extracting specular maps. This is much trickier. Some people have tried this before with mirrored objects, but reading their papers before trying it would have been cheating.

So if you display a chess-board that inverts with every frame, and composite the difference between the maximum and minimum brightness of each pixel over a bunch of flashes, you trigger epilepsy. You also get:

(a deodorant bottle with a 20 pence piece attached)

Good - but on the edges between the chess squares you get low change = no specular. Solution: move the squares around in a non-linear fashion (add a translation based on a random rotation).

The response from this algorithm was also very jittery, and it acted as an edge detector when something was moving a little. But it did pick out the eyes and my greasy nose.
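The inverted-chessboard compositing itself is simple enough to sketch (again hypothetical, not the code I ran): the specular map is just the per-pixel swing between the brightest and darkest captured frames:

```java
// Hypothetical sketch of the inverted-chessboard trick: flash a pattern
// and its inverse, then composite the per-pixel difference between the
// brightest and darkest responses. Shiny spots mirror the screen, so they
// swing a lot between frames; matte spots barely change.
public class SpecularDiff {
    // frames[f][y][x] = brightness of pixel (x,y) during flash f
    public static int[][] specularMap(int[][][] frames) {
        int h = frames[0].length, w = frames[0][0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int min = 255, max = 0;
                for (int[][] frame : frames) {
                    min = Math.min(min, frame[y][x]);
                    max = Math.max(max, frame[y][x]);
                }
                out[y][x] = max - min;   // big swing = specular
            }
        }
        return out;
    }
}
```

The max-minus-min form also explains the jitter: any motion between frames shows up as a big swing, hence the accidental edge detection.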

For objects that are shiny but rough (such as the coin above) you get no output, because the response is the same for both inverted and non-inverted stages. To differentiate between a matte bit of paper and shiny metal I tried displaying different sizes of squares. With 1cm - 30cm squares I got the following results.

It looks like you get different responses for different levels of roughness. The differentiator seems to be when the response first peaks. I'm also still getting a lot of striping, so I suspect something's wrong with my averaging!

Saturday, September 29, 2007

I want haskell tetris and I want it now.

What I did to get the opengl library to work with ghc:

installed ghc using "fink install ghc"
updated Cabal using the download from here and the install instructions here, basically:
runghc Setup.hs configure
runghc Setup.hs build
sudo runghc Setup.hs install
from inside the unzipped download (note sudo on install).

Next up, opengl was installed from its home in hackage in a similar manner. I'm a little worried because of references I found saying that the opengl package in darcs is better.

This approach failed with an 'impossible error' using ghc 6.4. It seemed to have everything, but it just didn't work.

In the end I built ghc 6.6 from source grabbed from darcs (though it would have been better to get the binaries). The instructions on the ghc website are impeccable, even if the build did take 6 hours on my craptop. Then it was just a matter of ensuring that the default 'ghc' command was bound to 6.6 and not 6.4 (playing with my ~/.profile on OS X), and of rebuilding and installing OpenGL (2.1) and GLUT as above, and it worked. w00tage!

ghc --make -package GLUT Tetris.hs -o Tetris