Wednesday, October 10, 2007

Now this is a better idea - webcams be damned, milk baths are the order of the day for object imaging on a budget.



Hmmm... Went to a test-driven development talk this evening. Quite a hostile audience, and a good presentation, but, as predicted, very academic:
  • Shooting fish in a barrel - if I fire 20 tests at this barrel and don't hit a fish, does that mean there are no fish in the barrel? Much more likely, the tests just prove that the code does what the programmer thinks it does.
  • You only test for the errors that you anticipate. These are the same ones you program for anyway. Fair enough, agile says you add tests for the errors you don't anticipate, but those have already become bugs, so TDD's "test before code" principle has failed.
  • Doubles code size. Complexity is super-linear in code size. Big minus for agile workflows.
  • Brain stack space. Adding one level of complexity pushes out other things that I'm thinking about - testing uses up stack space. When I go to someone else's function I need to understand their code and their tests before working on it.
  • Fucks encapsulation, exposes internals.
  • Good for academic, deterministic, small libraries (those that aren't hard to debug). Sucks for large problem spaces (games) and non-determinism (games, image processing, UI).
One advantage is that it is the camping sniper of defensive coding... if they break your tests, blame them! Perhaps the best thing about TDD is that you spend a lot of time thinking about your code. But perhaps then you're not hiring the right people.

I don't think it's a coincidence that TDD's platform is Java. Java is designed to make it harder for programmers to write bad code. TDD makes it even harder... but perhaps you want to write good code?

One thing that did come out of the evening was mock classes - super-test-friendly prototypes. They make it even harder to write bad code.
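For the record, the mock-class idea can be sketched in a few lines; everything here (the `MockFrameGrabber` name, the frame-averaging function) is made up for illustration, not from the talk - the mock stands in for a real collaborator and records what the code under test asked of it.

```python
# A minimal hand-rolled mock: pretend to be a camera, return canned
# frames, and log every call for later assertions.
class MockFrameGrabber:
    def __init__(self, frames):
        self.frames = list(frames)
        self.calls = []            # record of every grab() made

    def grab(self):
        self.calls.append("grab")
        return self.frames.pop(0)

def average_brightness(grabber, n):
    """Code under test: average n frames from any grabber-like object."""
    return sum(grabber.grab() for _ in range(n)) / n

mock = MockFrameGrabber([10, 20, 30])
assert average_brightness(mock, 3) == 20
assert mock.calls == ["grab", "grab", "grab"]
```

The test never needs a real webcam, which is exactly why mocks appeal for the hard-to-debug, non-deterministic stuff above.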

Monday, October 08, 2007


Something really strange going on here. Using half-screen-sized chessboard squares and still taking the difference I get this - some highlights, but in other places where it should be specular, such as the end of the nose - nothing! I think it's because the screen isn't being used 50/50 black and white.

--edit--
The main problem is that the roughness of the surface of some bits, like the end of the nose, is enough to give no difference between black-on-white and white-on-black screens. The highlights around the nose are caused by the difference between shadows and non-shadows. I think this can be improved by having only one black square vs a white screen. Then we check only for positive differences.
--edit---
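As a sketch of that positive-differences idea (the frame values below are invented, not measured): subtract the black-square frame from the white-screen frame and clip negatives to zero, so surfaces that respond equally to both phases drop out.

```python
import numpy as np

# Toy 2x2 "frames"; real ones would come from the webcam.
white_frame = np.array([[200, 180], [100, 100]], dtype=np.int16)  # all-white screen
black_frame = np.array([[ 90, 185], [100, 100]], dtype=np.int16)  # one black square

diff = white_frame - black_frame
specular = np.clip(diff, 0, None)   # keep only positive differences
# pixel (0,0) swings by 110 -> specular; the rest barely change -> 0
```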


But a 10-minute average shows that the idea is right (striping still there - something wrong with the randomization).
Why is it picking up the edge of the disk label and the outline of the (solid) text on the bottle? This is a high-contrast area (white label on black disk), so perhaps it's natural given the difference algorithm used. Perhaps the next stage is to run an edge-detection algorithm on the fully-lit image to identify high-contrast areas, and subtract these from the results.
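That edge-subtraction stage might look like this - a crude gradient-based edge detector (I've used a simple central-difference mask here; the images and threshold are illustrative), with the flagged pixels zeroed out of the specular result.

```python
import numpy as np

def edge_mask(img, thresh=100):
    """Boolean mask of high-contrast pixels in the fully-lit image."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    return np.hypot(gx, gy) > thresh

lit = np.array([[0, 0, 255, 255]] * 4, dtype=np.uint8)   # sharp label edge
specular = np.full((4, 4), 50.0)                         # fake specular result

cleaned = np.where(edge_mask(lit), 0.0, specular)        # zero out edge pixels
```

The cost is that genuinely specular pixels sitting on a label edge get thrown away too.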

Am really glad that the background came out properly black.

Also note the limited range of angles this works for on the deodorant bottle (~PI/4), but this should be plenty for a shiny nose and reflective eyes when imaging faces.

Repeating lots of times seems to be a sufficient substitute for a quality webcam. What if I got a really good webcam and repeated it lots...?
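There's a reason repetition works: averaging N noisy frames shrinks the noise standard deviation by sqrt(N). A quick simulation (the noise level is a made-up stand-in for a naff webcam's sensor):

```python
import numpy as np

rng = np.random.default_rng(0)
true_pixel = 100.0
noise_sigma = 20.0          # pretend sensor noise

for n in (1, 100, 10000):
    frames = true_pixel + rng.normal(0, noise_sigma, size=n)
    print(n, round(frames.mean(), 2))   # mean homes in on 100 as n grows
```

So 10,000 cheap frames buy you a 100x noise reduction - which is why the 10-minute average above looks so much cleaner.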
Right, new day, new mad idea: using a naff webcam and a big ol' TFT to capture normal maps and specular maps for a face. Some academic people have tried this using big illumination rigs and monitors before (but seemed to be working at it too hard).

First up, left/right normals. By strobing a white line across the screen and compiling the pixels that light up the most, you can build up a convincing normal map (black -> white = left to right):

The graininess down the left-hand side of the image is the light reflecting from a white wall next to the monitor :(. This took several passes, blending the results.
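The "compile the pixels that light up the most" step can be sketched as an argmax over the frame stack - for each camera pixel, record which line position made it brightest, then normalise that position to 0..1 for the left/right channel. The tiny frame stack below is synthetic, not real capture data.

```python
import numpy as np

num_positions = 8           # how many line positions we strobe through
h, w = 2, 3                 # toy camera resolution
stack = np.zeros((num_positions, h, w))

# Pretend pixel (0,0) lights up when the line is at position 2, etc.
stack[2, 0, 0] = 255
stack[5, 0, 1] = 255
stack[7, 1, 2] = 255

best = stack.argmax(axis=0)                # brightest line position per pixel
normal_x = best / (num_positions - 1)      # 0.0 = left, 1.0 = right
```

Averaging several such passes (as above) also washes out the wall-reflection graininess.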

My next attempt was at extracting specular maps. This is much trickier. Some people have tried this before with mirrored objects, but reading their papers before trying it would have been cheating.

So if you display a chessboard, inverted with every frame, and composite the difference between the maximum and minimum brightness for each pixel over a bunch of flashes, you trigger epilepsy. You also get:



(a deodorant bottle with a 20 pence piece attached)
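The composite itself is a per-pixel max-minus-min over the frame stack: shiny pixels swing hard between the two chessboard phases, matte ones barely change. A sketch with synthetic frames standing in for webcam captures:

```python
import numpy as np

frames = np.stack([
    np.array([[30, 200], [90, 90]]),   # chessboard phase A
    np.array([[30,  40], [92, 88]]),   # inverted phase B
    np.array([[31, 210], [91, 89]]),   # phase A again
]).astype(np.float64)

response = frames.max(axis=0) - frames.min(axis=0)   # big where specular
# pixel (0,1) swings 170 between phases -> specular highlight;
# the others swing by only 1-2 -> matte.
```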

Good - but on the edges between the chess squares you get low change = no specular. Solution: move the squares around in a non-linear fashion (add a translation based on a random rotation).
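One way to generate those jittered boards (the translation-from-a-random-rotation is as described above; the square size and resolution are placeholder values): regenerate the chessboard each flash with a random offset, so no pixel sits on a square boundary every time.

```python
import numpy as np

def chessboard(h, w, square, dx, dy):
    """A 0/255 chessboard of the given square size, translated by (dx, dy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return (((ys + dy) // square + (xs + dx) // square) % 2) * 255

rng = np.random.default_rng(1)
angle = rng.uniform(0, 2 * np.pi)   # random rotation...
square = 32
dx = int(square * np.cos(angle))    # ...turned into a translation
dy = int(square * np.sin(angle))
board = chessboard(240, 320, square, dx, dy)
```

Averaged over many flashes, every pixel spends roughly equal time in black and white, which should kill the dead zones on the old square edges.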

The response from this algorithm was also very nervous, and it acted as an edge detector when something was moving a little. But it did pick out the eyes and my greasy nose.


For objects that are shiny but rough (such as the coin above) you get no output, because the response is the same for both inverted and non-inverted stages. To differentiate between a matte bit of paper and shiny metal I tried displaying different sizes of squares. With 1cm - 30cm squares I got the following results.
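The sweep amounts to repeating the inverted-chessboard difference at each square size and looking at where the response curve first peaks. The toy response model below is entirely invented (real curves come from the capture rig); it just illustrates the "rougher surfaces need bigger squares before they respond" shape.

```python
def response_at(square_size, roughness):
    # Toy model: no response until squares exceed the blur that surface
    # roughness causes, then rising towards saturation.
    return max(0.0, 1.0 - roughness / square_size)

sizes_cm = [1, 2, 5, 10, 20, 30]
shiny = [response_at(s, roughness=0.5) for s in sizes_cm]   # mirror-like
rough = [response_at(s, roughness=8.0) for s in sizes_cm]   # brushed metal

# where the shiny curve first climbs past a threshold
first_peak = next(s for s, r in zip(sizes_cm, shiny) if r > 0.9)
```

Under this model the shiny surface peaks at small squares while the rough one only responds once the squares are large - matching the "when it peaks first" differentiator below.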



It looks like you get different responses for different levels of roughness. The differentiator seems to be where the response first peaks. I'm also still getting a lot of striping, so I suspect something's wrong with my averaging!