20070920

Needles!

It was probably the first time anyone had handed this tattoo artist a copy of The Human Brain Coloring Book as reference material, but this didn't seem to be a career milestone. He liked that it was braaainnnzz, though. So did some of the other artists. They kept stopping in to hold conversations, many consisting (so far as I could tell, with my head mashed into the chair) of "Ooh, it's a braaainnnzz, eh?"

The white things are neurons; the quote is from Hilbert, the year before Gödel published his Incompleteness Theorem. (Likewise I have answers to "zomg wedding dress," "zomg AIDS," and "wtf," although the last is a bit involved.) The only unfortunate thing at this point is the total failure of such an endeavor to generate a dinner story. Ah well. Perhaps indirectly.


20070914

Conformal maps on photography

I just found a Flickr set with some cool examples of applying conformal maps to photography.


20070911

Prototype II







Flame-like fractals, composed mostly of integral powers and assorted trigonometric maps.


20070909

Fractal neurofeedback

There's an article on the Mind Hacks blog that overlaps heavily with the kind of stuff we talk about here. Their output looks Sheep-ish; since my realtime Electric Sheep renderer is working now, maybe I'll build an OpenEEG box and bang out an open source alternative.


20070903

More screenshots

Here's the latest.

Edit: I've added some more screenshots to the gallery, with tasty vertical symmetry imposed by mirroring.


Here I'm trying out some different maps, and also incorporating a camera feed, which is what gives it the fuzzier, more organic look. The geometric patterns with n-way radial symmetry come from z' = z*c, which gives simple scaling and rotation. The squished circles come from z' = sin(real(p) + t) + i*sin(imag(p)), where p = z^2 + c and t is a real parameter.
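
In case the formulas are hard to picture, here's a toy Python/NumPy version of those two maps. The parameter values and grid size are made up for illustration; the real versions run as OpenGL code on the GPU.

    import numpy as np

    # Made-up parameter values; the real ones are driven interactively.
    c = 0.8 * np.exp(1j * 2 * np.pi / 5)   # rotate by a fifth of a turn and scale by 0.8
    t = 0.3                                 # real parameter in the sin map

    def scale_rotate(z):
        """z' = z*c: simple scaling and rotation, giving the n-way radial patterns."""
        return z * c

    def squished_circles(z):
        """z' = sin(real(p) + t) + i*sin(imag(p)), where p = z^2 + c."""
        p = z * z + c
        return np.sin(p.real + t) + 1j * np.sin(p.imag)

    # A grid of sample points standing in for texture coordinates.
    xs = np.linspace(-2.0, 2.0, 512)
    z = xs[None, :] + 1j * xs[:, None]
    lookups_radial = scale_rotate(z)        # where each pixel samples the previous frame
    lookups_squished = squished_circles(z)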


20070901

More fractal video feedback

I've been working on a new implementation of the fractal video feedback idea. Unlike the previous attempts, the code is nice and modular, so complicated bits of OpenGL hackery get encapsulated in an object with a simple interface. It's still very much a work in progress, but I thought I'd share some results now. Feedback (no pun intended) is very much appreciated.

Video:

Shoving the video through the YouTubes kills the quality. I have some higher quality screenshots in a Flickr gallery. Some of my favorites:




The basic idea is the same as Perceptron: take the previous frame, map it through some complex function, draw stuff on top, repeat. In this case, the "stuff on top" consists of a colored border around the buffer that changes hue, plus some moving polygons that can be inserted by the user (which aren't used in the video, but are in some of the stills). In these examples, the map is a convex combination of complex functions; in the video it's z' = a*log(z)*c + (1-a)*(z^2+c). Here z is the point being rendered, z' is the point in the previous frame where we get its color, c is a complex parameter, and a is a real parameter between 0 and 1.
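
Here's a rough CPU-side sketch of one pass of that loop, in Python/NumPy. The actual lookup happens as a texture fetch on the GPU, and the names and coordinate range below are only for illustration:

    import numpy as np

    def video_map(z, c, a):
        """The map used in the video: z' = a*log(z)*c + (1-a)*(z^2 + c)."""
        return a * np.log(z) * c + (1.0 - a) * (z * z + c)

    def feedback_step(prev_frame, c, a, extent=2.0):
        """One feedback pass: each output pixel samples the previous frame at z'.

        prev_frame is an (H, W, 3) array covering [-extent, extent]^2 in the
        complex plane. In the real renderer, the hue-cycling border and any
        user-placed polygons get drawn on top of the result, and the colors
        are inverted, before it becomes the next prev_frame.
        """
        h, w, _ = prev_frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        z = ((xs / (w - 1)) * 2 - 1) * extent + (((ys / (h - 1)) * 2 - 1) * extent) * 1j
        z = np.where(z == 0, 1e-9 + 0j, z)   # avoid log(0) at the exact center
        zp = video_map(z, c, a)
        # Back to pixel coordinates, clamped so out-of-range points hit the border.
        u = np.clip((zp.real / extent + 1) / 2 * (w - 1), 0, w - 1).astype(int)
        v = np.clip((zp.imag / extent + 1) / 2 * (h - 1), 0, h - 1).astype(int)
        return prev_frame[v, u]

Calling feedback_step in a loop, with c and a animated, is essentially the whole effect minus the drawing-on-top and the GPU plumbing.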

There are two modes: interactive and animated. In interactive mode, c and a are controlled with a joystick (which makes it feel like a flight simulator on acid). The user can also place control points in this (c,a) space. In animated mode, the parameters move smoothly between these control points along a Catmull-Rom spline, which produces a nice C1 continuous curve.
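
The spline evaluation itself is tiny; here's a sketch in Python/NumPy with made-up control points (the real ones are whatever the user has dropped in (c,a) space):

    import numpy as np

    def catmull_rom(p0, p1, p2, p3, s):
        """Catmull-Rom point between p1 and p2 at s in [0, 1].

        The curve passes through every control point, and adjacent segments
        share tangents, which is where the C1 continuity comes from.
        """
        s2, s3 = s * s, s * s * s
        return 0.5 * (2 * p1
                      + (p2 - p0) * s
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * s2
                      + (3 * p1 - p0 - 3 * p2 + p3) * s3)

    # Hypothetical control points, each packed as (Re(c), Im(c), a).
    pts = np.array([[ 0.30,  0.50, 0.20],
                    [-0.40,  0.60, 0.80],
                    [-0.70, -0.20, 0.50],
                    [ 0.10, -0.60, 0.90]])

    # Animate one segment, from pts[1] to pts[2].
    for s in np.linspace(0.0, 1.0, 10):
        re_c, im_c, a = catmull_rom(pts[0], pts[1], pts[2], pts[3], s)
        c = complex(re_c, im_c)
        # ...feed c and a into the feedback map for this frame...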

The feedback loop is rendered offscreen at 4096x4096 pixels. Since colors are inverted every time through the loop, only every other frame is drawn to the screen, to make it somewhat less seizuretastic. At this resolution, the system has 48MB of state (4096 x 4096 pixels at 3 bytes each). On my GeForce 8800GTS I can get about 100 FPS in this loop; by a conservative estimate of the operations involved, this is about 60 GFLOPS. I bow before NVIDIA. Now if only I had one of these...