experiencing numbers 2003

// week1

sketch 1.2


assignment 1:
iraqbodycount

Create a static or interactive image that embodies the provided data set, a tally of civilians killed during the War in Iraq. The data come from www.iraqbodycount.net.


sketch1.1 ::: sketch1.2

// week2

sketch 1.4


iraqbodycount cont'd

My first sketch, showing the difference between the maximum and minimum reported body counts graphed upside-down, proved to be more powerful than the 3D data-set version. So I riffed on that to make it more informative at a glance.
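
Roughly, the idea can be sketched in Python with matplotlib; the CSV file name and column names here are hypothetical stand-ins for the actual iraqbodycount data:

import csv
import matplotlib.pyplot as plt

# Hypothetical file and column names; the real data set may be laid out differently.
mins, maxs = [], []
with open("iraqbodycount.csv") as f:
    for row in csv.DictReader(f):
        mins.append(int(row["reported_min"]))
        maxs.append(int(row["reported_max"]))

spread = [hi - lo for lo, hi in zip(mins, maxs)]   # uncertainty in each report

fig, ax = plt.subplots()
ax.bar(range(len(spread)), spread, color="black")
ax.invert_yaxis()                                  # graph the spread upside-down
ax.set_xlabel("incident")
ax.set_ylabel("max - min reported deaths")
plt.show()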

sketch1.3 ::: sketch1.4

// week3

iraqbodycount :: finished

Small but meaningful changes. Now displays more dimensions of data in an uncluttered but informative format.

sketch1.5

// week4
no class

// week5

Sketching Assignment #2: Displaying Physical Information.

Thinking about taking all the live webcam feeds from the MTA and turning them into a sonic stream, so that you can listen to a webcast for a few minutes and get a very accurate sense of the traffic conditions in NYC. Could replace those crappy radio traffic broadcasts.

// week6

:: TrafficAV ::

(working title)

Finishing assignment #2. Creating a Max/MSP/Jitter patch that represents change in the traffic patterns at the NYC river crossings (taken from live MTA webcams) as live music/video.

// week7+8+9

For my first experiments, I looked for changes between successive webcam images. Assuming that changes in the image corresponded to the state of the traffic (heavy and congested, or light and fast-flowing), I devised a system where I could visually compare those changes and adjust the threshold above which a change would register.
I subtracted the pixel values of the newest webcam frame from the frame before it, leaving areas of black and near-black where the images were very similar. Then I divided that difference image into a 4x4 grid of squares, where each square was white if its average pixel value was above a certain threshold (set experimentally), and black otherwise. That grid of white and black squares was overlaid on the original image, so that changed areas showed up as very light while the rest stayed darker.
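
That differencing-and-thresholding step was done in Jitter, but it looks roughly like this numpy sketch (grayscale frames assumed; the threshold value here is an arbitrary placeholder for the experimentally tuned one):

import numpy as np

def change_grid(prev_frame, curr_frame, grid=4, thresh=12.0):
    # Absolute per-pixel difference between successive frames:
    # near zero wherever the images are very similar.
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    h, w = h - h % grid, w - w % grid            # crop so the grid divides evenly
    cells = diff[:h, :w].reshape(grid, h // grid, grid, w // grid)
    # Mean difference inside each of the grid x grid cells.
    means = cells.mean(axis=(1, 3))
    # True where the average change exceeds the threshold.
    return means > thresh

Each True cell corresponds to a square that would be drawn white over the original frame.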

Problems with this approach:

1. A 4x4 grid couldn’t accurately portray changes in the webcam images; the lanes of traffic didn’t fit into the grid well enough to capture the paths of the cars.
2. The white overlay didn’t fit conceptually. People didn’t feel that “lighter” corresponded to “change” and “darker” to “unchanged.” A better conceptual coloring would be the reverse, so that the lighter elements faded into the background.
3. The overlay, while helpful for me (in debugging the project), wasn’t helpful to other people. The biggest criticism was that it presented too much information to viewers, who were already naturally aware of basic changes in the traffic patterns thanks to their own far more sophisticated visual system.
4. While changes from frame to frame were shown, in a rudimentary sense, there was no other sense of change over time that the viewer might find useful. Having a great natural sense of what was changing from frame to frame already, viewers were more interested in seeing changes over longer periods of time.


The system also included an audio synthesis portion that generated sound based on those changed squares in the grid. Conceptually, the audio should build up and thin out based on the density of traffic over time, so that the user gains some awareness of the past traffic conditions through the current state of the sounds produced. The program read the grid like a sonogram, from top to bottom, feeding the white or black (on/off) values into a feedback FM synthesis patch in Max/MSP. Sounds generated by the patch would change over time due to its chaotic nature.
The audio was interesting, but it wasn’t clear whether the sounds were actually describing the traffic conditions or merely making noise over time. The biggest obstacle was the complexity of the sound-generating patch and its handful of simple but overly powerful controls, each of which altered the current sound simply by the rate at which it was changed.
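
The actual patch lived in Max/MSP, but the sonogram-style reading can be sketched in Python; a plain two-operator FM tone stands in here for the feedback FM voice, and the base frequency and step duration are made-up placeholders:

import numpy as np

SR = 44100

def fm_tone(carrier, dur, ratio=1.5, index=2.0, sr=SR):
    # Simple two-operator FM; the real patch used feedback FM.
    t = np.arange(int(sr * dur)) / sr
    modulator = index * np.sin(2 * np.pi * carrier * ratio * t)
    return np.sin(2 * np.pi * carrier * t + modulator)

def sonify_grid(grid, base_freq=220.0, step_dur=0.25):
    # Read the grid like a sonogram, one row at a time from top to bottom;
    # every "on" (white) cell in a row adds a voice whose pitch depends on the row index.
    steps = []
    for r, row in enumerate(grid):
        voices = [fm_tone(base_freq * (r + 1), step_dur) for on in row if on]
        if voices:
            steps.append(sum(voices) / len(voices))
        else:
            steps.append(np.zeros(int(SR * step_dur)))
    return np.concatenate(steps)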

// week10
Experimenting with different audio schemes:

one
two
three
four
five
// week11
testmovie 2D
testsound nurbs1
testsound nurbs2
testsound nurbs3
trafficbleeper

Understanding that I needed to show changes in the system over longer periods of time, I experimented with the Computer Vision for Jitter (cv.jit) motion tracking and analysis tools for Max/MSP/Jitter.

In a perfect world, I would identify each car in the stream of traffic and track its motion past the webcam, displaying it as a streak to give the observer a sense of the car’s movement through time and space.

Unfortunately, this was not easy to do. cv.jit provides some tools for tracking objects and for creating “mean” images that are time-based averages of frames of video, but none of them proved successful at tracking the erratic movements of cars across the webcam images, each of which was low quality and captured at three-second intervals.
Trying to separate out the background was moderately successful when I took the average frame and the standard deviation of each pixel over a few hours and compared them to the current image. I wasn’t able to identify cars as independent objects or determine their size, but I was able to produce a black-and-white image that represented the outlines of traffic fairly well: I subtracted the mean background image plus 2.23 times the standard deviation from the current frame, then applied a binary edge filter to the result (so that, by chance alone, fewer than 5% of true background pixels should register as change).
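
In numpy terms, that background-subtraction step looked roughly like the sketch below; a Sobel gradient stands in for cv.jit's binary edge filter, and the edge threshold is arbitrary:

import numpy as np
from scipy import ndimage

def traffic_outline(history, current, k=2.23, edge_thresh=0.5):
    # history: stack of grayscale frames collected over a few hours, shape (n, h, w).
    # current: the newest grayscale frame, shape (h, w).
    mean_bg = history.mean(axis=0)
    std_bg = history.std(axis=0)
    # Foreground = pixels that rise more than k standard deviations
    # above the per-pixel background mean.
    foreground = current.astype(float) > mean_bg + k * std_bg
    # Rough binary edge image approximating the outlines of traffic.
    fg = foreground.astype(float)
    gx = ndimage.sobel(fg, axis=0)
    gy = ndimage.sobel(fg, axis=1)
    return np.hypot(gx, gy) > edge_thresh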

The audio concept was also refined. I realized that “playing” the image like a sonogram, reading it from top to bottom (as well as left to right), was ignoring the binocular quality of vision. To translate that into audio, I split the webcam images in half vertically, so that each half could be processed like a sonogram for the corresponding left or right ear.
For the actual audio generator, I used a collection of oscillators working off of a waveshaping object in Max/MSP/Jitter. Now the image was scaled down and each row in it was assigned its own oscillator. The frequency of each oscillator was drawn from its position in the image relative to a sound file, so that the brightness of the pixels determined the volume and pitch of the sounds produced.
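
A much-reduced Python stand-in for that oscillator bank; here each row's brightness simply sets its oscillator's amplitude and the row index sets its frequency, rather than indexing into a sound file through a waveshaper as the Max patch did:

import numpy as np

SR = 44100

def half_to_channel(half, dur=1.0, base_freq=110.0, sr=SR):
    # half: small grayscale image (rows x cols), values 0-255.
    # Each row drives one oscillator; row brightness sets its amplitude.
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for r, row in enumerate(half):
        amp = row.mean() / 255.0
        out += amp * np.sin(2 * np.pi * base_freq * (r + 1) * t)
    return out / max(len(half), 1)

def stereo_from_image(image, **kw):
    # Split the webcam frame vertically: left half -> left ear, right half -> right ear.
    mid = image.shape[1] // 2
    left = half_to_channel(image[:, :mid], **kw)
    right = half_to_channel(image[:, mid:], **kw)
    return np.stack([left, right], axis=1)   # (samples, 2)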

// week12

I used a sound clip from Jimi Hendrix's "Crosstown Traffic" and played pieces of it based on how much movement there was in the traffic patterns. The result was very tight conceptually, but I can't say that I was pleased with the overall sound.
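
One way to sketch that mapping in Python: a motion value between 0 and 1 chooses which slice of the clip gets played. The file name, grain length, and the exact mapping are all placeholders here, not the patch's actual logic:

import numpy as np
import soundfile as sf   # assumes the clip is available as a local WAV file

clip, sr = sf.read("crosstown_traffic.wav")

def grain_for_motion(motion, grain_ms=250):
    # motion in [0, 1]: more movement in the traffic -> a later slice of the song.
    grain_len = int(sr * grain_ms / 1000)
    start = int(np.clip(motion, 0.0, 1.0) * (len(clip) - grain_len))
    return clip[start:start + grain_len]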

// week13

Discovering a new filter in Jitter that repositions the image based on another image, I tried another approach that "melts" the image based on just the changes between successive frames of traffic. The result was visually very cool - watching the frame of video "melt" out along the lines of change into the rest of the frame really brought the minute visual fluctuations to life.
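
The Jitter object did the real work, but the basic idea (re-sampling pixel positions using the frame-to-frame difference as a displacement map) can be approximated like this; the displacement strength is arbitrary and the frames are assumed grayscale:

import numpy as np
from scipy import ndimage

def melt(prev_frame, curr_frame, strength=5.0):
    # Use the frame difference as a displacement map: pixels are pulled along
    # wherever the image is changing, leaving a smeared "melt".
    diff = (curr_frame.astype(float) - prev_frame.astype(float)) / 255.0
    h, w = curr_frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = [ys + strength * diff, xs + strength * diff]
    return ndimage.map_coordinates(curr_frame.astype(float), coords,
                                   order=1, mode="nearest")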

// week14

I've folded this project into Controller Design and Live Video Processing. I came to the conclusion that statistical tools like standard deviation, covariance, and averages are most useful when interpreted by a human. We're so good at pattern-matching and interpreting data that, in retrospect, it seems almost silly to try to freeze my interpretation of a stream of data or an event into a static (or semi-static) web page, image, or chart when I'm ready and available to analyze the data in person, in real time.

I'd been working with the traffic as a video source in Live Video, manipulating it using my Grab Pipe from Controller Design, so the natural course of action would be to combine them into something meta-meaningful for Experiencing Numbers.