experiencing numbers
// assignment 2 ::
representing data from physical sources in a meaningful way

// trafficAV

(cropped image from MTA webcam)

Note: this project was done in conjunction with Live Video Processing


The MTA (Metropolitan Transportation Authority) of New York City shows 15 different webcams on its site. Each webcam takes a new picture of one of 15 toll bridge/tunnel crossings about every 3 seconds and displays it on a webserver.

The idea is to display the incredibly complicated and unpredictable phenomenon of traffic as live video and sound. Listening to a person describe the traffic on the radio is almost never useful, the chief reason being that the reporter is limited by both words and time.

The goal is to create a good system for experimenting with these images, with the eventual aim (in the far future) of generating sound from them that can meaningfully impart the state of traffic on the NYC water crossings to the listener.

A person in a helicopter, limited to reporting at 10-minute intervals, can only describe the traffic conditions at one arbitrary, brief point in time. It is the equivalent of a snapshot of a rainstorm: is the rain/traffic about to let up? Is it just beginning?

More information can be encoded into music (and especially video), giving the listener an intrinsic feeling for how the traffic flows are changing over time.

Using a combination of real-time video-processing software (Cycling74's Max/MSP/Jitter) and my own Java applications, I collected thousands of images from the MTA webcams and worked on methods for turning them into video and sound in a meaningful way.



::: process :::::::::::::::::::::::::: :::::: ::::: :::: :: :


Measuring traffic is measuring change. For this assignment, I experimented with different approaches to measuring the change in traffic conditions.

First, I set up a process that downloaded webcam images every three seconds. I then broke each successive image up into a grid of 16 squares and compared each square with its equivalent in the preceding image: I subtracted the old square from the new one, and did some smoothing to get rid of artifacts from the JPEG compression. Then I counted the pixels from the old image that were different in the new image, and lightened the square if the count exceeded a certain (experimentally derived) threshold.
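For concreteness, here is a minimal Java sketch of the per-square differencing step. It is a reconstruction from the description above, not the project's actual code: it uses a simple per-pixel luminance difference in place of the full subtract-and-smooth pass, and the grid size, threshold, and class name (GridDiff) are illustrative.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Compare two successive webcam frames over a 4x4 grid (16 squares)
// and count, per square, how many pixels changed noticeably.
public class GridDiff {
    static final int GRID = 4;              // 4x4 = 16 squares
    static final int PIXEL_THRESHOLD = 30;  // per-pixel luminance change (0-255), illustrative

    public static int[][] changedPixelCounts(BufferedImage prev, BufferedImage curr) {
        int w = curr.getWidth(), h = curr.getHeight();
        int[][] counts = new int[GRID][GRID];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int diff = Math.abs(luma(prev.getRGB(x, y)) - luma(curr.getRGB(x, y)));
                if (diff > PIXEL_THRESHOLD) {
                    counts[y * GRID / h][x * GRID / w]++;  // map pixel to its grid square
                }
            }
        }
        return counts;
    }

    // Rough integer luminance of a packed ARGB pixel.
    static int luma(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return (r * 3 + g * 6 + b) / 10;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage prev = ImageIO.read(new File(args[0]));
        BufferedImage curr = ImageIO.read(new File(args[1]));
        for (int[] row : changedPixelCounts(prev, curr)) {
            for (int c : row) System.out.printf("%6d", c);
            System.out.println();
        }
    }
}

Running it on two successive frames (java GridDiff prev.jpg curr.jpg) prints the 16 per-square change counts, which is the raw signal everything below builds on.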

Pictures speak louder than words, so here is an example of what happens after two images are compared:

Sections of the image where cars appeared (or disappeared, relative to the previous image) are lighter than sections where there is relatively little change.

I tried two different methods:

Movie :: sketch1a (604KB)
Movie :: sketch1b (1.4MB)

Next, I refined things a little. I created an image that showed only changes, without the actual webcam image in the background, so that the resulting image was a grid of gray or white squares. Gray squares represented sections of the current webcam image that were relatively unchanged from the previous image, whereas white squares showed a high degree of change.
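That gray/white image can be rebuilt from the per-square counts in a few lines. Again a sketch rather than the original code; the cell size and the change-ratio cutoff are assumed values.

import java.awt.image.BufferedImage;

// Render per-square change counts (from the sketch above) as a grid of
// gray or white squares. CELL and CHANGE_RATIO are illustrative.
public class ChangeGrid {
    static final int CELL = 40;               // rendered size of each square, in pixels
    static final double CHANGE_RATIO = 0.02;  // fraction of changed pixels that counts as "busy"

    public static BufferedImage render(int[][] counts, int pixelsPerSquare) {
        int grid = counts.length;
        BufferedImage out = new BufferedImage(grid * CELL, grid * CELL, BufferedImage.TYPE_INT_RGB);
        for (int gy = 0; gy < grid; gy++) {
            for (int gx = 0; gx < grid; gx++) {
                boolean busy = counts[gy][gx] > pixelsPerSquare * CHANGE_RATIO;
                int v = busy ? 255 : 128;            // white = high change, gray = little change
                int rgb = (v << 16) | (v << 8) | v;
                for (int y = 0; y < CELL; y++)
                    for (int x = 0; x < CELL; x++)
                        out.setRGB(gx * CELL + x, gy * CELL + y, rgb);
            }
        }
        return out;
    }
}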

I then fed each gray/white image into an FM feedback synthesizer patch in Max/MSP/Jitter, and toyed with the parameters to get a sound that I felt was ambient but harsh enough to be "traffic-like."
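The synthesis itself lived in the Max/MSP patch, but the core idea of feedback FM is small enough to sketch in Java. This is a toy stand-in, not the patch: a sine oscillator whose phase is modulated by its own previous output sample. Driving the feedback amount with a square's change value is an assumed mapping; higher feedback pushes the tone toward noise, so busier squares would sound harsher.

// Toy feedback-FM oscillator: each sample's phase is modulated by the
// previous output sample. Feedback near 0 gives a pure sine; higher
// values roughen the tone toward noise ("traffic-like").
public class FeedbackFM {
    static final double SAMPLE_RATE = 44100.0;
    private double phase = 0.0, prev = 0.0;

    // freq in Hz; feedback roughly 0..1.5 (assumed range)
    public double nextSample(double freq, double feedback) {
        phase += 2 * Math.PI * freq / SAMPLE_RATE;
        prev = Math.sin(phase + feedback * prev);
        return prev;
    }
}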

Movie :: sketch1c-audio+visual (1.7MB)