Tag Archives: Processing

Your own Twitter song

CarbonFeed takes your most recent 200 tweets and turns them into a one-minute loop, a song that changes over your Twitter lifetime. Every time you tweet, you generate 0.02 g of CO2 [1]. Don't worry too much, though: listening to your one-minute song will eat up roughly 2.86 g of CO2e in electricity, servers, and embodied computer emissions [2].
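For scale, here is the back-of-envelope comparison as a static Processing sketch; the figures come straight from the sources above:

    // Back-of-envelope carbon comparison; figures from [1] and [2].
    float perTweet = 0.02;    // g CO2 generated per tweet [1]
    float perListen = 2.86;   // g CO2e per one-minute listen [2]
    int tweets = 200;         // tweets in the loop
    float tweetTotal = tweets * perTweet;   // 4.0 g
    println("Tweeting cost: " + tweetTotal + " g CO2");
    println("One listen:    " + perListen + " g CO2e");
    println("Listens to outweigh the tweets: " + ceil(tweetTotal / perListen)); // 2

In other words, roughly two listens outweigh the carbon of the 200 tweets that generated the song.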

[1] http://carbonfeed.org
[2] Mike Berners-Lee, How Bad Are Bananas?: The Carbon Footprint of Everything, Greystone Books, 2011.

simpleKinect

simpleKinect is an application for sending data from the Microsoft Kinect to any OSC-enabled application. It attempts to improve on similar software by exposing more OpenNI features and offering more user control; a minimal receiving sketch follows the feature list below.

simpleKinect Features

  • Auto-calibration.
  • Specify OSC output IP and Port in real time.
  • Send the CoM (center of mass) coordinate of every user inside the space, regardless of skeleton calibration.
  • Send skeleton data (single user), on a joint-by-joint basis, as specified by the user.
  • Manually switch between users for skeleton tracking.
  • Individually select between three joint modes (world, screen, and body) for sending data.
  • Individually determine the OSC output URL for any joint.
  • Save/load application settings.
  • Send distances between joints, in millimeters (on by default).
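Here is a minimal receiving sketch in Processing using the oscP5 library. The address pattern and port below are assumptions for illustration, since simpleKinect lets you set the output URL and port yourself:

    // Minimal OSC receiver for simpleKinect data (Processing + oscP5).
    // The address pattern "/joint/head" and port 12000 are assumptions;
    // configure simpleKinect to match whatever you choose here.
    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    float headX, headY, headZ;

    void setup() {
      size(640, 480);
      osc = new OscP5(this, 12000);   // listen where simpleKinect sends
    }

    void oscEvent(OscMessage m) {
      if (m.checkAddrPattern("/joint/head")) {
        headX = m.get(0).floatValue();  // world mode: millimeters
        headY = m.get(1).floatValue();
        headZ = m.get(2).floatValue();
      }
    }

    void draw() {
      background(0);
      // Rough sanity check: project the head position onto the window.
      ellipse(map(headX, -1000, 1000, 0, width),
              map(headY, -1000, 1000, height, 0), 20, 20);
    }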

Download simpleKinect.

simpleKinect FAQ page

Projects utilizing simpleKinect

Casting. An electronic composition for solo performer using the Microsoft Kinect and Kyma.

Treason of Images

Brad Garner of Harmonic Laboratory asked for a visual component to his choreography for the 2012 (sub)Urban Projections digital arts festival. Originally a single Processing sketch, the video was split between two projectors to fit the venue, the top of a parking lot in Eugene, OR. The work explores male stereotypes, especially in dance; the text augments these portrayals, which are so quickly placed upon the male body.

Human Chimes

Human Chimes transforms users into sounds that bounce between the other users inside the space, suggesting interaction among all participants. Participants perceive themselves and others both as transformed visual components projected onto the front wall and as sonic formulations indicating where they are. As people move, the sounds move and change to reflect the shifting personal interactions. As more users enter the space, more sounds are layered upon the existing body of sound. In this way, sound patterns, like our relationships with others, continuously evolve.

The social work dynamically tracks users' locations in real time, transcoding each participant into a sound that pans around the space according to that participant's position. Human Chimes lets users create, control, and interact with sound and visuals in real time, a multimedia experience meant to ignite our curiosity and deepen our playful attitude toward the world around us.
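The core mapping is easy to sketch in Processing with the bundled Sound library. This is only an illustration of the idea, not the installation's code; the mouse stands in for a tracked participant:

    // Position-to-pan sketch: a user's x position becomes stereo position.
    import processing.sound.*;

    SinOsc voice;

    void setup() {
      size(640, 480);
      voice = new SinOsc(this);
      voice.play();
    }

    void draw() {
      background(0);
      // The mouse stands in for one tracked participant.
      voice.pan(map(mouseX, 0, width, -1, 1));        // left .. right
      voice.freq(map(mouseY, 0, height, 880, 110));   // an arbitrary pitch mapping
      ellipse(mouseX, mouseY, 20, 20);
    }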

The work was commissioned in part by the University of Oregon and the city of Eugene, Oregon, and presented as part of the (sub)Urban Projections film festival on Nov. 9, 2011.


Graffiti

The (sub)Urban Projections Film Festival wanted to include live projection bombing in downtown Eugene, OR, and I was commissioned to create an interactive installation that lets a user paint graffiti on any projected surface. The human interface uses TouchOSC on an iPad or iPhone, which drives the graffiti software. The work was presented on each night of the (sub)Urban Projections festival (Nov. 9, 16, and 23, 2011), at the WhiteBox gallery in Portland, OR (Dec. 10, 2011), and at the second (sub)Urban Projections festival (Nov. 7, 11, and 14, 2012).
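The plumbing behind such an interface is straightforward to reproduce: TouchOSC broadcasts control values as OSC messages and a sketch paints with them. A minimal version with oscP5, assuming TouchOSC's stock "Simple" layout (whose xy-pad sends /1/xy with two floats in 0..1); this is an illustration, not the festival software:

    // Minimal TouchOSC-to-Processing paint sketch (oscP5).
    // Assumes the stock "Simple" layout; point TouchOSC at this
    // machine's IP, port 8000.
    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    float px = -1, py = -1;

    void setup() {
      size(800, 600);
      background(0);
      osc = new OscP5(this, 8000);
    }

    void oscEvent(OscMessage m) {
      if (m.checkAddrPattern("/1/xy")) {
        px = m.get(0).floatValue() * width;
        py = m.get(1).floatValue() * height;
      }
    }

    void draw() {
      if (px >= 0) {           // paint accumulates: no background() here
        noStroke();
        fill(0, 255, 128, 40); // soft, spray-like dots
        ellipse(px, py, 24, 24);
      }
    }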

Running Expressions

Running Expressions is a real-time performance composition using biofeedback and remote controllers. Written primarily in Kyma and Max/MSP, the piece captures live physiological data to create and control music within an 8-channel audio and video projection environment. The musical performance narrates a distance run and the psychological and emotional impact of the running experience.
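The shape of such a mapping, reduced to a Processing sketch: a physiological value arrives over OSC (OSCulator can route it from nearly any device) and is scaled into a musical parameter. The address, port, and ranges here are all assumptions, not the piece's actual values:

    // Hypothetical biofeedback mapping: heart rate in, tempo out.
    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    float bpmHeart = 60;   // last received heart rate
    float bpmMusic = 90;   // derived musical tempo

    void setup() {
      size(320, 120);
      osc = new OscP5(this, 9000);                  // port is an assumption
    }

    void oscEvent(OscMessage m) {
      if (m.checkAddrPattern("/bio/heartrate")) {   // hypothetical address
        bpmHeart = m.get(0).floatValue();
        // Scale a resting-to-running heart range onto a tempo range.
        bpmMusic = map(constrain(bpmHeart, 60, 180), 60, 180, 80, 160);
      }
    }

    void draw() {
      background(0);
      text(nf(bpmHeart, 0, 1) + " bpm -> tempo " + nf(bpmMusic, 0, 1), 10, 20);
    }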

+ Download documentation (.pdf) and the performance software files (Max/MSP/Jitter, OSCulator, and Processing). (.zip, 11.5 MB)

+ Download Kyma performance audio files. (.zip, 45.3 MB)

+ Download Thesis documentation separately. (.pdf, 11.2 MB)

Sonic Dog Tags

Sonic Dog Tags is a set of compositions created with programs written in Python, Max/MSP/Jitter, and Processing. The programs retrieve biographical information about fallen service members from the Department of Defense RSS feed, map that information to musical parameters, and draw complementary visual sketches, collectively forming a composition unique to each service member.
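The pipeline can be sketched in Processing alone, although the project itself splits the work across Python, Max, and Processing. The feed URL below is a placeholder and the pitch mapping is invented for illustration; only the overall flow (fetch the feed, read each item, derive musical parameters) mirrors the description above:

    // Illustrative pipeline: RSS items in, musical parameters out.
    String feedUrl = "https://example.org/releases.xml";  // placeholder URL

    void setup() {
      XML rss = loadXML(feedUrl);
      XML[] items = rss.getChild("channel").getChildren("item");
      for (XML item : items) {
        String title = item.getChild("title").getContent();
        // One possible mapping: hash the text into a MIDI pitch, C3..B4.
        int pitch = 48 + abs(title.hashCode()) % 24;
        println(title + " -> MIDI " + pitch);
      }
      exit();
    }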

Download source code examples.

The above video explains the compositional process. For videos/compositions for the individual service members, please click the links below.

Tramaine J. Billingsley, Carlos A. Benitez, Rafael Martinez Jr.

Jessica Ellis

Jarod Newlove