CarbonFeed takes your most recent 200 tweets and turns them into a one-minute loop, a song that changes over your Twitter lifetime. Every tweet you send generates about 0.02 g of CO2 [1]. Don’t worry too much, though: listening to your one-minute song will consume roughly 2.86 g of CO2e in electricity, servers, and embodied computer emissions [2].
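For scale, a quick back-of-the-envelope comparison of the two figures above (the script only restates the cited numbers; nothing else is measured):

```python
# Back-of-the-envelope comparison of the two figures cited above.
TWEET_CO2_G = 0.02     # grams of CO2 per tweet [1]
LISTEN_CO2E_G = 2.86   # grams of CO2e per one-minute listen [2]

feed = 200 * TWEET_CO2_G                 # the 200 tweets CarbonFeed samples
print(f"200 tweets: {feed:.2f} g CO2")   # 4.00 g
print(f"one listen: {LISTEN_CO2E_G:.2f} g CO2e")
print(f"listens per feed: {feed / LISTEN_CO2E_G:.2f}")  # ~1.40
```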
jpb.mod is a Max 6 package of ready-made data-modification modules covering the five data-modification types: interpolate, thin, offset, scale, and smooth (“itoss”). Each module handles a one-dimensional data stream, and the package is built for rapid prototyping. You may find the jpb.mod.scale object especially helpful for non-linear scaling.
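As a conceptual stand-in for the five “itoss” operations (the real modules are Max 6 patches, so these plain-Python functions are illustrations, not ports):

```python
def interpolate(a, b, t):
    """Linear interpolation between successive values, 0 <= t <= 1."""
    return a + (b - a) * t

def thin(stream, n):
    """Keep every nth value, reducing the data rate."""
    return stream[::n]

def offset(stream, amount):
    """Shift every value by a constant."""
    return [x + amount for x in stream]

def scale(stream, in_lo, in_hi, out_lo, out_hi, exp=1.0):
    """Map one range onto another; exp != 1 gives the kind of
    non-linear curve jpb.mod.scale offers."""
    span_in, span_out = in_hi - in_lo, out_hi - out_lo
    return [out_lo + (((x - in_lo) / span_in) ** exp) * span_out
            for x in stream]

def smooth(stream, factor=0.5):
    """One-pole lowpass: each output leans toward the previous output."""
    out, prev = [], stream[0]
    for x in stream:
        prev = prev + factor * (x - prev)
        out.append(prev)
    return out
```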
The project #CarbonFeed directly challenges the popular notion that virtuality is disconnected from reality. By sonifying Twitter feeds and correlating individual tweets with a physical data visualization in public spaces, artists Jon Bellona and John Park invite viewers to hear and see the environmental cost of online behavior and its supporting physical infrastructure.
CarbonFeed works by taking in real-time tweets from Twitter users around the world. Listening for a customizable set of hashtags, the work turns the content of each incoming tweet into a real-time sonic composition. An installation-based visual counterpart, compressed air pumped through tubes of water, provides a physical manifestation of each tweet.
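The tweet-to-sound step might be sketched as follows; python-osc is a real library, but the OSC address, port, and parameter mapping are invented here for illustration and are not the piece's actual patch:

```python
# Hypothetical mapping from tweet text to synthesis parameters over OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed synth engine port

def sonify_tweet(text):
    pitch = 48 + sum(map(ord, text)) % 36    # MIDI note across 3 octaves
    duration = min(len(text) / 140.0, 1.0)   # longer tweets ring longer
    client.send_message("/carbonfeed/tweet", [pitch, duration])

sonify_tweet("hearing my feed on #CarbonFeed")
```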
simpleKinect is an application for sending data from the Microsoft Kinect to any OSC-enabled application. It attempts to improve upon similar software by exposing more OpenNI features and giving the user more control. A receive-side sketch follows the feature list below.
simpleKinect Features
Auto-calibration.
Specify OSC output IP address and port in real time.
Send CoM (Center of Mass) coordinate of all users inside the space, regardless of skeleton calibration.
Send skeleton data (single user), on a joint-by-joint basis, as specified by the user.
Manually switch between users for skeleton tracking.
Individually select between three joint modes (world, screen, and body) for sending data.
Individually determine the OSC output URL for any joint.
Save/load application settings.
Send distances between joints, in millimeters (on by default).
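A minimal receive-side sketch using the python-osc library could look like the following. The address patterns and port are placeholders, since simpleKinect lets you choose each joint's output URL and the OSC output port yourself:

```python
# Receive simpleKinect's OSC stream with python-osc.
# "/joint/*" and "/com/*" are placeholder patterns; substitute whatever
# output URLs you configured in simpleKinect.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_joint(address, *coords):
    # Joint data arrives as floats in world, screen, or body mode.
    print(address, coords)

def on_com(address, *coords):
    # Center of Mass is sent for every user, calibrated or not.
    print("CoM:", address, coords)

dispatcher = Dispatcher()
dispatcher.map("/joint/*", on_joint)
dispatcher.map("/com/*", on_com)

# Listen on the port entered as simpleKinect's OSC output.
BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher).serve_forever()
```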
San Giovanni Elemosinario is a music-for-film work that attempts to recreate a Venetian church through sound. Collaborating with architecture students studying in Venice, Italy, I received sketches of axonometric views, floor plans, column details, entrances, and other structural perspectives. Placing these sketches inside IanniX allowed cursors to trace the architectural renderings in real time. The cursors output data to Kyma, where data mappings control oscillators, harmonic resonators, and noise filters, as well as other acoustic treatments (panning, reverb, EQ, frequency shifts, etc.). While no impulse response was recorded, listening tests inside the church established a ~3-second decay time and helped shape the work's spatial reverberation.
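A stripped-down relay between the two programs might be sketched as below. The OSC addresses, ports, and argument layout are all assumptions made for illustration; the real mappings live inside the Kyma sound design:

```python
# Hypothetical relay: IanniX cursor -> rescale -> Kyma control value.
# Every address, port, and argument position here is an assumption.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

kyma = SimpleUDPClient("127.0.0.1", 8000)   # assumed Kyma OSC input

def on_cursor(address, *args):
    # Assumed layout: cursor id, then normalized x and y (0..1).
    y = float(args[2])
    # Exponential mapping: 200 Hz at y=0 up to 20 kHz at y=1.
    cutoff = 200.0 * (10.0 ** (2.0 * y))
    kyma.send_message("/filter/cutoff", cutoff)  # placeholder address

dispatcher = Dispatcher()
dispatcher.map("/cursor*", on_cursor)       # assumed IanniX message prefix
BlockingOSCUDPServer(("127.0.0.1", 57120), dispatcher).serve_forever()
```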
A huge thank you to Matthew Burtner and Anselmo Canfora, both of whom made the collaboration possible. Video/Music: Jon Bellona. Drawings: Olivia Morgan, Alex Picciano.
Brad Garner of Harmonic Laboratory asked for a visual component to his choreography for the 2012 (sub)Urban Projections digital arts festival. Originally built as a single Processing sketch, the video was split between two projectors to fit the venue, the top of a parking lot in Eugene, OR. The work explores male stereotypes, especially in dance, and the text augments these portrayals, which are often quick to be placed upon the male body.
Zero Crossing is a collaborative work by Harmonic Laboratory. The piece explores the relationships between moving bodies, real and perceived, and the line that exists at the junction of action.
Music was composed by Jon Bellona. Choreography by Brad Garner. Digital Projections by John Park. The piece was created, in part, for (sub)Urban Projections, a digital arts festival sponsored by the University of Oregon and the City of Eugene. The video is of the premiere performance. Please wear headphones to take advantage of the full audio spectrum.
Human Chimes transforms participants into sounds that bounce between other users inside the space. The sounds imply interaction with all other participants in the space. Participants perceive themselves and others both as transformed visual components projected onto the front wall and as sonic formulations indicating where they are. As people move, the sounds move and change to reflect shifting personal interactions. As more users enter the space, more sounds are layered upon the existing body. In this way, sound patterns, like our relationships with others, continuously evolve.
The work dynamically tracks users’ locations in real time, transcoding participants into sounds that pan around the space according to their positions. Human Chimes lets users create, control, and interact with sound and visuals in real time, a multimedia experience meant to ignite curiosity and deepen our playful attitude toward the world around us.
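For a concrete sense of the position-to-sound mapping, here is a generic equal-power panning sketch; it illustrates the standard technique rather than the piece's actual spatialization code:

```python
import math

def pan_gains(x):
    """Equal-power pan: x=0 is hard left, x=1 hard right.
    Combined power (L**2 + R**2) stays constant as x moves."""
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)

# A participant walking across the space, left wall to right wall.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    left, right = pan_gains(x)
    print(f"x={x:.2f}  L={left:.3f}  R={right:.3f}")
```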
The work was commissioned in part by the University of Oregon and the City of Eugene, Oregon. It was presented as part of the (sub)Urban Projections film festival on Nov. 9, 2011.