Writing an ABCs book was something I had in mind while my oldest son was growing up, and when my youngest son was born, I finally decided to make it happen. Published with IngramSpark, the book is available as a hard copy or an eBook. Below are a few pages from inside.
Challenge Song: The Cardigans “Lovefool”
Students in my Audio Recording Techniques III (Spring 2023) course at the University of Oregon had ten weeks to recreate a recorded song of their choice. They voted on reverse engineering The Cardigans’ “Lovefool” from their 1996 album, First Band on the Moon. The goal was to get the song as close as they could to the original recording. They arranged, recorded, overdubbed, mixed, mastered, and played nearly all parts! Enjoy!
Grimes AI voice sings Phoebe Bridgers’ “Kyoto”
In the spring of 2022, students in my Audio Recording Techniques III course at the University of Oregon recreated Phoebe Bridgers’ “Kyoto” from her 2020 album, Punisher, from the ground up. I used our 2022 class song as the foundation for Uberduck AI’s challenge to use Grimes’ AI voice in a creative context. I uploaded our original vocal into Uberduck’s web interface to create Grimes’ vocal track. I had to do this in chunks due to size/length limits.
I then remixed our 2022 Phoebe Bridgers’ “Kyoto” cover with AI Grimes’ singing. You can listen below
or at https://soundcloud.com/jpbellona/phoebe-bridgers-kyoto-cover-with-ai-grimes
Grimes AI courtesy of app.uberduck.ai/grimes
Phoebe Bridgers’ “Kyoto” (2020). Purchase Phoebe’s original version on Bandcamp.
Challenge Song: Phoebe Bridgers “Kyoto”
Students in my Audio Recording Techniques III (Spring 2022) course at the University of Oregon had ten weeks to recreate a recorded song of their choice. They voted on reverse engineering Phoebe Bridgers’ “Kyoto” from her 2020 album, Punisher. The goal was to get the song as close as they could to the original recording. They recorded, overdubbed, mixed, mastered, and played parts on all elements of the song. I continue to be amazed by their work. Enjoy!
Sub-Kick Mic
I had some leftover parts and speakers from sound art projects, so I decided to make a sub-kick mic.
I used a 6.5″ driver, an XLR male connector, and a few feet of audio cable, and I custom-built a mic clip for the mic using a 3/8″ to 5/8″ mic screw adapter, scrap wood, and some scrap metal.
Solder XLR pin 2 to the speaker’s positive terminal (+) and pin 3 to the negative terminal (-). Cut off the ground (other posts state one could solder it to the speaker frame). I used spade speaker connectors so I could reuse the speaker for another project if I needed to.
For the stand, I screwed old rack ears into a 2×4; the speaker’s magnet clings to the metal, and the simple ledge helps brace the speaker. For the mic clip, I screwed in a 3/8″ to 5/8″ mic screw adapter. If necessary, one can add screws to further assist the magnetic hold on the clip (see image below).
That’s it! No need to add a -20dB pad. You can use a line input if necessary or engage the pad on the mic preamp. I tested it using my DIY Lola mic pre from Hairball Audio. Audio recordings are forthcoming…
Soundscapes of Socioecological Succession
Over the summer, I produced five sound sculptures centered on fire-affected areas of the 2020 Holiday Farm Fire. The work was part of the Soundscapes of Socioecological Succession (SSS) project, funded through a Center for Environmental Futures, Andrew W. Mellon 2021 Summer Faculty Research Award from the University of Oregon.
Through field recording, local wood sourcing, and custom electronic design, the five sound sculpture prototypes were one way to generate a unique auditory experience aimed at the general public. The work was designed to unpack the sounds and scenes of wildfires in natural and human systems and to document the regenerative succession of coupled social and ecological processes.
Video 1. Sound sculpture C prototype. Burnt cedar wood and audio sourced from fire-affected area near Blue River, OR.
Socioecological systems emerge from interdependent processes through which people and nature self-organize across space and time (Gunderson and Holling, 2002). STEM-centric studies of socioecological dynamics miss literal and metaphorical connections between people and nature, which are difficult to quantify and to communicate. To address this limitation, the sound sculptures test a new approach to capture SSS as a qualitative record of collective response to catastrophic wildfire.
Like a slice of tree ring that marks age and time, the field recordings captured during visits to fire-affected areas connote a slice of succession activities. Sound recordings of the area are meant to capture multiple scenes and ecological voices, filtered through a raw material from the sites themselves.
Video 2. Sound sculpture D prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Our sonic environment is polluted by man both in its content and its reflections. This is certainly true even for field recordists who venture further and further into the wild to break free from the noise pollution of a passing airplane, a highway’s din, or even audible underground activity such as fracking (One Square Inch, 2021). Treating site-specific wood as an acoustic resonator (a filter that distorts as much as it renders sound audible) casts a shadow onto the sounds it projects. The physical material acts as a filter, slightly changing the spectrum by boosting or cutting different frequencies in the sound.
Our University of Oregon team expanded previous research by sampling the rich SSS at fire-affected sites, including soundscape field recordings, recorded interviews, and collected “hazard tree” waste material. These materials offer a document of the resiliency of the landscape and illustrate how forest disturbance can set back human-defined sustainable development goals regionally. The development of the five sound sculptures is just one means to inform the public and inspire collective action toward sustainable futures.
Video 3. Sound sculpture E prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Audio field recordings were captured during two site visits to fire-affected areas, on June 16, 2021 and July 2, 2021. The second visit was to the H.J. Andrews Experimental Forest and included an interview and tour with Mark Schulze (H.J. Andrews Experimental Forest Director). Bailey Hilgren and I used a few field recording setups, which mostly consisted of Bailey recording with a Zoom H6 using its on-board mics and me recording with a Sound Devices 633 field mixer and three mics: Sennheiser MKH 30-P48 and MKH 50-P48 microphones in mid-side configuration and a LOM Uši Pro omnidirectional microphone. The Zoom recordings were captured at 96kHz/24-bit, and the 633 recordings at 192kHz/24-bit. During the second visit, we were able to set up “tree ears,” consisting of two Uši Pro mics taped to a tree plus a LOM Geofón low-frequency microphone, which we left recording for several hours in the H.J. Andrews forest (see Figure 2). Bailey organized all the audio recordings using the Universal Category System (UCS), a public domain initiative for the classification of sound effects. While we chose not to make the 30+ GB of audio files publicly available as an archive, we have made the audio categorization spreadsheet publicly available (SSS metadata spreadsheet).
Figure 1. Field recording setup at fire-affected site.
Figure 2. “Tree ear” field recording configuration.
During the technical design phase, I asked some secondary research questions. Which audio exciters/transducers work best on non-flat, raw wood surfaces? Which exciters are the most cost-effective solution for an array of speakers? For mounting wood pieces on a wall, can I cost-effectively source sturdier materials than aluminum posts?
Figure 3. Sound sculpture prototypes depicting standoffs and speakers.
I tested a few different models: a waterproof transducer, round and square exciters, and distributed mode loudspeakers. I also tested different speaker formats: 10W 8ohm, 20W 4ohm, and 20W 8ohm. Unfortunately, exciter models at the desired power outputs (25-30W) were consistently sold out throughout the project, so I was unable to distribute testing equally across similar power outputs. From experience more than a scientific A/B test, I found that the more flexible options for attaching to wood surfaces were the Dayton Audio DAEX25Q-4 Quad Feet 25mm and the Dayton Audio DAEX32SQ-8 Square Frame 32mm Exciter, 10W 8 Ohm. Generally, I realized that in order to get decent output in both frequency response and gain, the low end of $15-20/transducer seems about right. I do not recommend anything below 10W for this type of work. Getting a stereo image was not important and would have been difficult given the size of the wooden pieces. I valued volume and minimizing visual distraction, so speakers were meant to be placed behind or under the sculptures. I doubled speakers whenever I used 10W drivers.
Figure 4. Recording a log loader moving “hazard tree” material.
Audio 1. Log loader field recording (see Figure 4)
For standoffs, I sourced variable-size stainless steel standoff screws used for mounting glass hardware, which worked extremely well on the river wood sound sculpture (Figure 5).
Figure 5. Stainless steel standoffs, 10W 8ohm speakers, and custom electronics board on sound sculpture D prototype.
I sourced audio amplifiers on sale for under $10 each, where $15 is normal pricing. The TPA3116D2 2x50W Class D stereo amplifier boards have performed well on previous projects, and finding them cheaply with an added volume control and power switch was great for fine-tuning amplification in public spaces.
Powering the amplifiers and audio boards is normally where the real cost comes in, and I was happy to learn that SparkFun’s RedBoard Arduinos can now handle upwards of 15VDC, so I went with their MP3 Player Shield and RedBoard UNO in order to split VDC power between the amplifier and the board (12V, 2A power supplies were adequate for the project and transducer wattage).
Figure 6. Custom electronics board consisting of MP3 player shield, SparkFun RedBoard Arduino (UNO), 2x50W class D amplifier, and power split for up to 15VDC.
Figure 7. Recording site near Eagle Rock along the McKenzie River.
I modified the outdated MP3 player code on the Arduino to dynamically handle any number of tracks and named audio files, so one doesn’t need to rename audio files in the convention “track001.mp3”, “track002.mp3”. Whatever audio files are uploaded onto the SD card, the filenames simply need to be placed into an array at the top of the code uploaded to the board. Thus, when powered on, the sound sculpture will play an endless loop of the uploaded audio files found on the SD card.
***For those interested in the Arduino code running on the MP3 players, I have made the code publicly accessible as a repository on Github.
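The core of the idea fits in a short sketch. Below is a minimal version (a sketch of the approach, not the repository code itself), assuming SparkFun’s SFEMP3Shield library for the MP3 Player Shield and a few hypothetical filenames:

#include <SPI.h>
#include <SdFat.h>
#include <SFEMP3Shield.h>

SdFat sd;
SFEMP3Shield MP3player;

// Place the SD card's filenames here -- no "track001.mp3" renaming required.
// (Filenames are hypothetical examples.)
const char* tracks[] = { "forest01.mp3", "loader02.mp3", "birds03.mp3" };
const uint8_t NUM_TRACKS = sizeof(tracks) / sizeof(tracks[0]);
uint8_t current = 0;

void setup() {
  sd.begin(SD_SEL, SPI_FULL_SPEED); // SD_SEL comes from the shield library
  MP3player.begin();
}

void loop() {
  // When the current track ends, start the next one: an endless loop.
  if (!MP3player.isPlaying()) {
    MP3player.playMP3((char*)tracks[current]);
    current = (current + 1) % NUM_TRACKS;
  }
}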
Figure 8. Full electronics module example: 12V 2A power supply, MP3 player shield, SparkFun RedBoard Arduino, TPA3116D2 2x50W stereo amplifier, single 10W exciter.
Video 4. Sound sculpture A prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Selecting the audio for the sound sculptures came through discussions with Bailey around ecological succession, the interviews conducted, and the types of audio that were captured and categorized. We chose four audio bins (categories) to work with: animals, soundscape or ambient, logging or construction, and scientific or interviews. Again, Bailey created a categorical spreadsheet of the audio files within these four bins.
Video 5. Sound sculpture A prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Constructing the sound sculptures involved imagining the public space and the materials. There are two pieces for the wall, one for hanging, one for a pedestal, and one for the ground. The sculptures are stand-alone pieces that simply require AC power for showing. See below for a gallery of stills of these works.
Conclusion
By activating sourced raw materials (e.g., “hazard tree” wood) with acoustic signals stemming from local sites, the sound sculptures amplify the regional and collective voice of wildfire succession, even as they output a modified version of the input sound.
The process of developing the sound sculptures led to additional ideas for iteration and for incorporating the sculptures within a larger-scale project. For example, in our interviews with Ines Moran and Mark Schulze, we learned about “acoustic loggers”: battery-operated, weatherproof audio field recorders that record audio on a timer. We ordered one such acoustic logger for the project, an AudioMoth; however, it did not arrive until after the project’s completion. Working these into the project by sampling fire-affected sites would create a unique dataset.
The sound sculptures can be stand-alone works. We appreciated the modular approach to the design; future work could continue that approach or tether sound objects together, spatializing audio across multiple sculptures similar to previous sound artworks like Wildfire and Awash.
For the sound sculptures themselves, there is gain control at the speaker level but not on the line output of the players. We could add buttons for increasing/decreasing volume on the MP3 boards to better manage levels, and if we want to provide an interactive component to the works, we could add buttons for cycling through tracks on the sound sculptures.
Listening to our environment is essential. In 2015, The United Nations Educational, Scientific, and Cultural Organization (UNESCO) formed a “Charter for Sound” to emphasize sound as a critical signifier in environmental health (LeMuet, 2017). By continuing to incorporate sonic practices (bioacoustics, sound art, field recording) into our work with the environment, we create more pathways to experiencing and understanding the planet we live on.
References / Resources
- Gunderson, L.H., and Holling, C.S., eds. 2002. Panarchy: Understanding Transformations in Human and Natural Systems. Island Press.
- Bellona, Jon. Arduino MP3 Player Code Augmentation. Github Repository. https://github.com/jpbellona/arduino-mp3-player
- One Square Inch. https://onesquareinch.org/ Accessed Apr 28, 2021.
- LeMuet, Yoan. “UNESCO hosted ‘The sound for a new urbanism’ conference.” Acoustic Bulletin, Feb 14, 2017. https://www.acousticbulletin.com/unesco-hosted-the-sound-for-a-new-urbanism-conference Accessed Apr 26, 2021.
The sound sculptures were made possible through a 2021 Center for Environmental Futures Andrew W. Mellon Summer Faculty Research Award in collaboration with Lucas Silva (ENVS).
A shout-out to Thomas Rex Beverly, from whom I got the idea of recording with the “tree ears” configuration.
Spectrogram Videos of Audio Files
This is a short article on creating video spectrograms (time-frequency plots) of audio files. The work comes from the research project Soundscapes of Socioecological Succession, funded by a Center for Environmental Futures, Andrew W. Mellon 2021 Summer Faculty Research Award.
The example in Video 1 is a spectrogram video created using Matlab. The audio is a recording of a small dynamite blast of a 70″ stump across from Eagle Rock, just past Eagle Rock Lodge on the McKenzie Hwy in Vida, OR.
Video 1. Video of spectrogram with playback barline and synchronized audio file.
I love spectrograms. I’ve worked with time-frequency plots in various ways in the past, namely spectral smoothing music (listen on Spotify), collaborative research (read the paper), and even teaching (Data Sonification course) at the University of Oregon. Yet, I am still amazed by the use of spectrograms and sound in the sciences. I knew of the theories around animals occupying various frequency spaces within a habitat from the bioacoustics work of Garth Paine and great multimedia reporting by Andreas von Bubnoff. Yet, after an interview with a UO visiting researcher, Ines Moran, as part of our Soundscapes of Socioecological Succession project, I was further intrigued by how sound, spectrograms, and AI play an integral role in her bioacoustics research on bird communication.
This led me to revisit my work with spectrograms. I was blown away by the Merlin Bird ID app’s auto spectrogram videos, and I wanted to rethink how I create my own. I’ve been frustrated with multiple software solutions for generating scrolling spectrogram videos. Not having a seamless solution other than using screen capture on iZotope RX or Audacity spectrograms, I did some more research into iAnalyse 5 (which replaces the eAnalysis software) and Cornell Lab’s RavenLite, but was unsatisfied with the movie export results. I appreciated the zoom functionality of each program but wanted auto-chunking or scrolling of the spectrogram within a high-resolution video.
I didn’t find a straightforward plug-and-play solution (although I’m open to hearing about one if you have a suggestion!). I ended up going back to Matlab to see if I could find a pre-existing library or code I could implement. I found slightly different versions, none exactly seamless. I ended up refashioning some pre-existing code written by Theodoros Giannakopoulos that generated gifs from spectrograms. See the gif in Figure 1.
Figure 1. Original Gif export using pre-existing Matlab code.
I used this code as a starting point to build out a function that exports videos of spectrograms and lets me specify the length in seconds of each window. Video 2 depicts example display output of the audio waveform and the spectrogram of a Swainson’s Thrush bird call. I synced the audio afterward in Adobe Premiere. I later removed the waveform to focus on the spectrogram, and I had to get fancy with the x-axis labels to dynamically match the length of windows, which could be any number of seconds.
Video 2. Video output on a single screen, with split waveform and spectrogram view.
While I was unable to get a scrolling spectrogram video in a single piece of software, the auto-chunking feature was quite time-saving. I simply crafted an Adobe Premiere template with a scrolling animation graphic that I can easily edit to equal the exact window length, then sync my original audio file to the movie. All within about a minute or two (see Figure 2). The final version has a nice scrolling playback bar on each page of the spectrogram video.
Figure 2. Screenshot of Adobe Premiere with line graphic that keyframe animates across the spectrogram during playback.
Video 3 displays the spectrogram complete with audio waveform, audio file, and playback barline (audio and playback barline added in Adobe Premiere).
Video 3. Video example with scrolling playback barline
Video 4 shows the final version of the code output after removing the audio waveform, resizing the graph, and updating the title. Again, adding the playback barline and synchronizing the audio were done in Adobe Premiere.
Video 4. Final version of Matlab code that generates a 1920x1080p spectrogram video the same length as the audio file.
The code gave me an easy way to label the spectrogram and embed this in the video. There are four steps.
1. Run the script in Matlab, which outputs a 1920×1080 video the same length as the audio file.
2. Drag the video into Adobe Premiere with the graphics playback bar template.
3. Drag the audio to the start to match the animation.
4. Export the 1920×1080 video.
The process for one audio file takes about 2-3 minutes from start to finish.
I could make this more dynamic by grabbing the audio file length automatically and setting the frame rate to match. For now, I simply determine how many “screens/pages” I want by editing the function variables.
***For those interested in the Matlab code, I have made it publicly accessible as a repository on Github.
References / Resources
- ***From this article. Spectrogram video of audio Matlab code as a repository on Github. https://github.com/jpbellona/spectrogram-video-from-audio
- iAnalyse5 audio analysis software that includes exporting spectrogram videos: https://apps.apple.com/us/app/id1513428589
- RavenLite is a free software program that lets users record, save, and visualize sounds as spectrograms and waveforms: http://ravensoundsoftware.com/raven-pricing/
- Another interesting use of creating spectrograms with R (warbleR) https://marce10.github.io/2016/12/12/Create_dynamic_spectro_in_R.html and scrolling spectrograms, https://marce10.github.io/dynaSpec/reference/scrolling_spectro.html#examples
Inspiration for the work came from an interview with Ines G. Moran, a visiting scholar at the University of Oregon who works in wildlife bioacoustics (website).
Random Playback
Are streaming services ready for dynamic random-order concept albums?
Random Playback is a music album that explores dynamically looped playback to generate a unique listening experience on your own device (one which never ends). The album leverages streaming technology to play back material in random order, endlessly, and aims to simultaneously test the boundaries of streaming services’ “gapless playback” feature. Just hit shuffle, repeat, and play.*
The Random Playback album consists of twelve loopable tracks of equal length that have no audio fades. The source material was generated and recorded, with permission, from playing an iOS clicker game, Rhythmcremental, created by Batta (Simon Hutchinson and Paul Turowski). In the game, one advances sonically by adding different instruments and triggers, thereby creating more rhythmic density and harmony. Do check out the game. It’s addictive.
The tracks on the album are sequenced such that one could play the album straight through, as if listening back to a selection of a single game of Rhythmcremental. Yet, by turning on “shuffle” and “repeat,” the album will loop endlessly, navigating seamlessly** between random tracks on the album to weave together a listening experience unique to your device. But don’t take my word for it. Hit shuffle, repeat, and play and check it out for yourself… (I recommend playback in the app!)
Listen on Spotify.
Listen on Amazon Music.
Listen on Apple.
*phone apps for Amazon Music and Apple services are the most seamless for shuffle playback.
UPDATE 9/21/21: While Spotify is seamless on chronological, non-shuffle playback, Spotify seems to falter when randomizing playback (confirmed by other users). Amazon Music Unlimited, however, handles randomized playback well, nearly seamlessly (from the phone app). Apple Music also has seamless shuffle playback from the phone app. That said, browser playback has terrible audio drops between songs on shuffle mode for all services.
**“Seamless” is part of the ‘testing the boundaries of streaming technology’ bit. The track-to-track gapless playback result relies quite a bit on server bandwidth and the “Gapless Playback” feature of one’s streaming service. At the time of writing, sequential playback sounds more seamless than random “gapless” playback (on Spotify). Although, this could be my own device and data bandwidth.
The process behind sound artwork Wildfire
Wildfire is a 48-foot-long speaker array that plays back a wave of fire sounds across its span at the speeds of actual wildfires. The sound art installation strives to have viewers embody the devastating spread of wildfires through an auditory experience.
Wildfire employs sound to investigate how the climate enables destructive wildfires that lead to statewide emergencies. The speed at which fires move can be mimicked in sound. By placing speakers along a surface (one every three feet across 48 feet, ~16 speakers), Wildfire implements spatialization techniques to play waves of fire sounds at the speeds of simulated models and actual wildfire events. By comparing the speed of different fires through sound spatialization, we can hear how quickly different fires move under various wildfire behaviors (fuel, topography, and weather).
Stereo audio example. Fire sound moving across stereophonic field at 16 mph.
Stereo audio example. Fire sound moving across stereophonic field at 83 mph.
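To put those two rates in perspective, the arithmetic is simple: 16 mph is about 23.5 feet/second, so the wave of fire sound crosses the 48-foot array in roughly 2 seconds, reaching a new speaker (3 feet apart) about every 0.13 seconds. At 83 mph (about 122 feet/second), the full sweep takes only ~0.4 seconds, roughly 25 milliseconds between adjacent speakers.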
Wildfire comprises sixteen 30W speakers, 120’ of speaker cable, sixteen 8” square wood mounts, sixteen 6.25”-diameter wood speaker rings, 64 aluminum speaker post mounts, eight custom electronic boards and enclosures, eight 50W power amps, one custom motherboard and enclosure, eight custom-length Ethernet cables, a custom-built power supply cable, sixteen 15V 4A power supplies, and three 9V 5A power supplies.
Eight different recordings of wildfire sound simulations are played across the 48-foot speaker array in looped playback. A narrator describes each wildfire event before the audio playback of fire sounds. Because audio on all eight stereo channels is triggered at the same time for simultaneous playback, the spatialization is ‘baked in’ to the audio files. The fire soundscapes are audio samples that were simulated in a virtual space to move at the speeds of actual wildfires and captured (read: recorded) as eight stereo audio files at the same spatial locations as the sixteen speakers in the physical world. The virtual mapping and recording process ensures little destructive interference as a result of phase shifts and time delays. I then mixed the resulting files inside Logic Pro X (see Figure below).
I am always amazed by how differently topics are defined and vocabulary is used when working across disciplines. For example, in seeking to play audio at varying rates of ‘speed,’ wildfire scientists and firefighters instead describe fires in terms of ‘rate of spread.’ Because fires are not single moving points but lines that can span miles moving in various directions all at once, speed is difficult for the field to put into practice. The term ‘spread’ and how it’s calculated serve wildfire science well, but they required me to think about how to convey destructive ‘rates of spread’ as a rate a general observer may perceive along a two-dimensional speaker array (speakers mounted along a wall).
In order to distill wildfire science down to its essential components for a gallery sound installation, I spent a lot of time speaking with wildfire scientists on the phone, emailing fire labs, estimating wildfire behavior using Rothermel’s Spread Rate model,[1][2] and working between the measured distance of ‘chains’ and miles. I am not a fire scientist; I am indebted to the help I received, but any incongruencies are my own. I compiled eight narratives that juxtapose ‘common’ rates depicted in simulated models with real wildfires that have occurred in US western states over the past ten years, organized by fire behavior (fuel, topography, and weather). These narratives are outlined in Table 1 at the bottom.
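For those unfamiliar with the unit, a chain is 66 feet, which makes the conversions in Table 1 straightforward. For example, an upper average forward rate of spread of 894 chains/hour works out to 894 × 66 = 59,004 feet/hour, or about 16.4 feet/second (roughly 11 mph); at that rate, the fire front crosses the 48-foot speaker array in 48 ÷ 16.4 ≈ 2.9 seconds (the first entry in Table 1).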
Earlier in the year, I worked with Harmonic Laboratory, the art collective I co-direct, on a 120-speaker environmental sound work called Awash.[3] The work was commissioned for the High Desert Museum in Bend, Oregon as part of the Museum’s 2019 Desert Reflections: Water Shapes the West exhibit, which ran from April 26 to Sept. 27, 2019. The 32’ x 8’ work evokes the beauty of the high desert through field recordings, timbral composition, and kinetic movement (Figure below).
The electronic technology that I implemented in Awash for playing back audio across 120 speakers influenced my design of Wildfire. The electronics in Awash work by sending a basic low-voltage signal from the Arduino Mega motherboard to ten sound FX boards across Ethernet cable, thereby triggering simultaneous playback of audio across all 120 speakers (twelve 3W speakers per board powered by a 20W amplifier circuit). The electronics in Wildfire function in the same way: a low-voltage signal from the motherboard (Arduino Mega) is sent to eight electronic MP3 boards across Ethernet cable, thereby triggering simultaneous playback of audio across all sixteen speakers (Figures below). Instead of the 3W speakers and 20W power amp boards used at the High Desert Museum, I chose to scale down the number of speakers and ramp up the wattage per board, choosing a stereo 50W power amp matched with two 30W speakers. The result is sixteen channels of audio running across eight stereo boards. And it doesn’t have to be sample-accurate!
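The motherboard’s job is almost trivially simple. Here is a minimal sketch of the trigger scheme (my reconstruction for illustration, not the project code), assuming each MP3 board starts playback on a low pulse, and assuming a hypothetical pin mapping and re-trigger interval:

// Pulse all eight trigger lines at (nearly) the same moment so the
// boards start their audio files together. Sample accuracy not required!
const uint8_t NUM_BOARDS = 8;
const uint8_t triggerPins[NUM_BOARDS] = {2, 3, 4, 5, 6, 7, 8, 9}; // hypothetical pins

void setup() {
  for (uint8_t i = 0; i < NUM_BOARDS; i++) {
    pinMode(triggerPins[i], OUTPUT);
    digitalWrite(triggerPins[i], HIGH); // idle high
  }
}

void loop() {
  for (uint8_t i = 0; i < NUM_BOARDS; i++) digitalWrite(triggerPins[i], LOW);
  delay(100); // hold the pulse long enough for every board to register it
  for (uint8_t i = 0; i < NUM_BOARDS; i++) digitalWrite(triggerPins[i], HIGH);
  delay(60000UL); // wait for the longest file to finish (hypothetical duration)
}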
For Wildfire, I built custom laser-cut acrylic enclosures for the electronic boards (Figure below) using MakeABox.io (note: I found a good list of other services here). The second element was designing and creating custom PCBs for the electronics themselves (Figure below). I used Eagle CAD software (SparkFun has a great tutorial!) and then had the boards printed by an Oregon-based manufacturer, OSH Park.
For the sixteen panel mounts and speaker rings, I sourced all the wood from my father-in-law’s woodshop; he has collected various woods over the last 50-60 years. The panels were planed, cut, and drilled on-site, and the speaker rings were cut using a drill press. The figure below depicts the raw materials after applying a basic wood varnish. The wood mounts consist of black walnut, pine, and sycamore; the speaker rings consist of alder, ebony, and myrtle.
In the build-out, I was unable to power both the power amp and the MP3 audio boards from a single power source, even with voltage regulators. A loud hum was evident when splitting the power, which led me to power the boards separately. A future work could attempt to power everything from a single source while sharing ground with the motherboard.
During install, I ran into triggering issues related to the MP3 Qwiic trigger boards. The operating voltage for each MP3 board is between 3 and 3.3V, and I ran four boards from a single 9V 5A power supply using a custom T-tap connector cable and 1117 voltage regulators, registering 3.26V along each power connection. However, upon sending a low-voltage trigger from the motherboard to the MP3 boards, I was unable to successfully trigger audio from the fourth and final board located at the end of the power supply connector cable. The problem remained consistent, even after switching modules, switching boards, testing the Ethernet data cable, and testing a different I2C communication protocol in the same configuration, among other troubleshooting tasks. When powering the final board with a different power supply (5V 2A), I was able to successfully trigger all eight electronic boards at once. It should be noted that the issue seems to have cascaded from my failure to effectively split power from a single power source per electronics module.
The minimal aesthetic was slightly hindered by the amount of data and power cables running along the floor. There is minimal noise induction with long speaker cable runs, so in my second install at SPRING|BREAK in NYC, I relied on longer speaker cable runs instead of long power and data cables. Speaker cable is cheaper than power cable, so this kept costs down, saved time dressing cables, and minimized cabling along the 48’ span, focusing attention on the speakers, wood, and audio. And, if I use the MP3 boards again, I would implement the I2C protocol and consolidate the electronics, which would save on data cabling.
Through the active listening experience of hearing the sounds of wildfires at realistic speeds, viewers are openly invited to support sustainable and resilient policies, including actions that can be taken immediately, like creating defensible space around their homes. In the face of increasingly frequent wildfires, Wildfire sonically strives to impact listeners by registering the devastation wildfires cause. Getting the public to support sustainable policies and/or individually prepare for wildfires helps make communities more resilient to the impacts of wildfires and other disaster-related phenomena caused by climate change.
The work was made possible through the University of Oregon Center for Environmental Futures and the Andrew W. Mellon Foundation. The Impact! exhibition at the Barrett Art Gallery was supported with funds from the Oregon Arts Commission. Thank you to Meg Austin for inviting me to display work at the Barrett Art Gallery, and I am indebted to Sarisha Hoogheem and Matthew Klausner for their hard work in putting the show together. Thank you to Meg Austin and Ashlie Flood for curating Wildfire at SPRING|BREAK in NYC. And kudos again to Matthew Klausner and Jay Schnitt for their hard work in putting the piece up. Thank you to my cousin John Bellona, a career Nevada firefighter, for his insight on western wildfires and contacts in the field. Thank you to Dr. Mark Finney for providing common averages of speed-related to wildfires; Dr. Kara Yedinek for sharing insights on audio frequencies from her fire research; and Sherry Leis, Jennifer Crites, Janean Creighton and the other fire specialists who helped me along the way.
Table 1: Narratives in Wildfire
Feature | Characteristics | Rate of Spread | Time across 48-foot speaker array
Surface Fire: Grass | Low dead fuel moisture content, high wind speed, level terrain | Upper average forward rate of spread, 894 chains/hour | 2.92 seconds
Yarnell Hill Fire, June 30, 2013 | 3-6% dead fuel moisture content, wind speed 15-25 mph, mixed terrain | During Granite Mountain crew deployment, 1,280 chains/hour | 2.04 seconds
Crown Fire: Forest | Low dead fuel moisture content, high wind speed, level terrain | Upper average forward rate of spread, 297.6 chains/hour | 8.7 seconds
Delta Fire, near Shasta, California, September 5, 2018 | Moisture content unknown, wind speed unknown, mixed terrain | Initial perimeter rate of spread, 16,993 sq. chains/hour | 1.54 seconds
Surface Fire: Western Grassland, Short Grass | 2% dead fuel moisture, wind speed 20 mph, level terrain | Perimeter rate of spread, 1,250 chains/hour | 2.16 seconds
Long Draw Fire, Eastern Oregon, July 12, 2012 | Moisture content unknown, wind speed unknown, mixed terrain | Average perimeter rate of spread, 61,960 sq. chains/hour | 0.422 seconds
Crown Fire: Pine and Sagebrush | 2% dead fuel moisture, wind speed 20 mph, level terrain | Perimeter rate of spread, 525 chains/hour | 4.99 seconds
Camp Fire, near Paradise, California, November 8, 2018 | Low moisture content, wind speed 50 mph, mixed terrain | Peak perimeter rate of spread, 67,000 sq. chains/hour | 0.394 seconds
[1] F. A. Albini, “Estimating wildfire behavior and effects,” United States Department of Agriculture, Forest Service, Tech. Rep., 1976.
[2] J.H. Scott and R.E. Burgan, “Standard fire behavior fuel models: A comprehensive set for use with Rothermel’s surface fire spread model,” United States Department of Agriculture, Forest Service, Tech. Rep., June 2005.
[3] J. Bellona, J. Park, and J. Schropp, “Awash,” https://harmoniclab.org/portfolio/awash/
The Art of the Cron Job
Sound art installations that require digital computing, especially projects that rely on advanced software, demand added insurance of stability in order to remain up in an unattended space for extended periods of time. For exhibitions, this time period can mean a month or more with hours that vary from business hours to a taxing 24-7. One added insurance for artists relying on computers (e.g., Mac Minis) for unattended digital works is the cron job.
A cron is “a time-based job scheduler” that runs periodically (at set time intervals) to help “maintain software environments” (footnote 1). A software utility for Unix (read: Mac), the cron automates processes and tasks, allowing the computer to be used as your personal docent: checking on installation software, updating variables as part of the work, or fixing issues as they crop up.
I got into cron jobs in 2014 while working with John Park on #Carbonfeed (URL), a multimedia installation that leverages the Twitter API to transform real-time tweets into physical bubbles in tubes of water as well as a musical composition driven by behavior on Twitter (Figure 1). The piece incorporates a custom node.js script running on a Mac mini. To anticipate power failures, and even to alter hashtag sets on the LCD screens (Figure 2), I needed a way to automate software processes and failsafes. Enter the cron job.
In #Carbonfeed, I used the cron to check if the software had crashed and automatically reboot it, and, every 8 minutes, to alter the Twitter hashtag sets on the LCDs in order to change the dynamic of the work and create new opportunities for discourse. For a how-to on the cron and cron specifics, please jump to the bottom of this article.
Since #Carbonfeed, whenever I found myself working on a sound installation that required advanced software (e.g., Processing, Max/MSP, Logic Pro X), I inevitably involved a cron. For example, in 2017, I worked with Harmonic Laboratory (URL) on a Mozilla Gigabit Foundation Grant (URL) project called City Synth, which turned the city of Eugene, OR into a musical instrument. The piece took live video feeds from Raspberry Pis (a collaboration completed by the South Eugene Robotics Team, URL) that were mangled by a Processing sketch and subsequently controlled a live synthesizer running in Logic Pro X. The work was up for a month in the Broadway Commerce Center in downtown Eugene, OR.
In 2019, my first solo exhibition at the Edith Barrett Gallery in Utica, NY (curated by Megan C. Austin and Sarisha Hogan and supported by funds from the Oregon Arts Commission) had six sound artworks running for three months. Since I was able to borrow Mac minis for the exhibition, I incorporated cron jobs and scripts to transform the Mac minis into glorified audio players for two of the works. Sound Memorial for the Veteran of the Vietnam War (URL) ran an Automator script upon startup that opened iTunes and played a playlist holding the six-hour-long work (Figure 4). I mixed the 8-channel work down to a stereo headphone mix in order to account for the bleed of other works inside the space. Relay of Memory (URL) used the same script to output computer audio to an FM transmitter, which played the work through nine radios hung on a wall (Figure 5). Cron jobs checked the status of the running software.
The cron utility has been an amazing tool for my sound installation work. I can still recall driving home after installing Aqua•litative (URL) when I received a frantic call from the curator that there was a power outage. In the middle of the call, the power came back on, the computer turned on (it was set to automatically start after a power failure), and a minute later, the cron kicked in, opening up all the software. I didn’t need to turn around and drive back or walk the curator through how to turn on the software. A happy moment.
Thanks to the cron, I have saved countless hours that I know about, and I’m sure many more that I’ll never know about. I’ve even started to implement the cron in other ways to help with basic tasks in my daily life (see below for code specifics), such that the cron has helped me get closer to what Allan Kaprow describes as the “fluid” and “indistinct” “line between art and life.” Maybe overseer of digital automatons is what a 21st-century computing artist feels like (footnote 2).
CRON
This is a walkthrough of the crontab on Mac OS using Terminal. I’ve included some code specifics by theme below. If you use them, please share your work with me and how you implemented your cron! If you like what you’ve read, sign up for my mailing list (URL), follow my music on Spotify (URL), and please share it with friends.
Setting up a cron
Googling helped me in every way possible when working with crontab, but there are three basic steps: open an editor via Terminal, add your cron code (which requires setting a time for how often it’ll run), and then save the file. For more on Terminal, here’s a beginner’s walkthrough, Apple’s user guide, and a command cheat sheet.
1. Open up an editor to add a cron via Terminal
env EDITOR=nano crontab -e
2. Inside the editor add the executable file to the cron job
* * * * * ~/Music/citysynth/cronjobs/citysynth_cron.sh
The asterisks tell how often to run the cron: minute, hour, day of month, month, day of week. Straight asterisks mean “every,” so this is a call to run the cron EVERY minute. The command after the timer runs a bash script called “citysynth_cron.sh”. The cron below runs every 5 minutes and closes any bash windows in Terminal.
*/5 * * * * osascript -e 'tell application "Terminal" to close (every window whose name contains "bash")';
3. Save and exit the cron.
Ctrl-O saves the file; Ctrl-X exits the editor. You must save the temporary file after editing. When you are done with the cron and want to remove the cron job, follow step 1 to open the editor, then delete the lines (using Ctrl-K) and save the file. For reference, see
http://www.maclife.com/article/columns/terminal_101_creating_cron_jobs
4. Want to know if you have a cron on your machine? List your crons in Terminal with
crontab -l
Adding a bash script
If you decide to run a bash script via a cron, you’ll need to make the .sh file executable, that is, give the cron the ability to run the script. In Terminal, navigate to the folder where the .sh file lives and change its permissions with
chmod +x bashfile_name
where “bashfile_name” is the name of the .sh script (make sure to include .sh in the filename).
Below is an example of a .sh script that checks to see if an app is running and, if not, reopens the app. I’ve included the initial shebang line of the file in the code.
#!/usr/bin/env bash
echo "cron job";
# Name of the process to watch.
PROCESS=api_hashtags-polyphonic
# Count matching processes (grep -v grep excludes the grep command itself).
number=$(ps aux | grep "$PROCESS" | grep -v grep | wc -l)
if [ $number -gt 0 ]
then
    echo Running;
else
    echo "sound is Dead";
    # open music player application
    cd ~/Music/carbonfeed/work/sound/;
    sleep 2;
    open api_hashtags-polyphonic.app;
fi
Doing it all in one line of code
For recent projects, I opted to run code directly via the cron instead of relying on bash and AppleScripts. Below is code to start the Chrome web browser at a random time (to the second!) between 8:55 and 9:00 p.m.
55 20 * * * perl -le 'sleep rand 300' && open -a 'Google Chrome'
Remember, the timing of the cron comes first: minute, hour, day of month, month, day of week. The cron is fired at 8:55pm, but has a random sleep time (between 0 and 300 seconds, i.e., between 8:55 and 9:00) and THEN opens the web browser.
Adding in an Apple Script
You can use your cron to trigger an AppleScript (.scpt file), just another way to execute commands on your Mac. Here’s an example of telling Safari to hit the spacebar (the target could even be iTunes).
tell application "Safari"
activate
end tell
delay 2
tell application "System Events"
key code 49 -- space bar
end tell
Automator scripts (triggered by cron or system startup)
If cron and bash aren’t your thing, Apple has the Automator app, which allows you to create automated processes straight from a GUI and then save them out as an application (Figure 6). You can also easily trigger the app via a cron or at system startup by going to System Preferences > Users & Groups > Login Items. Login items can be set to run Automator scripts upon computer startup, and configuring the computer to power up automatically after a power failure will help ensure a work stays running.
Hope this was helpful. Please get in touch if you have questions or want to share your work with cron in art.
Footnotes
1. Wikipedia, “Cron”. URL: https://en.wikipedia.org/wiki/Cron accessed August 27, 2020.
2. Allan Kaprow. Essays on the Blurring of Art and Life. University of California Press, Los Angeles. 1993. URL