A soundscape is a sound or combination of sounds that forms or arises from an immersive environment; its study is the subject of acoustic ecology, or soundscape ecology. The idea of a soundscape refers both to the natural acoustic environment and to sounds created by humans. The natural side consists of animal vocalizations, the collective habitat expression of which is now referred to as the biophony, together with the sounds of weather and other natural elements, now referred to as the geophony. The human side, the anthropophony, includes controlled sound such as musical composition, sound design, and language, as well as the sounds of work and of mechanical origin resulting from the use of industrial technology. Crucially, the term soundscape also includes the listener's perception of the sounds heard as an environment: "how that environment is understood by those living within it" and therefore how it mediates their relations. The disruption of these acoustic environments results in noise pollution.
Why care? There are many reasons, aside from the joy of it, to record soundscapes, but there are essentially four reasons biologists do it:
Monitor changes over time, both within a day and across broader scales like seasons or years (e.g. to identify impacts of degraded or restored landscapes)
Monitor for diversity
Monitor for variability across different locations
Monitor for rare species
To learn more about soundscape monitoring, watch the TED Talk to the right, and then visit the Center for Global Soundscapes hosted by Purdue University. To better understand human impacts on soundscapes in the environment, read this TED.com post. Also check out what is going on here in Montana at the Acoustic Atlas project.
An Example Soundscape
In August of 2021, two of us set a mobile phone down in the northern range of Yellowstone Park and hit the record button. We then looked at the recordings of a full day in software that displays the audio as a spectrogram. A spectrogram is akin to written language: it is a way humans can transcribe audio content into a visual display. Spectrograms tell you the frequency (pitch) and amplitude (loudness) over time. The higher up on the graph, the higher the pitch; the brighter the orange, the higher the volume.

By looking at the one-hour spectrograms below, you can get a quick snapshot of what is going on over time. If you zoom out further to a day, you get an even bigger picture, and across a week, month, year, or decades, you can see large-scale changes over time. When you get good at it, you can identify different species just by looking at the "words" in the spectrogram. And when you get really good at it, you can sometimes interpret what might be going on by looking at the sentences and discourse in a series. One common thing you will notice when you compare daily spectrograms side by side over, for example, a month is that the dawn chorus shows a lot of activity at, of course, dawn. Insects then start to take over at mid-day, and other sounds might chip in at dusk or even throughout the day (e.g. cars).

Below are three spectrograms taken in the morning, at noon, and in the evening. The circles show what type of animal was making the sound displayed in the spectrogram. We have also provided a zoomed-in portion of each spectrogram to show you what the sound of a specific species looks like, and in the last case, what happens when birds perceive some type of threat. At the bottom is a full-day spectrogram (starting at 5:43am)...guess when the bison got busy! This daily view becomes really useful when you compare various days to one another, but the amount of data gets large quickly (this one day is almost 4 GB of data at a 16 kHz sample rate).
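For the technically curious, the kind of spectrogram described above can be generated in a few lines of Python. This is a rough sketch using scipy's `spectrogram` function on a synthetic 1 kHz tone standing in for a real recording (which you would load from a WAV file first, e.g. with `scipy.io.wavfile.read`):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                             # 16 kHz sample rate, as in our recordings
t = np.arange(0, 2.0, 1 / fs)          # two seconds of audio
audio = np.sin(2 * np.pi * 1000 * t)   # a synthetic, pure 1 kHz tone

# f: frequency bins (Hz), times: window centers (s), Sxx: power per bin.
# Plotting Sxx (e.g. with matplotlib's pcolormesh) gives the familiar picture:
# higher rows = higher pitch, brighter cells = louder sound.
f, times, Sxx = spectrogram(audio, fs=fs, nperseg=1024)

# The brightest bin in each column is the dominant pitch at that moment.
peak_hz = f[np.argmax(Sxx, axis=0)]
print(round(float(peak_hz.mean())))    # prints 1000 — the tone we put in
```

Reading a species' "words" off a spectrogram is exactly this idea done by eye: looking for where the bright energy sits in frequency and how it moves over time.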
A sampling of the species captured in this recording is listed at the bottom (including their time positions on the recording, which runs from 5:43am to 8:43pm); you can play and download the recording via the SoundCloud link. As you can see, there are multiple vocalizations that are difficult to identify...help us out and see if you can figure them out. Lastly, we have provided a 10-minute soundscape (click on the player above) for your sleeping pleasure...except for the bison who decides to close out the piece. Enjoy.
Great Horned Owl
0:13:50 (minutes in)
Lots of Insects
White-crowned Sparrow
Robin with People
Robin, Bison, Magpies
Multiple Bird Alarms
Another interesting thing you can do with soundscape recordings is look at behavior over large timeframes. You can figure out the diversity of animals in a recording of a day, week, or month, or you can figure out when birds, for example, start and stop vocalizing during a day. Below are seven spectrograms displaying daily recordings from 6am until 9pm from November 1-7, 2021 at one of our recorders, with each day stacked one upon another. Each hour is delineated by a vertical, dotted white line, the first line designating 7am. Days 4-6 are windy, as indicated by the blurry red blobs, but it is still relatively easy to see that the local chickadees start their vocalizations at about the same time each morning.
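The chickadee-onset observation above can also be automated: measure the acoustic energy in the birds' rough frequency band, minute by minute, and flag the first minute that rises well above the background. Here is a minimal numpy-only sketch on synthetic audio — the 3-4 kHz band and the 10x-background threshold are illustrative assumptions, not calibrated values:

```python
import numpy as np

fs = 16000  # sample rate (Hz)

def band_energy_per_minute(audio, fs, lo=3000, hi=4000):
    """Mean spectral power in the [lo, hi] Hz band for each one-minute chunk."""
    chunk = fs * 60
    energies = []
    for start in range(0, len(audio) - chunk + 1, chunk):
        spectrum = np.abs(np.fft.rfft(audio[start:start + chunk])) ** 2
        freqs = np.fft.rfftfreq(chunk, 1 / fs)
        band = (freqs >= lo) & (freqs <= hi)
        energies.append(spectrum[band].mean())
    return np.array(energies)

# Five minutes of quiet noise, then five minutes with a 3.5 kHz "song" mixed in.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.01, fs * 60 * 5)
t = np.arange(fs * 60 * 5) / fs
singing = rng.normal(0, 0.01, fs * 60 * 5) + 0.1 * np.sin(2 * np.pi * 3500 * t)
audio = np.concatenate([quiet, singing])

e = band_energy_per_minute(audio, fs)
onset_minute = int(np.argmax(e > 10 * e[0]))  # first minute well above background
print(onset_minute)  # prints 5 — the minute the "song" begins
```

Run over each of seven daily files, the same per-minute curve would give you seven onset times to compare, without scanning any spectrograms by eye.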
Finding a Needle in a Haystack: Audio AI
Imagine having an entire month or year of recordings from outside your favorite cabin in the woods (about 300 gigabytes of data for a month). Then imagine you wanted to know when a particular species made a sound, such as a chickadee song, a wolf howl, a cow's distress call, a raven's meat call, or a poacher's gunshot. With that data, you could look for patterns, such as how often, and at what times of day, month, or year, different species (including us) are talking. And let's make it even harder: what if you wanted to know which individual animal was doing it, in the same way that when you pick up a phone you can quickly identify who the caller is?
Obviously, this doesn't happen much, because it would take a long time to go through all of the recordings: roughly two months, if you decided to sleep. But now, with various artificial intelligence (or machine learning) algorithms, it is possible to shorten that time to days, or even hours. It is also easy for the human ear to miss a sound because of wind or car noise, for example, while computer software can be much more precise in finding the sounds your ear might miss. Below is an example of how we created AI classifiers to search through a month of audio for wolf vocalizations and gunshots in less than an hour.
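Our classifiers were built with commercial tools, but the core idea of an automated search can be sketched very simply: slide a spectrogram "template" of the target sound along the spectrogram of a long recording and score how well each position matches. This numpy-only toy — a synthetic "howl" buried in ten seconds of noise, with illustrative parameters throughout — shows the search, not the real software:

```python
import numpy as np

def stft_mag(audio, nperseg=512):
    """Magnitude spectrogram: rows = frequency bins, columns = time frames."""
    hop = nperseg // 2
    frames = [np.abs(np.fft.rfft(audio[i:i + nperseg] * np.hanning(nperseg)))
              for i in range(0, len(audio) - nperseg + 1, hop)]
    return np.array(frames).T

def match_scores(spec, template):
    """Normalized correlation of the template at every time offset in spec."""
    t_norm = (template - template.mean()) / template.std()
    w = template.shape[1]
    scores = []
    for i in range(spec.shape[1] - w + 1):
        window = spec[:, i:i + w]
        w_norm = (window - window.mean()) / (window.std() + 1e-12)
        scores.append(float((t_norm * w_norm).mean()))
    return np.array(scores)

fs = 16000
rng = np.random.default_rng(1)
# A made-up "howl": a tone sweeping 900 Hz down to 400 Hz over one second.
t = np.arange(fs) / fs
howl = np.sin(2 * np.pi * (900 * t - 250 * t ** 2))
# Ten seconds of noise with the howl buried at the 6-second mark.
audio = rng.normal(0, 0.3, fs * 10)
audio[fs * 6:fs * 7] += howl

scores = match_scores(stft_mag(audio), stft_mag(howl))
hop_seconds = 256 / fs
print(round(float(np.argmax(scores)) * hop_seconds))  # prints 6 — found it
```

Real detectors (cluster analysis, neural-network classifiers) are far more robust to wind, distance, and variation between individual animals, but they answer the same question: at which offsets in hundreds of hours of audio does something howl-shaped appear?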
What is even more powerful is a new trail camera developed by Grizzly Systems (see here) that listens for sounds in real time and alerts you when a user-defined event happens (like a gunshot). In the same way that Amazon Alexa can listen for "your" voice, this trail camera (or a group of them) can be programmed to listen for a rare species that researchers are studying for conservation purposes. This is especially useful for birds or animals that don't pass in front of a trail camera...in other words, this trail camera not only looks for specific objects, it listens for them as well, greatly expanding the range of a typical trail camera.
Using Artificial Intelligence Software to "Find" Wolf Vocalizations in Multi-Day Recordings
This beautiful chorus of wolves stands out distinctly in the spectrogram below, and it is pretty easy to see. So, when playing through a full recording, it would be easy for anyone to see and hear that wolves were recorded at that moment. Note how wolf vocalizations typically sit in the 300 to 900 hertz range, but vary regularly. Listen to and look at the vocalization, and then try to guess how many wolves are in this chorus. Check out Earth Species and their GitHub cocktail party project to make it easier to identify individual animals in a recording. A big "thank you" to Dave Roberts and host Ali Donargo from Wildlife Acoustics for hosting a two-part webinar on cluster and classifier analysis of audio recordings.
To put it in visual perspective, within the entire hour of recording, this one wolf chorus appears in the circle below.
And if you looked for the same wolf chorus within the entire day's recording, it would look like what is in the circle below...a needle in a haystack. Imagine trying to find these "signals" in a month's worth of recordings.
This next spectrogram is a lone wolf call in the early morning. As you can see, if you roughly know what you're looking for (remember, wolves call in the 300-1000 Hz range), then it is pretty easy to identify. It's in the bottom portion, in the middle of the graphic below, represented by three lines: one that slopes down, a quick spike up, and then another descending line. Play the sound on the right to hear it.
Below is a zoomed-in view. Note the nearly 20-second duration of the vocalization.
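Because the howl sits in a known band, a band-pass filter makes it easier to pull out of a noisy recording before you listen or look. Here is a sketch using scipy's Butterworth filter on synthetic audio; the filter order and the test tones are illustrative, not tuned values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 16000
# 4th-order Butterworth band-pass covering the wolf band (300-1000 Hz).
b, a = butter(4, [300, 1000], btype="bandpass", fs=fs)

t = np.arange(fs) / fs
howl = np.sin(2 * np.pi * 500 * t)         # in-band stand-in "howl" at 500 Hz
hiss = 0.5 * np.sin(2 * np.pi * 5000 * t)  # out-of-band interference
filtered = filtfilt(b, a, howl + hiss)     # zero-phase filtering

# The 500 Hz component survives almost untouched; the 5 kHz hiss is
# strongly attenuated, so the RMS is close to that of the unit sine (~0.71).
print(round(float(np.sqrt(np.mean(filtered ** 2))), 2))
```

The same trick helps by eye, too: cropping the spectrogram view to 300-1000 Hz throws away most of the wind, insect, and bird energy that competes with a faint howl.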
This next spectrogram is from a lone wolf a day later, a little later in the morning, when bison, ravens, and magpies were also vocalizing near the recorder. You can see the beginning of the "howl" marked below. Note the harmonic (two distinct lines), one at 400 Hz and the other at 800 Hz. Above the wolf howl are five sequential lines, representing a raven, and below are bright blurry bands representing bison snorting. The computer software was able to find this vocalization despite the other sounds.
Below is a zoomed-in view. Note the higher pitch than the previous day's call, and the roughly 3-second first note followed by a 2-second second note.
This next spectrogram is from a wolf vocalization several minutes after the previous example. It is almost impossible to hear or see without a lot of amplification and zooming in on the spectrogram, and it would have been difficult to find manually without the AI software. Play the sound to the right and notice how faint the call is. Is the 800 Hz harmonic too faint to show up on the spectrogram? Compare it to the previous call...is it the same wolf, and the same type of vocalization? And what are the various whine sounds in the 1000 Hz range that accompany the howl but seem closer to the recorder?
Below is a zoomed-in view.
In summary, artificial intelligence software doesn't replace the need for human observation and intelligence. Its primary value is to take over repetitive and time-consuming tasks so that we can more quickly get to the data of interest. It still takes a lot of wisdom and actual "feet in the field" observation of animal behavior to turn this data into insights.
There are basically two things you need to get started soundscaping: a recording device and software to view the recordings as a spectrogram. There are very inexpensive (and even free) approaches and more expensive (but not out of the average budget) approaches. Here are a few options.
Recording Device: Use your cell phone and download "Record the Earth" app for Android or iPhone
Spectrogram Software: Cornell's Raven Lite
Optional: Cornell's Merlin App for identifying bird species
Wildlife Acoustics' recorders, ranging from $250 on up (I use the one pictured to the right)
Software: Wildlife Acoustics' Kaleidoscope Pro (or the Raven Lite or Pro software listed above)
Optional: If you record video and audio, Adobe's Audition software is superb for viewing a spectrogram and video at the same time. If you are new to spectrograms, here is a good video to get you started (Google can point you to many more, as well as Cornell's Raven website).
A Short Video on How to Understand a Spectrogram
Interested in helping the Upper Yellowstone?
We are looking for a group of volunteers (and a leader) to coordinate efforts across the valley to record various soundscapes at different locations for long-term study and comparison over time. We will be creating a website to display the spectrograms with the ability to replay the soundscapes. If you are interested, please contact firstname.lastname@example.org.