On Working Between the Ears (producing binaural audio recordings)


From time to time people ask me for advice on making binaural (or “3D” or 360-degree) audio recordings.

First — have a read through Wikipedia’s page on binaural recording.

Second — it needn’t be expensive (although it certainly can be). So listed below are some of the hardware and software tools that I have first-hand experience with over the past few years. Please note: none of these companies are endorsing me or paying me for mentioning them or their products. 

Third — I will update this page as I come across other tools and strategies.

But in the meantime, if you have questions just send me an email.

Have fun!

Field Recording


(previously) SP-TFB-2 – Sound Professionals – Low Noise In-Ear Binaural Microphones

(currently) Soundman OKM II Classic Binaural Microphones

(DIY alternative) If you have two identical microphones (ideally of the large-diaphragm condenser type), you can create a binaural recording rig by positioning the microphones at the same vertical height, facing outward in opposite directions. The distance between the two microphones’ capsules should match the width of an average human head, and the space between them should be filled with material of roughly equivalent density (try a pillow, or the head of a mannequin or wig display).

If the DIY approach appeals to you, Rob Cruickshank’s infinitely wise Musicworks tutorial How To Make Binaural Microphones is a must-read.
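As a rough sanity check on that capsule spacing, here is a back-of-the-envelope calculation of the largest interaural time difference such a rig would capture. The head width and speed of sound below are assumed round figures, not measurements:

```python
SPEED_OF_SOUND = 343.0   # m/s, in air at roughly 20 °C
HEAD_WIDTH = 0.17        # m, an assumed average ear-to-ear distance

def max_itd_seconds(spacing_m: float, c: float = SPEED_OF_SOUND) -> float:
    """Maximum interaural time difference: the extra travel time to the
    far capsule when a sound arrives from directly to one side."""
    return spacing_m / c

itd_ms = max_itd_seconds(HEAD_WIDTH) * 1000
print(f"max ITD ≈ {itd_ms:.2f} ms")  # ≈ 0.50 ms
```

Half a millisecond sounds tiny, but it is precisely the cue your brain uses for lateral localization, which is why getting the spacing right matters.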

Sound Recorders

Zoom H4N

Software-based Encoding

For arranging pre-existing, non-binaural audio recordings into a binaural/3D audio format, I’ve had good results with the VST implementation of the Facebook 360 Spatial Workstation.

Lately, I’ve also been doing binaural audio work inside the MaxMSP programming environment, using a very useful library of Higher Order Ambisonic encoders.
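For readers who want to experiment before reaching for a full ambisonics toolchain, here is a deliberately crude sketch (using numpy, which I’m assuming here) of binaural panning built from only interaural time and level differences. This is not how the tools above work internally — real binaural encoders also apply HRTF filtering for elevation and front/back cues — but it demonstrates the two simplest cues:

```python
import numpy as np

SR = 44100          # sample rate (Hz)
C = 343.0           # speed of sound in air (m/s)
HEAD_WIDTH = 0.17   # assumed ear-to-ear spacing (m)

def pan_binaural(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Crude binaural pan using only interaural time and level
    differences. Positive azimuth places the source to the right.
    No HRTF filtering, so elevation and front/back cues are absent."""
    az = np.radians(azimuth_deg)
    itd = HEAD_WIDTH * np.sin(az) / C              # seconds; positive = right ear leads
    delay = int(round(abs(itd) * SR))              # whole-sample approximation
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    far = far * 10 ** (-6 * abs(np.sin(az)) / 20)  # up to ~6 dB of head shadow (a guess)
    left, right = (far, near) if itd > 0 else (near, far)
    return np.stack([left, right], axis=1)         # shape: (n_samples, 2)

tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)
stereo = pan_binaural(tone, 60.0)  # place the tone 60 degrees to the right
```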

Recording Collections

My own (growing) collection.

You can also search the Freesound collaborative database of Creative Commons-licensed sounds for binaural recordings. If you end up using someone’s recordings in your own work, just make sure you adhere to the Creative Commons license stipulated by the recording’s creator!


Sonification sketchbook: Audio Portrait — “REHEARSING SILENCE” (2018)

Speculative prototype for binaurally immersive medical portraiture

(9 minute audio loop)

For best results, listen on noise-cancelling headphones.

Rehearsing Silence is part audio essay, part medical portraiture, part data sonification, part prosthetic design sketch. It proposes a binaurally-encoded, audio-based approach to portraiture that frames and compresses the gradual and inevitable diminishment of auditory perception as a consequence of aging and neurologically collapsing bodies. This design sketch stems, in part, from ongoing research focused on developing instruments and tools to support multi-sensory (non-visual) data analytics, and a continuing interest in how the effects of aging and sensory impairment manifest themselves as perceptual artifacts within an artistic practice (Claude Monet painted through cataracts, Beethoven composed through tinnitus, etc.).

It also proposes an alternative approach to data sonification in which data is represented as absences, mutations, disfigurements or erasures of a previously whole or intact sonorous entity.
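As a hypothetical illustration of that idea — the tone, durations, and mapping below are my own assumptions, not the method used in the piece — each data value can erase a proportional slice of an otherwise continuous tone, so the data is heard as absence rather than as discrete events:

```python
import numpy as np

SR = 22050  # sample rate (Hz)

def sonify_by_erasure(data, seg_dur=0.5, freq=220.0):
    """Data as absence: each value in [0, 1] silences that fraction
    of its segment of an otherwise continuous tone."""
    out = []
    for value in data:
        n = int(seg_dur * SR)
        t = np.arange(n) / SR
        seg = np.sin(2 * np.pi * freq * t)
        erased = int(n * float(value))  # number of samples to erase
        if erased:
            seg[-erased:] = 0.0         # erase the tail of the segment
        out.append(seg)
    return np.concatenate(out)

# Three data points: intact tone, quarter erased, fully erased.
signal = sonify_by_erasure([0.0, 0.25, 1.0])
```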

This audio portrait contains simulations of high-frequency tinnitus tones and frequency-based hearing loss which are different in each ear. If you currently suffer from tinnitus, listening to this portrait at high volume levels may exacerbate your symptoms.
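For the curious, here is a minimal sketch of how such per-ear simulations can be built (using numpy). This is not the processing used in the piece itself, and every parameter — the cutoff frequencies, tinnitus tones, and levels — is an illustrative guess:

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def simulate_ear(mono, loss_cutoff_hz, tinnitus_hz, tinnitus_level=0.05):
    """One ear's simulation: a brick-wall low-pass (frequency-based
    hearing loss) plus a constant high-frequency tinnitus tone."""
    spectrum = np.fft.rfft(mono)
    freqs = np.fft.rfftfreq(len(mono), d=1 / SR)
    spectrum[freqs > loss_cutoff_hz] = 0           # remove the lost band
    dulled = np.fft.irfft(spectrum, n=len(mono))
    t = np.arange(len(mono)) / SR
    return dulled + tinnitus_level * np.sin(2 * np.pi * tinnitus_hz * t)

# One second of noise as a stand-in for source material.
source = np.random.default_rng(0).normal(size=SR)

# Each ear gets a different loss profile and tinnitus tone.
left = simulate_ear(source, loss_cutoff_hz=3000, tinnitus_hz=8000)
right = simulate_ear(source, loss_cutoff_hz=1500, tinnitus_hz=9500)
binaural = np.stack([left, right], axis=1)
```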

Further Reading:

Begault, Durand R. “The Virtual Reality of 3-D Sound.” In Cyberarts, edited by Linda Jacobson, 79-87. San Francisco: Miller Freeman, 1992.

Eggermont, Jos J. Tinnitus. Springer, 2012.

Gruener, Anna. “The Effect of Cataracts and Cataract Surgery on Claude Monet.” British Journal of General Practice 65, no. 634 (2015): 254–255.

Lupton, Deborah. The Quantified Self: A Sociology of Self-Tracking. Polity Press, 2016.

Marmor, M. F. “Ophthalmology and Art: Simulation of Monet’s Cataracts and Degas’ Retinal Disease.” Archives of Ophthalmology 124, no. 12 (2006): 1764–1769. doi:10.1001/archopht.124.12.1764

Mermikides, Alex et al. Performance and the Medical Body. Bloomsbury Publishing, 2016.

van Beethoven, Ludwig, and Paul Lewis. “Beethoven No. 3: Sonatas Op. 2, 7, 26, 27 ‘Moonlight’, 54, 57 ‘Appassionata’” (2005), Harmonia Mundi.

Additional field recordings from https://freesound.org/people/micndom/sounds/27340/

Related post: Sonification sketchbook: A sonification model based on variations or mutations of single sound objects?

Sonification sketchbook: A sonification model based on variations or mutations of single sound objects?

What if data were expressed not as individual and discrete sound events, but as a sequence of variations, mutations, erasures, or distortions applied to iterations of a single ‘sound object’ (in the Schaefferian sense of musique concrète)? The medical practice of auscultation could serve as an existing model for this approach.

This idea was inspired, in part, by the following data visualization:

‘Giorgia — week fifty-two’

This is an example from ‘Dear Data’ (http://www.dear-data.com/theproject/) — “…a year-long, analog data drawing project by Giorgia Lupi and Stefanie Posavec, two information designers living on different sides of the Atlantic.” For each week of an entire year, they chose a different aspect of their daily lives to track and render as a ‘data drawing’ on blank postcards, then mailed the postcards to each other.

Below is a brief explanation of Giorgia’s data visualization.

In one or two sentences, what story does it tell?

It chronicles one week of “good byes,” “bye byes,” and “goodnights” spoken by the author in chronological order.

Identify the data. What type of data is it?

The data is a combination of quantitative and qualitative data.

Identify the visual variables used.

According to the legend on the back of the postcard:

  • Each element is a goodbye spoken by the author that week, arranged in chronological order.
  • Each element comprises the following:
    • A primary shape
    • A secondary downward pointing triangle beneath the primary shape
    • The occasional presence of a small dot at the top right-hand corner of the element.
  • Shape:
    • A set of 2-dimensional shapes are used to represent an array of variations pertaining to how each goodbye was articulated. Here, particular attention is paid to the communicational medium/context/situation in which each goodbye was spoken — i.e., ‘in public’, ‘over Skype/(Google)hangout’, ‘over the phone’, ‘in Real Life’
    • The addition of a downward-pointing triangle at the bottom of each shape indicates that the goodbye contained additional words, such as “good luck!”, “have fun!”, “thanks!”, etc.
    • The presence of a small circle at the top right-hand corner of each goodbye shape indicates that physical contact was a part of the goodbye gesture.
    • The asterisk at the end of the 3rd row indicates a ‘missed goodbye’ — i.e. she fell asleep before her boyfriend that night.
  • Colour:
    • The colour of each ‘goodbye’ shape distinguishes the person to whom the goodbye was spoken — i.e. mother, boyfriend, friend, stranger, etc.
    • Each variation of ‘additional words spoken’ (the downward-pointing triangle) is identified by a different colour fill. NB: in some instances, distinctions between variations are hampered by the use of similar hues. For example, ‘have a nice day’ and ‘love you!’ have nearly identical colour assignments.
    • The colour hue of each ‘physical contact’ dot indicates whether the contact was a kiss, a hug or a handshake.
  • Position, size, orientation and texture are not utilized.

How many dimensions are being visually mapped?

  1. The number of goodbyes spoken in one week
  2. The location/context in which each goodbye was spoken
  3. The relationship of the person to whom each goodbye was spoken
  4. Variations in the message content of each goodbye.
  5. Textual variations that were appended to each goodbye
  6. The occurrence of physical contact as part of each goodbye
  7. The type of physical contact engaged in as part of each goodbye.

Identify the type of visualization, or methods used.

I think this qualifies as a compound visualization.

Referring to the Venn Diagram for information design, comment on this visualization’s

  • Interestingness
    • Representation of communicational exchanges between people through hand-drawn (rather than computer-generated) shapes. As a result, the shapes possess a man-made, artifact-like quality.
  • Integrity
    • The highly personal nature of the visualization is intriguing. On the one hand, this is a personal communication to her collaborator, so accuracy and integrity in the reporting of data is assumed. Yet because the data is of a personal nature, one can’t help but wonder if there was some degree of self-censorship involved.
  • Form:
  • Function:
    • Intriguing to explore how the colours and shapes evolve through time.
    • Provides clear indication of prevailing and evolving trends in the author’s social exchanges through the week.
    • The entirely graphical nature of this visualization tends to encourage a period of prolonged perceptual engagement.
  • Where does it succeed and where does it fall short?
    • One absent (and potentially insightful) element is some time-stamped indication as to what day and/or time of day within the week each ‘goodbye’ occurred.
    • I wonder what effect using position and proximity/grouping (akin to a social-network array) to represent the ‘who’ of the goodbyes would have on the visualization’s effectiveness.
    • As mentioned above, a few of the colour/hue choices are too similar and, at first glance, could lead to misinterpretation.
    • I could also imagine a more rigorous colour scheme being used — i.e., the colour spectrum (warm <> cool) could be mapped to the degree of social familiarity or intimacy (strangers assigned cool colours, family/boyfriend assigned warmer hues).
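That warm-to-cool mapping could be prototyped in a few lines; the specific hue endpoints below are my own illustrative choices:

```python
import colorsys

def intimacy_colour(intimacy: float) -> tuple:
    """Map social intimacy in [0, 1] (stranger -> partner) onto a
    cool-to-warm hue: 0 gives blue (cool), 1 gives red (warm)."""
    hue = 0.62 * (1.0 - intimacy)  # 0.62 is roughly blue; 0.0 is red
    return colorsys.hsv_to_rgb(hue, 0.85, 0.9)

stranger = intimacy_colour(0.0)   # a cool blue
boyfriend = intimacy_colour(1.0)  # a warm red
```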
Edited by Richard Windeyer on Jan 31, 2017 at 8:36am
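Circling back to the sonification question that opened this post, here is a minimal sketch of the variation/mutation model. The base grain and the two mutation rules (a detune and a growing erasure) are illustrative assumptions of my own, chosen simply to show one data value driving the degree of mutation:

```python
import numpy as np

SR = 22050  # sample rate (Hz)

def base_object(dur=0.3, freq=330.0):
    """A single 'sound object': a short, decaying sine grain."""
    t = np.arange(int(dur * SR)) / SR
    return np.sin(2 * np.pi * freq * t) * np.exp(-6 * t)

def mutate(grain, value):
    """Mutation proportional to a data value in [0, 1]: ring-modulate
    (detune) the grain and carve a widening gap from its middle."""
    t = np.arange(len(grain)) / SR
    detuned = grain * np.cos(2 * np.pi * 40 * value * t)
    gap = int(len(grain) * 0.5 * value)
    mid = len(grain) // 2
    out = detuned.copy()
    out[mid - gap // 2: mid + gap // 2] = 0.0  # erasure grows with the value
    return out

# Each data point is heard as a mutated iteration of the same object.
data = [0.0, 0.4, 0.9]
sequence = np.concatenate([mutate(base_object(), v) for v in data])
```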

Waiting in the Wee (Red Bar)

“Party Game” @ the Edinburgh Fringe, Round #1
bluemouth inc. / Necessary Angel

DIGF5002 2017-03-31

Sonification demo — Bending Energy algorithm output mapped to a speech synthesizer.

DIGF5002 2017-03-23

Early user experience design sketch (audio)
What this system could sound like as an imagined user walks down a busy city street while listening to music.

The detection system is housed in a cell-phone application mounted onto a pair of virtual-reality goggles.
As objects approach the user, the app applies real-time signal processing to the music, sonically describing the shape of each object.
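One plausible way to sketch the distance-to-processing mapping behind that idea — the ranges and the low-pass strategy are assumptions for illustration, not the prototype’s actual design:

```python
def cutoff_for_distance(distance_m: float,
                        min_hz: float = 500.0,
                        max_hz: float = 16000.0,
                        max_range_m: float = 10.0) -> float:
    """Map an approaching object's distance to a low-pass cutoff for
    the user's music: far away leaves the music untouched (high
    cutoff); up close, the music is heavily muffled (low cutoff)."""
    d = max(0.0, min(distance_m, max_range_m)) / max_range_m  # normalise to 0..1
    return min_hz + (max_hz - min_hz) * d

cutoff_for_distance(10.0)  # 16000.0 Hz: object far, music unfiltered
cutoff_for_distance(0.0)   # 500.0 Hz: object close, music muffled
```

The cutoff would then drive whatever real-time filter the app uses, with a separate value per ear depending on the object’s bearing.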

DIGF5002 2017-03-14

R2-D2 as a data sonification design prototype?

“R2-D2 was the most difficult non-human character to develop a voice for, Burtt says. He was a machine that was going to talk, act, and work opposite well-known actors. But he didn’t have a face or speak English. Initial voice tests for Artoo seemed to lack “a human quality.” After some trial and error, Burtt began imitating the sounds an infant might make, and he found that it worked: R2-D2 could convey emotion without speaking words. Thus, the idea was to combine mechanical and human sounds, and Burtt combined his voice with electronic sounds via a keyboard. It helped him understand how Artoo could inflect and, ultimately, deliver a performance.”

Burtt arrived at an integration of synthesized and human speech sounds, in part through an initial process of ‘vocal sketching’ — a useful technique for rapidly prototyping sound design concepts, such as for new devices, auditory displays or systems.