Tuesday, February 01, 2011

Sonic Branding with Sub Atomic Audio

Quantum Audio describes a variety of microstructural units of audio transmission which may collectively be described as small-scale conveyors of symbolic data.

Charles Seeger coined a related term, ‘museme’, to mean a “basic unit of musical expression which in the framework of one given musical system is not further divisible without destruction of meaning.”

I think there is some overlap between his ideas and my own independent musical musings, but for the time being, let’s explore the notion and implications of this thing I have dubbed 'Quantum Audio'.

Readily observable examples of Quantum Audio elements include Meme at the upper range of scale, which is itself composed of at least four key elements: Semiotic Signifiers (a specific cultural expression of Archetype), Audio Archetypes (a plastic, primordial and universal concept), Nuance (representing both the physical expression of data, and often a unique data unit itself), and Artifact (often unintentional and resulting from mechanical manipulations, but sometimes capable of a performer's control).

And it may be said that as Archetype (Organic) inspires Signifier (Designed), Nuance (Intentional Communication) can be said to produce Artifact (Collateral Communication).

Obviously we are expanding definitions a bit in order to express newly identified phenomena. Therefore, it is natural to ask whether Quantum Audio particles are 'real', or whether this investigation is merely an intellectual exercise. As it turns out, this is akin to asking if light is composed of particles or waves, because the answer is both, depending on one's method of observation.

Quantum Audio describes what have heretofore been considered intangible concepts. However, we may nevertheless identify and re-define certain notions as observable 'elements', and we do this by acknowledging the very real impact they leave on an audience. Indeed, the measurable result of these collective phenomena can be found in the production of identifiable and undeniable biomusicological reactions in listeners who share a common cultural context. This is to say Quantum Audio elements produce real psychophysiological changes in brain activity, the way we might expect any external or environmental influence to do.

In this paradigm, the motivic Meme will stand for the equivalent of a musical molecule. Next, the building blocks of music long identified by traditional music theorists will serve as audible atomic structures. But our study primarily concerns itself with even smaller elements, including Archetype and Signifier, as mentioned before, which despite their diminutive scale nevertheless gently impress upon listeners (who share a common cultural context) observable emotional markers, and thus can be said to capably convey symbolic data in the form of audio, which we now designate as Quantum Audio Components or particles.

For instance, it is Quantum Audio that provokes an emotional reaction from even a single beat of music, or any short snippet of sound. Devoid of melodic information or even a rhythmic pattern, this audio 'unit' might not even be said to be music (except on a quantum scale), but our senses tell us it nevertheless remains an entirely capable platform for communication.

What is this then? A burst? An edit? An expression? I think 'Gesture' is a good word, but whatever it is, it is not the word in a sentence, but it is perhaps the syllable in the word, whether verbalized or printed, so infused with emotive or contextual potency that one brief utterance alone may require no further support or enhancement.



However, it should be noted that within this system there is a world of Microsound which exists on an even smaller time frame than the one we are concerned with when we use the term Quantum Audio. The 'grains' of Granular Synthesis, for instance, however interesting and useful for composers, generally lie beyond the domain of the present discussion. A 50ms audio sample might certainly itself be composed of a carrier wave and other physical phenomena, but such assets are generally too small to carry meaning, which is what concerns us here. Suffice to say, the phrase 'sound quantum' as it is conventionally used typically describes physical units of sound, whereas Quantum Audio as used here is limited to describing indivisible semiotic constructs.
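For readers curious about the Microsound scale mentioned above, granular synthesis can be sketched in a few lines: a source sound is chopped into short windowed 'grains' (here the ~50ms size noted above) and overlap-added at scattered positions. This is only a minimal illustrative sketch, not any particular synthesizer's algorithm; the parameter values are assumptions chosen for clarity.

```python
import numpy as np

SR = 44100          # sample rate (assumed CD-quality audio)
GRAIN_MS = 50       # the ~50 ms grain size discussed above

def granulate(source, grain_ms=GRAIN_MS, sr=SR, density=0.5, seed=0):
    """Chop `source` into short Hann-windowed grains and overlap-add
    them at randomized positions -- a minimal granular-synthesis sketch."""
    rng = np.random.default_rng(seed)
    n = int(sr * grain_ms / 1000)          # samples per grain
    window = np.hanning(n)                 # smooth envelope avoids clicks
    out = np.zeros(len(source) + n)
    hop = int(n * density)                 # spacing between grain onsets
    for start in range(0, len(source) - n, hop):
        grain = source[start:start + n] * window
        # scatter each grain near its original position
        pos = max(0, start + int(rng.integers(-n // 2, n // 2)))
        out[pos:pos + n] += grain
    return out[:len(source)]

# a one-second 440 Hz sine as neutral test material
t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 440 * t)
cloud = granulate(tone)
```

The point of the sketch is simply that at this scale one is manipulating physical units of sound, not semiotic ones.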

The application of Quantum Audio data is also what distinguishes the sound of a Rock Guitar from one that is merely amplified, independent of what kind of music is being performed. EFX units may be said to bathe source sounds with Quantum particles. Some may call this nuance; others may label it an algorithm. Either way, what is clear is that emotive quality is rarely the result of one component, but is more likely produced by a matrix of controlled minutiae.
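The distinction between a rock guitar tone and a merely amplified one can be made concrete with a toy comparison: plain amplification only scales a signal, while a nonlinear effect (here soft clipping via `tanh`, one common overdrive model, chosen as an illustrative assumption rather than any specific EFX unit's design) adds new harmonic content that was not present in the source.

```python
import numpy as np

SR = 44100

def amplify(x, gain=4.0):
    """Plain amplification: louder, but timbrally unchanged."""
    return gain * x

def overdrive(x, gain=4.0):
    """Soft-clipping overdrive: the nonlinearity adds harmonics --
    new micro-detail -- that plain amplification does not."""
    return np.tanh(gain * x)

t = np.arange(SR) / SR
note = 0.5 * np.sin(2 * np.pi * 196 * t)   # an open-G guitar string (196 Hz)

clean = amplify(note)    # a pure, scaled sine
dirty = overdrive(note)  # same pitch, now rich in odd harmonics
```

A spectrum of `dirty` shows energy at 588 Hz (the third harmonic) and above, where `clean` has essentially none: the 'bathing' of the source in added minutiae.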

Likewise, Quantum Audio data allows the carrier signal to 'push' symbolic data, either independently or in conjunction with melody. And it is these triggers that produce a resulting emotive impact on the listener, whether the musical expression is a full length work or a single beat. A capable musician or music designer can inflect/suggest a variety of 'meanings' on the same melody or audio design construction, transforming the material at hand into any number of rich and differently textured messages.

Interestingly, Quantum Audio is also that which allows listeners to identify one performer from another, creating a fundamental feature of sonic ID or audio branding design.

Though it is not occupied with the natural world, the application of Memetic theory is a good starting point for anyone interested in Quantum Audio. The Memetic premise challenges us to deconstruct any communication until we can at last identify within it what might be called a ‘fundamental pattern’, that is, the smallest unit of cultural transmission capable of being replicated. (For more about music and memes, read: Music Memetics, 1 May 2010)

As with genes, memes follow biological laws –in fact memetic theory parallels genetic theory. But Quantum Audio components, like quantum particles, behave differently, as we have established, depending on how they are observed. Which is to say, while we might find universal agreement in the underlying Archetype, the Signifiers they inspire will likely mean different things to different groups (or nothing at all), depending on local definitions, cultural perception and demographic norms.

Unlike memes, which we can more rightly say are composed of Quantum Audio Components, Quantum Audio itself does not form 'a fundamental pattern', nor do such particles lend themselves to replication. No one hums a Quantum Audio Component, for instance. Rather, it is the assemblage of such elements into an audible matrix that creates the potentially (and hopefully) exponentially replicating musical gene/meme. They are the stuff from which patterning is made, at a sublime and often seemingly sub-sensory level.


In physics, a quantum is the minimum unit of any physical entity involved in an interaction, and because this study concerns itself with the examination of musical microstructures, we borrow 'quantum' as a loan word.

Speaking of musical microstructures, an early inspiration for this inquiry arrived as a result of studying bowed stringed instruments as a child. Unlike a piano, or software instrument, a violinist must first spend many days, weeks, even months learning how to produce a suitable tone before they can hope to produce anything that resembles 'music' as the term is commonly understood. The result of such rigorous application is the development of an acute awareness of, and the ability to control, both musical and non-musical resonances, which, produced in tandem as successive and overlapping sounds and generated at what might be described as a frame-rate scale, when mastered and summed finally serve to create a single satisfying tone.

Years ago I even wrote an essay describing my fascination with the process of coaxing a usable tone from the various members of the string instrument family, and as thorough as I might have thought myself at the time, it's probable that I was still incomplete in my description of the process. (Contact: The Character of Sound)

One may argue a pianist must also learn ‘touch’, or that the operator of a digital instrument must also acquire the technical skills necessary for modulation. But neither is responsible for their respective chosen instrument's tone. The sound of a violin is not the sound of the instrument itself. Don't believe me? Pick up a violin and bow and tell me if you can produce anything that resembles what we commonly think of as the sound of a violin.

Unless you already have a degree of competency with stringed instruments, or you think violins generally sound like screeching birds, the answer is probably not. Rather, the pleasing sound, timbre or waveform capable of being produced by a violin –as we commonly think of a violin sound– is that of a violinist engaging with a deceptively complex mechanism that he or she has mastered, and with which he or she has formed a symbiotic relationship. Contrast this with the sound of a piano –or an organ, or a harpsichord– which requires interaction with an interface, but little that might be regarded as actual symbiosis. Thus the sound of a piano –its essential timbre– is always the same regardless of who is pressing down any given key.

It is perhaps like the difference between flying a kite and using a hammer.

To be clear, we are limiting this discussion to the production of tone in the singular, not the full or even partial performance of a musical work.

Granted, the sounds generated by a synthesizer may be modulated in real time by the electronic musician. And no doubt we can recognize an interface as a tool, the computer itself a collection of tools. That some are masters of this kit should therefore be a given. But it would be an oversimplification to state that electronically generated sound is produced as a result of symbiosis between man and machine.

This is not to disparage any single kind of musician. I've been a fan of the synthesizer and an electronic and computer musician for thirty years. And perhaps it is for this reason that I can easily recognize that one of the main reasons such instruments enjoy wide and current popularity is because aspiring musicians can get right down to the business of making music without first spending years training their ears and their bodies to work in tandem in order to develop the biomechanical skill-set necessary to create a reasonably steady-state tone, much less actually make music. In short, perfection, in this instance, does not necessarily require practice.

It's important to recognize this fact, because it is only by the deft integration and application of sound that an audio designer can endow an otherwise sterile set of patches or samples with a gesture that causes his or her composition or construct to suddenly resonate with thrilling humanity, and therefore actually 'touch' people, which one assumes is always the desired result.

As a result, it's worth mentioning that a violinist's relationship with tone is never steady state. String players do not trigger a 'patch' or a sample, and so organic tone is never one static thing, but rather the summed event experience composed of many elements (some imperfectly made) and made manifest over a given time-line by a constantly changing and potentially infinite number of variables, controlled only by the fingertips, and the brain, of course –indeed the entire anatomical structure. This is to say that a violinist's tone is produced by his or her entire body literally working in concert with instrument and bow.

Another way of looking at it is to say that Tone, deconstructed, is not simply the pitched 'musical' or sonic focal point which listeners train their ears on, but includes an infinite number of peripheral, evolving musical sounds plus non-musical 'Artifacts', which, together produced and controlled by a performer in real time over a given timespan, serve to enhance each subsequent pitch center with suggested emotive feeling or symbolic data.
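The deconstruction above can be caricatured numerically: a pitched focal point (a few harmonic partials whose amplitudes drift slowly, as no bowed tone is static) summed with a noisy 'Artifact' layer standing in for bow and string friction. Every parameter value here is an illustrative assumption, not a measurement of any real instrument.

```python
import numpy as np

SR = 44100
DUR = 1.0

def bowed_tone(f0=220.0, sr=SR, dur=DUR, seed=0):
    """Toy model of tone-as-sum: drifting harmonic partials (the
    pitched focal point) plus a noise layer (the 'Artifact').
    All values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(sr * dur)) / sr
    tone = np.zeros_like(t)
    for k in range(1, 6):                         # first five partials
        # each partial's level wobbles at its own slow rate
        drift = 1 + 0.1 * np.sin(2 * np.pi * rng.uniform(0.5, 3.0) * t)
        tone += (drift / k) * np.sin(2 * np.pi * k * f0 * t)
    artifact = 0.02 * rng.standard_normal(len(t))  # friction-noise stand-in
    return tone + artifact

y = bowed_tone()
```

Remove the `artifact` term and the result is a cleaner but noticeably more sterile signal, which is precisely the trade-off the surrounding paragraphs describe.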

To the trained and untrained ear both, Artifacts, in and of themselves, are rarely considered musical, but summed with tonal information, they create Expression, and as such they often prove to be the very reason we feel the way we do about a given performance.

Obsess upon a given Artifact, however, and the sounds of fingers scraping steel, the clacking of keys, the singer's quick inhalation (to name just a few examples) become annoying. But attempt to perfect a recording by eliminating such sounds altogether and we find that what we are left with is sterile by an inhuman degree.

Another fascinating –and easily observable– example of Quantum Audio at work is in the oft heard radio contests that ask listeners to identify a work based on a short clip. These bursts of sound may be as short as a single beat, or even smaller. Yet, regardless of whether one is familiar with the source track, or even if one can't identify the source of the edit, these short snippets nevertheless reveal themselves as fully capable of triggering any number of cognitive reactions, from emotion to memory to inspiration.

At any rate, it is clear that fundamental sonic elements –which we can identify as (micro) Gestures capable of conveying (meaningful) Expression, or Message, and which we collectively call Quantum Audio– are worth continued investigation, especially if one is engaged in the delivery of symbolic data for the benefit of a commercial client, such as in the production of sonic branding, or for some other utilitarian purpose.

* * *

Photo Collage by Terry O'Gara