Wednesday, July 01, 2009


Music, Linguistics and Network Theory form a magical triangle.

The combined power of the three concepts is as yet untapped, but Network Theory will eventually develop widespread applications for audio and music, especially in the areas of advertising and commercial media.

So, where to start?

First, one must realize that Music, Linguistics and Network Theory are not three distinct ideas, but areas of study themselves linked into a sort of 'small world' network.

That is, the ideas are so closely connected that any distinctions between them are separated by only a few degrees.
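To make the 'small world' intuition concrete, here is a toy sketch (the node count, neighborhood size, and number of shortcuts are arbitrary assumptions of mine): a ring of nodes where each knows only its neighbors, plus a handful of random long-range shortcuts. The shortcuts collapse the average degrees of separation, which is the defining small-world effect.

```python
import random
from collections import deque

def ring_lattice(n, k):
    # Each node links to its k nearest neighbors on each side of the ring.
    g = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            g[i].add((i + j) % n)
            g[(i + j) % n].add(i)
    return g

def avg_path_length(g):
    # Mean shortest-path length over all reachable node pairs, via BFS.
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(1)
g = ring_lattice(60, 2)
before = avg_path_length(g)

# Add a handful of random long-range shortcuts.
for _ in range(6):
    a, b = random.sample(range(60), 2)
    g[a].add(b)
    g[b].add(a)
after = avg_path_length(g)

print(before, after)  # the shortcuts shrink the degrees of separation
```

A few random links are enough; the network does not need to be densely wired for everything to sit a few hops from everything else.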

• Music and linguistics are both forms of organized sound.
• All functional sound is organized for the purpose of conveying transactional communication via linked 'small world' networks.

One beauty of music is that it presents our ears with a perfect symbiosis of language and mathematics. It is math that performs like language, in the same way that light behaves as both particle and wave.

Musicians have long formed networks, but the sounds that composers, music designers and musicians make are themselves not yet connected as they could be. Once they are, we will find that we can build intelligently designed audio carriers capable of distributing communications via systems that behave in a manner not unlike other networks made up of independent, living organisms.

When this happens, audio producers will be able to produce distribution networks that can self-program performance material which may at first appear random, but actually possess a discreet intelligence.

Consider a quiet summer night suddenly enveloped by the apparently random song of locusts, and apply that to transactional advertising messaging in a public venue, say Times Square.

If we combine artificial intelligence with network theory and draw on a portfolio of modern playback systems, such as Hypersonic Sound Technology, which can "focus sound into a tight beam for optimal sound directionality", then the result could very well be targeted communications that nobody hears, except for a limited demographic.


Words and music have always connected for me. Not just as in song, where lyric is attached to melody, but at a more fundamental cognitive level, where I'm sure both music and speech were born.

Within a given song, there are at least three data streams at play:

1. The social network formed by the musicians (which we are not so much concerned with here)
2. Lyrical content
3. Non-verbal Audio Encoded Data

No surprise, then, that if music can be thought of as a stream of Audio Encoded Data, it must be capable of acting as a carrier of information beyond the emotional response audiences may have to it.

And by Audio Encoded Data I do not mean coded or digitally compressed audio, but rather the inherent or perceived meaning conveyed by non-verbal sound. It is 'coded' because otherwise meaningless tones have to enter our brains before meaning is perceived, and not every demographic shares the same conventional ideas about sound. Like a foreign language, only those fluent in it will be able to understand it. Fortunately, others are often all too happy to provide a translation.

Of course, I can't write about a piece of music and trust that you will share my full emotional experience of it. However, after 'decoding' it I will at least be able to convey to you the information contained in a given work (not to mention my reaction to it).

For instance, I can say that the thunder was loud and frightening, and you may nod your head, which may suffice. But were you to have heard the thunder yourself, you may have found yourself running for the basement. So the sound of the thing and the description of it do not always produce the same reaction.

What I find even more interesting is the potential to embed a sound with information beyond its emotive impact, and then the potential to trigger that and a multitude of other communications via some algorithm that mimics the natural ecology.

I do not mean to inspire a method that produces subliminal advertising, but rather to consider how a system might limit its broadcast to only those who want to hear a message, need to hear a message, or are predisposed to hearing it. Eavesdroppers might as well hear the sound of the ocean or something else, because, lacking the skill set to decode a particular communication, they would receive it as innocuous audio; the message would remain legible only to the intended set.
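As an analogy (not a proposal for actual audio hardware), the 'innocuous to outsiders' idea resembles a shared-key encoding: anyone holding the cultural 'key' recovers the message, while everyone else receives what amounts to noise. A minimal sketch, where the message text and key names are invented for illustration:

```python
import random

def keyed_stream(key, length):
    # A deterministic pseudo-random byte stream seeded by the shared key.
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(length))

def encode(message, key):
    data = message.encode()
    pad = keyed_stream(key, len(data))
    return bytes(b ^ p for b, p in zip(data, pad))

def decode(signal, key):
    pad = keyed_stream(key, len(signal))
    return bytes(b ^ p for b, p in zip(signal, pad)).decode(errors="replace")

signal = encode("matinee at seven", key="shared-culture")
print(decode(signal, key="shared-culture"))   # → matinee at seven
print(decode(signal, key="eavesdropper"))     # noise: the wrong key yields garbage
```

The point of the analogy is that nothing is hidden; the same signal reaches everyone, and only the listener's prior agreement about terms determines whether it carries meaning.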

If only via anecdotal evidence, we recognize archetypal sounds, organized sounds, and this thing we call music is extraordinarily powerful because of its capacity to convey non-verbal, meaningful transmissions.

We can take Lyrical Data at face value, regardless of metaphorical intent. For the purposes of this article let us agree that the word, whether spoken or sung, means what it means.

What is interesting is not the words, but how they are employed, especially if several independent, contrapuntal lines are sung by several tandem voices.

In an earlier post I described how, as a child, I wondered why people conversed as they do, often in monotones, instead of exchanging information via song. It seemed so much more efficient to me to sing, given that two or more people employing identical keys and meters, and engaging in a contrapuntal exchange, could talk to one another and overlap one another, and yet be completely understood by themselves and others.

I still like to imagine the possibility of one day walking into a crowded room and finding fifty or more intertwining inner voices engaged in simultaneous melodic small talk, the result immediately perceivable as a linguistic fugue, a musical tapestry composed of multiple communications, and yet were I to focus my hearing in any one direction, any and all communications would be perfectly legible.

The application of contrapuntal theory to distinct musical and non-musical audio communications is also a fundamental characteristic of Green Sound, whereby multiple independent transmissions carry distinct, legible information, without degradation, or introducing noise into a given environment.
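The contrapuntal idea can be caricatured in code: several tagged 'voices' interleaved into one stream remain individually recoverable, the way a focused listener can follow one line of a fugue. A toy sketch, with invented voice names and words:

```python
# Interleave several "voices" into one stream, then recover any one of them.
voices = {
    "soprano": ["how", "are", "you"],
    "alto":    ["fine", "thanks"],
    "tenor":   ["lovely", "weather", "today"],
}

# The shared stream: events tagged with the voice that produced them,
# interleaved in time order (round-robin here, for simplicity).
stream = []
t = 0
while any(voices.values()):
    for name, words in voices.items():
        if words:
            stream.append((t, name, words.pop(0)))
            t += 1

def focus(stream, name):
    # "Focusing my hearing in one direction": filter one voice back out.
    return [word for _, who, word in stream if who == name]

print(focus(stream, "alto"))  # → ['fine', 'thanks']
```

Real audio would rely on perceptual separation (key, register, timbre, direction) rather than explicit tags, but the structural claim is the same: simultaneity need not mean interference.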

But with or without the spoken or sung word, Music is still capable of signifying language, which is obviously why it is such a powerful communications tool.


Our daily use of language –conversation– is generally non-melodic sound, and yet performed at a rate dictated by an unconscious rhythm metered out by a neurobiological clock (or metronome).

It seems that both literally and figuratively speaking, everybody is a talking drum.

Even when I read prose, depending on the density of the document, it may strike me that I am holding in my hands nothing less than a symphonic score composed of text. Not quite alive when I first begin skimming the book, eventually the writer's rhythms become apparent, until I am at last engrossed by the work. And as though performing from a score, were I to read the document aloud, the author's own breath, evident in his or her phrasing, would also become my own.

Sometimes I think what little I know about linguistics informs everything I know about music.

I'm certain that the same parts of the brain given to improvisational conversation are tapped when producing improvisational music. When musical improvisations are melodic –and therefore capable of serving as a carrier for language– they are emotive. When the melody is flatlined or removed, leaving only rhythm and harmony, the music is no less emotive, but pattern recognition replaces melody at the forefront of our consciousness and the math is allowed to dance.

I tend to group the musical works I like into three basic categories.

1. Works that induce a kind of meditative state, such as trance. I think of such music as having a light (cognitive) gravity.
2. Works that lend themselves to providing a platform for physical activity (such as dance) or moving pictures.
3. Complex works which shut down physical response, as though anything beyond tapping a finger or toe would require too much effort. I think of such music as having a heavy (cognitive) gravity.

However one thinks of it, when I hear an inspirational sound, I want to describe it. It's these verbal descriptions that reinforce the way I feel about a given work. Music may convey or elicit emotion, but Language allows me to qualify what I feel and, in turn, express those feelings; it also plays a role in allowing me to decode the music, so that others might receive the same message (via my translation, of course).

Like a given word, a given sound may be the result of an arbitrary choice. But unlike a body of words, it is listeners who usually define the meaning of a given piece of music. Certainly an independent listener may have an immediate opinion about a work, but it is the consensus that has the last word: i.e. is it good? Bad? Cool? Cheesy? Is it the 'Best Rock Song Ever'? Is it a work of genius or the work of a hack?

A given individual might certainly hold an opinion contrary to that held by a group or cluster, but we are interested in individuals who are hubs, conduits for ideas. Our interest is not in people whose minds are made up, and who use opinions to shut down discussion, but in people with flexible opinions, people who can change minds, people who provide links to others in a network. What good is art of any kind, except as a psychological exercise, if it meets its end at a dead end (of an unyielding brain)?


A potentially significant element of audio-applied or music-related Network Theory comes into play when we begin to ask how groups of people (audiences) all arrive at the same opinion at the same time. A performance ends and everybody either leaps to his or her feet and cheers, or they don't. Sometimes audience members continue applauding long after a performance is over. Hence the necessity to inform fan clusters that 'Elvis has left the building'.
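Network theory offers one classic account of how a room 'decides' at once: a threshold cascade, in which each person applauds once enough of the room already is. A toy simulation (the audience size, thresholds, and seed are arbitrary assumptions):

```python
import random

# Each audience member starts applauding once the fraction of the room
# already applauding meets his or her personal threshold.
random.seed(7)
n = 100
thresholds = [random.random() * 0.5 for _ in range(n)]  # individual reserve
clapping = set(random.sample(range(n), 5))              # a few spontaneous fans

changed = True
while changed:
    changed = False
    frac = len(clapping) / n
    for i in range(n):
        if i not in clapping and frac >= thresholds[i]:
            clapping.add(i)
            changed = True

print(len(clapping))  # how many ended up on their feet
```

Depending on how the low thresholds happen to fall, the same five enthusiasts can trigger anything from a polite smattering to a full standing ovation, which is exactly the all-or-nothing character of real audiences.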

Equally interesting is the effect applause has when it runs back through the network to the performers, sometimes causing them to produce an impromptu performance or encore, not always pre-rehearsed, often better when it isn't. At that point we witness not a performance and an audience's reaction to it, but a loop circulating through the network from performer to audience and back again.

I often think that it is this kind of audience/performer communication loop that could be employed to good effect in a brand/consumer relationship, and that sound would play an integral role in this paradigm.

But does music convey anything but emotion? Isn't that enough? My gut reaction is to say 'No' and 'Not always', but the answer to either question, of course, relies on our definition of music.

Some define music as organized sound. It is, but quite often when we listen to the natural acoustic ecology of a given environment, the result can be music to our ears. Consider the sound of the beach (not just the surf). Consider the sound of the night. Consider what Sunday morning sounds like.

In truth, music is not always sound organized by the composer. It can also be 'found' sound, which is then organized by our brains the moment we receive it.

And it may very well be that in the future we think of music as not simply organized sound, but as audio emissions that conform to a network theory.

If we include pure rhythm and pulse as examples of music, and I think we should, then a snippet of Morse Code is every bit as valid (as music) as composer Elliott Carter's 'Eight Pieces for Four Timpani'.

In fact, Morse Code might even be considered a proximal relative to both American Indian and Haitian Vaudou drumming, because (as with Morse Code), both native drumming conventions use rhythm to communicate information beyond the passive emotional response experienced by listeners who don't understand the language of the drum.
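Morse code makes the 'rhythm as data carrier' point easy to demonstrate: the same dots and dashes are pure pattern to one listener and plain text to another. A minimal sketch with a deliberately partial code table:

```python
# A fragment of the international Morse table, enough for a demo.
MORSE = {
    "A": ".-", "E": ".", "H": "....", "L": ".-..", "O": "---",
    "S": "...", "T": "-",
}
FROM_MORSE = {code: letter for letter, code in MORSE.items()}

def to_morse(text):
    # Encode text as space-separated dot/dash groups (one group per letter).
    return " ".join(MORSE[c] for c in text.upper())

def from_morse(signal):
    # Decode only works for those who hold the table -- the 'fluent' listeners.
    return "".join(FROM_MORSE[code] for code in signal.split())

rhythm = to_morse("hello")
print(rhythm)               # → .... . .-.. .-.. ---
print(from_morse(rhythm))   # → HELLO
```

To anyone without the table, the rhythm is just an urgent pulse; to anyone with it, the identical sound is a sentence. That is the drum language argument in seven lines.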

Most people understand 'the drum', at least when it comes to 'feeling' Morse Code, even if few are fluent in its language of sonic dots and dashes. We know this because there is hardly a televised news show today whose main theme isn't arranged around its urgent rhythm.

Which is to say, whether Morse Code, Indian drumming or a jingle:

What enters our ears may first be defined as music (or not), but what actually enters our brains is raw data: sometimes emotive, sometimes numerical, as often as not verbal. And if it is also designed as an information carrier whose message is not delivered as verbal communication, it can be decoded by those fluent or versed in the music designer's particular brand of cryptography.

Which is not so difficult as it seems, because what I mean by 'cryptography' is cultural agreement.

And once we have an agreement of terms in place, then the Music + Linguistics + Network Theory magic triangle acts like conceptual jet fuel.

The fundamental act of listening to music is a whole brain activity. Linguistics allows for infinitely complex communications, and Network Theory provides us with a highly effective distribution model.

Thus the power of the MLN magic triangle is that though it may be a recent mutation, born from combined DNA, it is readily adaptable, composed of both modern communications intelligence and ancient archetype.


TCT said...

hi there this is a great post. we have yet to truly tap the aspects of semantics, predictive analytics and the psychoacoustic aspects of music that coalesces these aspects into an "operating network" and shared experiences. what concerns me is that no one today really cares about the quality of the music rather the immediate shared meme of the music as a vehicle to discuss or connect upon. given that we may be able re-initiate the importance of production and recording quality through usage of various machine learned processes.

Terry O'Gara said...

A belated thanks for your comment, Theodore. It's interesting, though, because the culture appears to be surfing two trends which at first glance appear at odds with one another. Agreed, the issue of quality in audio has greatly diminished –and maybe it's because listening to music today is very often a (ear) buds to (ear) drums experience. When does anyone listen to music anymore when it must first push its way through air molecules before finding its way to your brain? It's a significant difference in the way we consume audio. At the same time, ironically, man-made sound has never been more ubiquitous; nor various species of sound designers so abundant and diverse. Which is why one can assume that the practical application of predictive analytics and psychoacoustics is inevitable.
