Showing posts with label Music and Linguistics.

Saturday, March 03, 2012

Listening to Be Bop: Staring at an Eclipse of the Sun

Photo Credit Luc Viatour
In the same way that the performance of a score differs from an improvisation, I like to consider books, scripts and prepared presentations as designed speech, but conversation as impromptu speech and therefore an unadorned chassis of thought –engine, transmission and framework included.

That's why we might take offense if someone stops us mid-sentence to correct our use of a certain word, and why we in turn accuse them of being pedantic. Perhaps the graver error is not the so-called mistake, but that the listener has given themselves away as being tone deaf to nuance.

Yes, a shared agreement on the meanings of certain words might make a given case better, but perfect usage, syntax or pronunciation does not always produce the most efficient means of conveying an idea or delivering a message. Sometimes the most engaging delivery requires one to take artistic liberties, to stretch the boundaries of language to its breaking point; but in order for such transmissions to be successful, we require an audience that can decipher new codes as they are being invented, that will forgive us errors in flow, and that might even find beauty in the way that we stumble.

Otherwise, what use poetry?

MICROEXPRESSIONS IN MUSIC

We are human, after all, and as such we frequently 'color' our codes with nuance or variation, or we deviate altogether and beg our listeners to follow us down some slippery linguistic slope, hoping the challenge is worth it. Indeed, we often employ nuance and other such tactics to embed meaning into an otherwise incomplete statement. That is, we force microexpressions, not to mention body language, to do the heavy lifting when words fail us.

And indeed, a smile is very different from a smile and a wink. A wink can change everything. A wink can transform whatever has been said into its complete opposite. Similarly, there are winks in music, too, if you can hear them. Although sometimes you need to watch the performer to catch them as they sail by the senses into the last passing moment.

Not to mention that you can't convey emotion in music if you stay perfectly in tune. You can, however, express the absence of emotion, which often seems an art unto itself in modern music.

MUTUALLY ASSISTED MUSICAL ORGANIZATION

Misunderstandings, however, are not always the fault of the listener; in fact, any communicator must take responsibility for being understood, the same way a soldier has to take responsibility for discharging friendly fire. 'I didn't mean to do it' or 'I didn't mean to say that' or 'You don't understand me' can be seen as attempts to shift blame for one's own actions onto audience members. But the musician who strives for a blue note and doesn't quite bend far enough has only himself or herself to blame if someone later interprets the action as an error or a highbrow attempt at chromaticism.

"Do you get the gist of what I'm saying/ playing?" we might ask when we realize that although our brains are on fire, ignited by the passion or inspiration of the moment, we've been talking too fast for our mouths to articulate properly.

That said, it may be that a performer plays perfectly (if we can ever use that phrase for art), but audiences perceive the result as noise. The interpolating harmonies of some Asian musics often strike Western listeners as out of tune. So it bears pointing out that the perceived organization of any given data set is not only or always the intention of a given organizer. Which is to say, sometimes I look at something and you look at something and we see different things. Happens all the time. That's why even eyewitness accounts have to be corroborated by evidence.

Not to mention that a data set –what is observed– is often subject to an overlay of meaning by the observer, an overlay that may or may not be the intention of the designer, especially if the circumstances by which the data set arrives upon our senses are random in origin.

FERTILITY GODDESS OR BARBIE DOLL?

'What does it mean?' you may ask, listening to either Stravinsky or Bebop.
'What does it mean?' you may ask, staring at a Jackson Pollock painting.
'What does it mean?' you may ask, staring at the Grand Canyon, or an eclipse of the sun.

Whatever it is, it may mean something or nothing at all. But if it has any meaning at all, such meaning is either communicated by the creator, or projected by the congregation or consumer.

Likewise, it may mean one thing for the culture and another for the commuter; one thing for the civilization and another for the one who unearths it.

For instance, is that a Barbie Doll the little girl holds in her hands? Or a fertility goddess? I'm presently inclined to think that as the centuries advance, the phrase 'Barbie Doll' will become synonymous with fertility goddess.

Wednesday, July 01, 2009

MUSIC, LINGUISTICS AND NETWORK THEORY

Music, Linguistics and Network Theory form a magical triangle.

The combined power of the three concepts is as of yet untapped, but Network Theory will eventually develop widespread applications for audio and music, especially in the areas of advertising and commercial media.

So, where to start?

First one must realize that Music, Linguistics and Network Theory are not three distinct ideas, but areas of study themselves linked into a sort of 'small world' network.

That is, the ideas are closely connected, so much so that any distinctions between them are separated by but a few degrees.

• Music and linguistics are both forms of organized sound.
• All functional sound is organized for the purpose of conveying transactional communication via linked 'small world' networks.
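The 'few degrees' idea is the small-world property from network theory, and it can be made concrete with a toy sketch. Everything in this snippet –the ring shape, the shortcut choices, the function names– is my own illustrative assumption, not anything from the post: a ring of thirty nodes in which each node knows only its neighbors needs just a handful of shortcuts before distant nodes sit a hop or two apart.

```python
from collections import deque

# Hypothetical toy network: a ring of nodes (local "neighborhoods")
# plus a few shortcuts -- the classic small-world recipe.
def ring_with_shortcuts(n, shortcuts):
    graph = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for a, b in shortcuts:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def degrees_of_separation(graph, start, goal):
    """Breadth-first search: length of the shortest path between two nodes."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None

g = ring_with_shortcuts(30, [(0, 15), (5, 22), (10, 27)])
print(degrees_of_separation(g, 0, 15))  # shortcut: 1 hop instead of 15
```

Without the shortcut, nodes 0 and 15 would be fifteen steps apart around the ring; three added links are enough to collapse such distances across the whole network.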

One beauty of music is that it presents our ears with a perfect symbiosis of language and mathematics. It is math that performs like language, in the same way that light behaves as both particle and wave.

Musicians have long formed networks, but the sounds composers, music designers and musicians make are themselves not yet connected as they could be. Once they are, we will find that we can build intelligently designed audio carriers capable of distributing communications via systems that behave in a manner not unlike other networks made up of independent, living organisms.

When this happens, audio producers will be able to produce distribution networks that can self-program performance material which may at first appear random, but actually possess a discreet intelligence.

Consider a quiet summer night suddenly enveloped by the apparently random song of locusts, and apply that to transactional advertising messaging in a public venue –Times Square, for instance.

If we combine artificial intelligence with network theory and draw on a portfolio of modern playback systems, such as Hypersonic Sound Technology, which can "focus sound into a tight beam for optimal sound directionality", then the result could very well be targeted communications that nobody hears, except for a limited demographic.

WORDS AND MUSIC AS DATA STREAMS

Words and music have always connected for me. Not just as in song, where lyric is attached to melody, but at a more fundamental cognitive level, where I'm sure both music and speech were born.

Within a given song, there are at least three data streams at play:

1. The social network comprised of musicians (which we are not so much concerned with)
2. Lyrical content
3. Non-verbal Audio Encoded Data

No surprise, then, that if music can be thought of as a stream of Audio Encoded Data, it must be capable of acting as a carrier of information, beyond the emotional response audiences may get from it.

And by Audio Encoded Data I do not mean coded or digitally compressed audio, but rather the inherent or perceived meaning conveyed by non-verbal sound. It is 'coded' because otherwise meaningless tones have to enter our brains before meaning is perceived, and not every demographic shares the same conventional ideas about sound. Like a foreign language, only those fluent in it will be able to understand it. Fortunately, others are often all too happy to provide a translation.

However, I can't write about a piece of music and trust you will share my full emotional experience of it. After 'decoding', though, I will at least be able to convey to you information contained in a given work (not to mention my reaction to it).

For instance, I can say that the thunder was loud and frightening, and you may nod your head, which may suffice. But were you to have heard the thunder yourself, you may have found yourself running for the basement. So the sound of the thing and the description of it do not always produce the same reaction.

But what I find further interesting, is the potential to embed a sound with information beyond its emotive impact, and then the potential to trigger that and a multitude of other communications via some algorithm that mimics the natural ecology.

I do not mean to inspire a method that produces subliminal advertising, but rather to consider how a system might limit its broadcast to only those who want to hear a message, need to hear a message, or are predisposed to hearing it. And all eavesdroppers may as well think that they hear the sound of the ocean or something else, because for all intents and purposes, they don't have the skill set to decode a particular communication, which should therefore render it innocuous audio to all those beyond a given set.

Even if only via anecdotal evidence, we recognize archetypal sounds and organized sounds; and this thing we call music is extraordinarily powerful because of its capacity to convey non-verbal, meaningful transmissions.

We can take Lyrical Data at face value, regardless of metaphorical intent. For the purposes of this article let us agree that the word, whether spoken or sung, means what it means.

What is interesting is not the words, but how they are employed, especially if several independent, contrapuntal lines are sung by several tandem voices.

In an earlier post I described how, as a child, I wondered why people conversed as they do, often in monotones, instead of exchanging information via song. It seemed so much more efficient to me to sing, given that two or more people employing identical keys and meters, and engaging in a contrapuntal exchange, could talk to one another and overlap one another, and yet be completely understood by themselves and others.

I still like to imagine the possibility of one day walking into a crowded room and finding fifty or more intertwining inner voices engaged in simultaneous melodic small talk, the result immediately perceivable as a linguistic fugue, a musical tapestry composed of multiple communications, and yet were I to focus my hearing in any one direction, any and all communications would be perfectly legible.

The application of contrapuntal theory to distinct musical and non-musical audio communications is also a fundamental characteristic of Green Sound, whereby multiple independent transmissions carry distinct, legible information, without degradation, or introducing noise into a given environment.
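The claim behind Green Sound –multiple independent transmissions sharing one environment without degrading one another– resembles what engineers call frequency-division multiplexing. As a minimal sketch (the carrier frequencies, symbol length and names below are my own assumptions, not a Green Sound specification), two bit streams can ride two tones through one mixed signal and still be read out separately:

```python
import math

RATE, SYMBOL = 8000, 400  # samples/sec, samples per bit (illustrative values)
CARRIERS = {"voice_a": 440.0, "voice_b": 1000.0}  # Hz; far enough apart to separate

def modulate(bits, freq):
    # On-off keying: a 1 bit is a burst of the carrier tone, a 0 bit is silence.
    out = []
    for i, bit in enumerate(bits):
        for n in range(SYMBOL):
            t = (i * SYMBOL + n) / RATE
            out.append(bit * math.sin(2 * math.pi * freq * t))
    return out

def demodulate(signal, freq, nbits):
    # Correlate each symbol window against the carrier; strong energy => bit 1.
    bits = []
    for i in range(nbits):
        window = signal[i * SYMBOL:(i + 1) * SYMBOL]
        acc = sum(s * math.sin(2 * math.pi * freq * ((i * SYMBOL + n) / RATE))
                  for n, s in enumerate(window))
        bits.append(1 if acc > SYMBOL / 4 else 0)
    return bits

a, b = [1, 0, 1, 1], [0, 1, 1, 0]
mixed = [x + y for x, y in zip(modulate(a, CARRIERS["voice_a"]),
                               modulate(b, CARRIERS["voice_b"]))]
print(demodulate(mixed, CARRIERS["voice_a"], 4))  # [1, 0, 1, 1]
print(demodulate(mixed, CARRIERS["voice_b"], 4))  # [0, 1, 1, 0]
```

Both messages occupy the same "air" at once, yet each receiver, tuned to its own carrier, recovers its own stream intact –a crude stand-in for the fifty intertwining voices of the linguistic fugue.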

But with or without the spoken or sung word, Music is still capable of signifying language, which is obviously why it is such a powerful communications tool.

SPEECH AS MUSIC

Our daily use of language –conversation– is generally non-melodic sound, and yet performed at a rate dictated by an unconscious rhythm metered out by a neurobiological clock (or metronome).

It seems that both literally and figuratively speaking, everybody is a talking drum.

Even when I read prose, depending on the density of the document, it may strike me that I am holding in my hands nothing less than a symphonic score composed of text. Not quite alive when I first begin skimming the book, eventually the writer's rhythms become apparent, until I am at last engrossed by the work. And as though performing from a score, were I to read the document aloud, the author's own breath, evident in his or her phrasing, would also become my own.

Sometimes I think what little I know about linguistics informs everything I know about music.

I'm certain that the same parts of the brain given to improvisational conversation are tapped when producing improvisational music. When musical improvisations are melodic –and therefore capable of serving as a carrier for language– they are emotive. When the melody is flat lined or removed, leaving only rhythm and harmony, the music is no less emotive, but pattern recognition replaces melody at the forefront of our consciousness and the math is allowed to dance.

I tend to group the musical works I like into three basic categories.

1. Works that induce a kind of meditative state such as trance. I think of such music as having a light (cognitive) gravity.
2. Works that lend themselves to providing a platform for physical activity (such as dance) or moving pictures.
3. Complex works which shut down physical response, as though doing anything beyond tapping a finger or toe requires too much effort. I think of such music as having a heavy (cognitive) gravity.

However one thinks of it, when I hear an inspirational sound, I want to describe it. It's these verbal descriptions that reinforce the way I feel about a given work. Music may convey or elicit emotion, but language allows me to qualify what I feel and, in turn, express those feelings; it also plays a role in allowing me to decode the music, so that others might receive the same message (via my translation, of course).

Like a given word, a given sound may be the result of an arbitrary choice. But unlike a body of words, it is listeners who usually define the meaning of a given piece of music. Certainly an independent listener may have an immediate opinion about a work, but it is the consensus that has the last word: i.e. is it good? Bad? Cool? Cheesy? Is it the 'Best Rock Song Ever'? Is it a work of genius or the work of a hack?

A given individual might certainly hold an opinion contrary to that held by a group or cluster, but we are interested in individuals who are hubs, conduits for ideas. Our interest is not people whose minds are made up, and who use opinions to shut down discussion, but people with flexible opinions, people who can change minds, people who provide links to others in a network. What good is art of any kind, except as a psychological exercise, if it meets its end at a dead end (of an unyielding brain)?

MUSIC THEORY VS. NETWORK THEORY


A potentially significant element of audio-applied or music-related Network Theory comes into play when we begin to inquire how groups of people (audiences) all arrive at the same opinion at the same time. A performance ends and everybody either leaps to his or her feet and cheers, or they don't. Sometimes audience members continue an act of applause long after a performance is over. Hence the necessity to inform fan clusters that 'Elvis has left the building'.

Equally interesting is the effect applause has when it runs back through the network to the performers, sometimes causing them to produce an impromptu performance or encore, not always pre-rehearsed, often better when it isn't. At that point we witness not a performance and an audience's reaction to it, but a loop circulating through the network from performer to audience and back again.

I often think that it is this kind of audience/performer communication loop that could be employed to good effect in a brand/consumer relationship, and that sound would play an integral role in this paradigm.

But does music convey anything but emotion? Isn't that enough? My gut reaction is to say 'No' and 'Not always', but the answer to either question, of course, relies on our definition of music.

Some define music as organized sound. It is, but quite often when we listen to the natural acoustic ecology of a given environment, the result can be music to our ears. Consider the sound of the beach (not just the surf). Consider the sound of the night. Consider what Sunday morning sounds like.

In truth, music is not always sound organized by the composer. It can also be 'found' sound, which is then organized by our brains the moment we receive it.

And it may very well be that in the future we think of music as not simply organized sound, but as audio emissions that conform to a network theory.

If we include pure rhythm and pulse as examples of music, and I think we should, then a snippet of Morse Code is every bit as valid (as music) as composer Elliott Carter's 'Eight Pieces for Four Timpani'.

In fact, Morse Code might even be considered a proximal relative to both American Indian and Haitian Vaudou drumming, because, as with Morse Code, both drumming conventions use rhythm to communicate information beyond the passive emotional response experienced by listeners who don't understand the language of the drum.

Most people understand 'the drum', at least when it comes to 'feeling' Morse Code, even if they aren't fluent in its language of sonic dots and dashes. We know that because there appears today not a single televised news show whose main theme isn't arranged around its urgent rhythm.

Which is to say, whether Morse Code, Indian drumming or a jingle:

What enters our ears may be first defined as music (or not), but what actually enters our brains is raw data: sometimes emotive, sometimes numerical, as often as not verbal. And if it is also designed as an information carrier, whose message is not produced as verbal communications, it is capable of being decoded by those fluent or versed in the music designer's particular brand of cryptography.

Which is not so difficult as it seems, because what I mean by 'cryptography' is cultural agreement.
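If 'cryptography' here means cultural agreement, the point can be made literal with a sketch: both parties hold the same codebook, and decoding is nothing more than a table lookup. (The partial Morse table and helper names below are illustrative; real Morse covers the full alphabet plus timing conventions.)

```python
# A shared codebook is the 'cultural agreement': both sides hold the same table.
MORSE = {"S": "...", "O": "---", "M": "--", "U": "..-", "I": "..", "C": "-.-."}
DECODE = {v: k for k, v in MORSE.items()}

def to_rhythm(text):
    """Encode letters as dot/dash patterns, one pattern per letter."""
    return " ".join(MORSE[ch] for ch in text)

def from_rhythm(pattern):
    """Anyone holding the same table can recover the message."""
    return "".join(DECODE[sym] for sym in pattern.split())

signal = to_rhythm("MUSIC")
print(signal)               # -- ..- ... .. -.-.
print(from_rhythm(signal))  # MUSIC
```

To anyone without the table, the pattern is just an urgent rhythm; to anyone with it, it is raw data become words.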

And once we have an agreement of terms in place, then the Music + Linguistics + Network Theory magic triangle acts like conceptual jet fuel.

The fundamental act of listening to music is a whole brain activity. Linguistics allows for infinitely complex communications, and Network Theory provides us with a highly effective distribution model.

Thus the power of the MLN magic triangle is that though it may be a recent mutation, born from combined DNA, it is readily adaptable, composed of both modern communications intelligence and ancient archetype.

Friday, January 04, 2008

Sing Me a Song (or Contrapuntal Conversation)

Birds sing and so do dogs and whales. So when I was a little kid I wondered why humans spoke with a 'conversational voice', a 'speaking tone', instead of singing communications to one another via song.

I may have been reprimanded for an interruption while adults were speaking. I may have been asked to 'tone it down'.

Silence may be golden, but at some point, the musician in the child realized that contrapuntal conversation would allow for the simultaneous flow of conversation, including interruptive comments, while never once forsaking clarity.

In fact, singing allows two or more people to deliver a message at the same time, assuming they follow a few rules regarding key, rhythm and harmony. Given this scenario, no one would ever have to tone it down, ever, and all participants in a conversation could voice their thoughts and concerns while receiving and comprehending content delivered by others, all at the same time!

Because if one is a singer, one doesn’t have to pause in order to give someone else the floor, but rather –as you have no doubt witnessed in opera and musical theater– several people can communicate completely different thoughts and yet remain intelligible, if they deliver their words via separate, individual and harmonized melodies.

Naturally this would require each and every person to work on their pitch, timing and tone as they grew from childhood to adulthood, but wouldn't we be a better species for it? Not to say musicality would bring peace, not when some of the most memorable music is born of strife.

Although there are instances when musical communications really could diminish the potential for conflict. Instead of telling noisy children or movie theater patrons to shut up or be quiet, you would simply ask the offending party to harmonize or vocalize pianissimo.

Overlapping musical lines would result in interesting situations: At Presidential or political debates, all the candidates or participants could talk at once and be fully understood by all. Moreover, the events themselves would last mere minutes, and might actually be enjoyable, even entertaining as well as informative.

Instead of race or gender being issues, the press and pundits might get in trouble for suggesting that one candidate or another is more or less deserving of office because the person in question either has or doesn't have rhythm.

A circumstance might even arise where we all suspect one of the major party players uses Autotune or lip syncs his or her arguments on stage.

Now, that would be a scandal!

Can you imagine war? Not only would we have Morse code, but spies recruited out of musical theater programs and ballet academies would also be trained in tap code, passing secrets across the dance floor from underground hoofers and Special Tango Forces to covert drummers until the message could at last be crooned or beat boxed to allied command.

Even more interesting, perhaps, how would transcripts appear in a newspaper?

I wonder if the musicality of a given orator or editor would influence my conclusions about a given argument, as much as the opinion itself would.

Regardless, I love the idea of perhaps being able to sit down and read two opposing op-ed pieces presented as a fugue, and then discussing the contents with friends, presenting our own ideas as variations on the theme.

Tuesday, August 01, 2006

AI 01: Aural Intelligence

Scientists studying bird song believe that the process by which birds learn to sing may be relevant to understanding how people process speech.

I believe the reverse is also true in humans. That is, how and what we learn to speak has a direct causal effect on our relationship to music.


When I consider how music may have shaped my own ability to process sound –whether as a human being generally, or as a unique individual– the first thing I consider is the music and sounds I was exposed to in early childhood.

Before I continue, let me offer a definition of the term 'ability to process sound'. I think of it as the summation of the following actions:

A) Hearing
B) Listening (Focusing on specific incoming information)
C) Processing/Understanding
D) Executing an appropriate physical and often vocalized response

Early childhood naturally represents a critical period of human cognitive development. Researchers believe that by the time a child is five years old, they will have accumulated a vocabulary of 2,500 to 5,000 words.

It has occurred to me that not only does this finding exemplify a fact of human language development, but it also indicates a more general and innate ability in all people to comprehend and communicate improvised, sophisticated patterns of sound (conversation), and from a very young age onward. As a skill, this strikes me as not unlike what I would describe as basic musicianship.

It follows then, congenital deafness notwithstanding, that as dependent on the ear as learning language is, language may in fact turn out to be a critical component in the development of musicianship.

I’m therefore also inclined to believe that the music I heard and learned as a child had a primary effect on my musical ear, whereas the music that captured my ear as a teenager –rock, for instance– had a secondary or even tertiary, and primarily stylistic, effect. Not a negligible effect or influence, but neither a dominating one.

First there is the essential self, and all its birthright gifts, which some believe –I do– contain some fundamental musical information (see Ur-Song). Then there is Knowledge: what we learn from the environment, and, it follows, what we hear in it. And then there is Stylistic Choice: how we choose to distribute to others the knowledge we've acquired. However, whether or not a role model presents itself, our brains will figure out their own way to distribute that knowledge, endowing each of us with a sort of innate style all our own.

And that's why classical musicians playing jazz and European musicians playing Asian melodies sound wonky to even the least discerning ear. I suspect it may also be why an adult who learns a foreign language can rarely –if ever– completely hide their original accent.

You are what you are: An American in Paris, maybe? Everyone can tell, baby.

The clothes don't make the soul. Nor does a new coat of paint change where the kitchen is. You can learn to dance, but who taught you to walk like that? –Now I'm on a riff, but does anyone have a better metaphor?

If it still sounds convoluted, allow me to define yet another term.

I define ‘Influence’ as:

A compelling, but nevertheless indirect persuasion upon one’s behavioral patterns. A shot of vodka and a beautiful woman might influence my behavior, but neither precipitates my core personality.


Although, my father may reasonably disagree with that assessment.


To read the complete Aural Intelligence Article, follow the links:

AI 01: Aural Intelligence
AI 02: AI Quotient
AI 03: Aural Stimuli & Influences


Thursday, April 07, 2005

The Lizard King

Capped off my first performance at CBGB's 313 Gallery with the following original song, The Lizard King.

The song itself is partly inspired by Jim Morrison and partly the product of chance. Each individual line –and the odd couplet– in the song once belonged to other songs I'd written over the previous 20 years.

While sitting at my desk in 1998, waiting for inspiration to hit, I stared down at my feet, and among the scattered pages of lyrics I picked out a line here, a line there. It began as a lark, really. But the result was a unified composition which incredibly makes some kind of sense. I guess today we'd call it a mashup.

Also the product of synchronicity, the choruses each begin with an allusion to the Virgin Mary ('Dolores Our Lady of Sorrows', 'Lolita' and 'Ave Maria').

Somehow, it all just works.


The Lizard King

By Terry O’Gara
©1998

And so the lizard king will not return
And we will never know
And we will never learn
What never was will never be
I guess I was one of those that never did believe

In anything, in anything but myself
And I don’t want your help
I know all I need to know of love
And that’s enough
I want to feel the pain
I think it’s better this way

Dolores Our Lady of Sorrows
Was seen at a truck stop in Ohio
But you had to stand on the outside looking in
Well isn’t that typical, hypocritical
But we all gave up a hymn

Singing ‘Holy, Holy
Never let go of me’
And ‘Save me, save me’
From the fric and the frac
Oh if only I believed
In all of this crap

Lolita Our Lady of Toenails
Sits in the grass while I sip my cocktails
Shivinanda says sit still and focus
But I twitch like a hundred thousand locusts

Meanwhile a rock’n’roll angel eats her cocoa puffs in heaven
While a video camera swallows your every move
Desperately you search the house for meaning
While she sits alone in her room

Ave Maria Oh baby
Ave Maria my love
How is it everything we put our faith in
Couldn’t save us in the end

And so the lizard king will not return
And we will never know
And we will never learn
What never was will never be
I guess I was one of those that never did believe–

In anything.

Tuesday, December 31, 2002

BLISTER MEDIA: 10.15.98 -12.31.02

We'd worked together since 1991, but didn't begin our formal partnership until the beginning of September 1998. Our venture remained unnamed until mid-October. 'Blister' was not our first choice, but in fact our seventh. It seemed that no matter what eclectic name we thought up, someone somewhere already had a music production studio with the same name. Legal fees for formal trademark searches were burning a hole in our launch budget. Eventually we came to a point of exasperation where it was like, let's just use the first thing available.

If I had to do it over again, I would have gone into business with our first choice and simply waited for a cease and desist letter to arrive from an offended party. That way, we could have hit the ground running faster and had a little more cash in our pocket when we did –and honestly, I think it would 100% have been cheaper.

We founded our business with a mission to become the Interactive medium's premier audio supplier –delivering not just sound, as was typical for audio shops, but the applicable coding to ensure the highest quality integration and playback. In the spirit of the burgeoning dot com age, we added the identifier 'Media'.

Other shops provided 'Recording', 'Mixing', 'Music', 'Music and Sound Design', or simply 'Audio'. I knew we were different because I knew our approach to the process was different. Dramatically so. There wasn't another commercial shop I could think of that considered the issues programmers faced during the construction of these new Interactive experiences.

In fact, there were only two other ventures I could find in those early days that could describe themselves as independent audio production studios defining themselves as leaders in interactive entertainment and the new emerging Internet economy. One was Cathryn Ramin's Team Audio project out of San Francisco, whose roster of talent included Ron Ramin and X-Files composer Mark Snow, and the other was Team Fat, a game audio collective led by George Sanger out of Austin.

Team Audio and Team Fat, and then Michael and I at Blister –three little boutiques that had figured something out before anyone else, and that in my mind were now racing to corner the market for interactive audio asset production.

I did not think the market was yet big enough for the three of us, so Ramin's and Sanger's ventures made me feel a bit like Lex Luthor in our studio penthouse in New York City. Hmm, how best to defeat these two teams whose combined roster read like a league of legends? If I had been smarter, I might have suggested a collaboration, which no doubt would have led to partnerships and eventual world domination by Fat Blistering Audio today.

Either way, I still thought, 'We're in darn good company!'

At any rate, I'd like to believe that I brought to the table an understanding of market considerations coupled with an ear for maximizing the entertainment value of any given sonic experience, while Michael was a master of interactive composition; in a pre-mp3-standard world, he had a magical way with compression that ensured our audio sounded super streaming off the web.

As for branding, I wanted our message to indicate we carved beauty from chaos. I wanted to say 'disciplined but experimental'. I wanted to signal to our Madison Alley (Madison Ave + Silicon V/Alley, i.e. the ad tech community) neighbors that we were one of them, and not simply a place where bands hung out and made demos, or the same kind of tired old studio that churned out ditties from million dollar rooms. We were digital audio pioneers, one step ahead of the pack in understanding that interactive game audio techniques were essential to the next wave of advertising and entertainment assets. We eventually settled on the tag 'Music Noise Code'.

I love language as much as I love music –both are different manifestations of communicable sound– so much so that memorable branding and marketing strike my ear as linguistic melodies.

Of course, I was quite pleased that when Mix Magazine's Internet Audio supplement published their April 2001 story about us (Interactive Composition Comes of Age), they even used the tag as section headers to tell three aspects of our story: Music, Noise and Code.

Blister Media – Music. Noise. Code

I still think it rocks.