Tuesday, December 01, 2009

Sound of the Year – 2009: Auto-Tune

Photo Credit: Clyde Robinson
A little over ten years ago, pop singer Cher released her 1998 hit, “Believe”. It was remarkable not simply for its success on the charts, but because it presented the public with the first widely heard instance of an effect known by the name its manufacturer gave the device: Auto-Tune.

Since then other devices which produce similar results have entered the market, but they all serve the same general purpose: pitch correction.

Artificial pitch correction by studio technicians is not new, but its ease of deployment was. Prior to the introduction of these devices and software solutions, one way producers and engineers would correct a wobbly vocal performance was to sample an entire vocal line from analog tape, chop each line up into its component words or even syllables, and then manually trigger the vocal performance back onto tape, riding the pitch modulation wheel of a synthesizer in real time to align and conform a pitchy performance into a stable, recognizable key.

The result was a pitch-perfect performance –the equivalent of Photoshop retouching for singers. Except that over time, what started off as a transparent, corrective effect became a highly recognizable novelty when pushed to extremes, slamming off-key performances onto a rigid pitch grid that in turn transformed an organic performance into something resembling a robotic vocoder effect. Less retouched, so to speak, and more re-made.
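At its most extreme, that "rigid pitch grid" can be sketched in a few lines: detect a frequency, measure its distance in semitones from a reference, and snap it to the nearest equal-tempered note. The sketch below shows hard quantization only; real pitch correctors like Auto-Tune also offer a retune-speed control that eases a note toward the grid rather than snapping it instantly.

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz: float) -> float:
    """Snap a detected frequency to the nearest equal-tempered semitone.

    This is the 'rigid pitch grid' at full strength: no retune-speed
    smoothing, so every note lands exactly on the grid.
    """
    # Distance from A4 in (possibly fractional) semitones
    semitones = 12 * math.log2(freq_hz / A4)
    # Quantize to the nearest whole semitone, then convert back to Hz
    return A4 * 2 ** (round(semitones) / 12)

# A vocal note drifting sharp of A4 gets pulled back onto the grid
print(snap_to_semitone(452.0))  # → 440.0 (snapped down to A4)
```

Applied gently, this is the "retouching" the post describes; applied to every note with zero smoothing, it produces the robotic, vocoder-like effect.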

Those who think musical performances should represent reality despise the effect, of course, suggesting those singers who use it can't actually sing. However, young music fans love the novel effect. There is a semiotic paper to be written here on why a new generation prefers its singers to sound synthesized and soulless.

Are we entering an age when we feel less than human? Or, to attempt a positive spin: with mobile phones stuck to our ears and screens increasingly providing us a 24/7 pixelated lens on the world, maybe we actually feel more like cyborgs than humans. If so, it is no surprise that the general public's taste in music has evolved to reflect this transformation of our psyche and the contemporary digital zeitgeist.

Whatever the reason, the use of Auto-Tune, whether employed only for pitch correction or to distort and otherwise transform a performance, has become so ubiquitous that it is arguably now nothing less than the sound of our times.

And that’s why Auto-Tune is the 2009 Critical Noise Sound of the Year.

+ + +

HOW THE SOUND OF THE YEAR IS SELECTED:

The Critical Noise Sound of the Year goes to that sound source, event, entity, happening or concept which so effectively produces wide response and reaction, whether intentional or not, such that it stirs collective emotion, inspires discussion, incites action, or otherwise lends itself to cultural analysis and resonates across the globe.

Prior Sound of the Year winners include The Housing Implosion (2008) and Mother Nature's Howl (2005).

Sunday, August 09, 2009

Can Jazz Be Saved?

In the August 8, 2009 issue of the Wall Street Journal, Terry Teachout, the Journal's drama critic, asks 'Can Jazz Be Saved?'

Teachout begins:

"In 1987, Congress passed a joint resolution declaring jazz to be “a rare and valuable national treasure.” Nowadays the music of Louis Armstrong, Duke Ellington, Charlie Parker and Miles Davis is taught in public schools, heard on TV commercials and performed at prestigious venues such as New York’s Lincoln Center, which even runs its own nightclub, Dizzy’s Club Coca-Cola.

Here’s the catch: Nobody’s listening.
"

As it happens, I addressed the same topic in March 2007, but arrived at a different conclusion than Mr. Teachout. If jazz is no longer at the center of our popular culture, or 'living large' as they say, Jazz is nevertheless alive, and it's actually living quite well (Jazz is Dead! Jazz is Dead! Long Live Jazz!, Saturday, March 03, 2007).

Interestingly, Teachout and I do appear to agree on at least one point: Jazz is everywhere. So, if it is true that no one is listening to it, then perhaps the apparent collective phenomenon of selective hearing on such a grand scale is in fact the price of ubiquity.

In other words, the cost of going global, and winning universal appeal, as Jazz undoubtedly has, is ironically measured by a sort of cultural transparency.

Does that sound absurd? Consider then how Rock, in the sixties, was once literally thought of as revolutionary by its fans. Today those same 'revolutionary' songs are being sung by children who think them merely a fun way to pass the time.

Or consider symphonic music, which we are also told is 'dead'. Dead as it may be, it conversely lives larger than life in cinema, which arguably makes it more popular than it has ever been. And yet, however popular, symphonic music as a genre holds little significance at the gravitational center of the pop universe.

But great trends and their resulting works do not end up devoured by a cultural black hole, never to further evolve, entertain, be heard of or performed again. Rather, they remain like points of light in our consciousness.

Which is to say, though such things may always be admired, they do not always continue to excite or inspire us, not with any consistent measure, anyway. But any inherent beauty can still be tapped, and can even become transforming, if only we choose to linger with conscious intention, because by now these things are so much a part of us that what was once external and novel has been internalized so as to become us.

The jitterbug becomes old hat, but only once it has been assimilated. So does a seventh chord, or E7#9 for that matter.

It happens with people we love; it happens with music we love.

This is part of being human:

Eventually we all somehow learn to take for granted something even as massively great as a burning star –not to mention Bach, Ballet, The Beatles, too!

–and certainly Jazz.

Wednesday, July 01, 2009

MUSIC, LINGUISTICS AND NETWORK THEORY

Music, Linguistics and Network Theory form a magical triangle.

The combined power of the three concepts is as yet untapped, but Network Theory will eventually develop widespread applications for audio and music, especially in the areas of advertising and commercial media.

So, where to start?

First one must realize that Music, Linguistics and Network Theory are not three distinct ideas, but areas of study themselves linked into a sort of 'small world' network.

That is, the ideas are closely connected, so much so that any distinctions between them are separated by but a few degrees.

• Music and linguistics are both forms of organized sound.
• All functional sound is organized for the purpose of conveying transactional communication via linked 'small world' networks.
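The 'small world' claim –that a few links collapse the degrees of separation– can be made concrete with a toy model. This sketch is illustrative, not drawn from the post: a ring of nodes where each connects only to its neighbors has long average paths, and adding just two long-range 'shortcut' links shrinks them, which is the signature property of a small-world network.

```python
from collections import deque

def avg_path_length(n, edges):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total, pairs = 0, 0
    for start in range(n):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    queue.append(nb)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

n = 30
ring = [(i, (i + 1) % n) for i in range(n)]  # a 'regular' world: neighbors only
shortcuts = [(0, 15), (7, 22)]               # a few long-range links
print(avg_path_length(n, ring))              # long average path
print(avg_path_length(n, ring + shortcuts))  # just two shortcuts shrink it
```

The same intuition underlies the claim that Music, Linguistics and Network Theory sit only a few conceptual hops apart.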

One beauty of music is that it presents our ears with a perfect symbiosis of language and mathematics. It is math that performs like language, in the same way that light behaves as both particle and wave.

Musicians have long formed networks, but the sounds composers, music designers and musicians make are themselves not yet connected as they can be. Once they are, we will find that we can build intelligently designed audio carriers capable of distributing communications via systems that behave in a manner not unlike other networks made up of independent, living organisms.

When this happens, audio producers will be able to produce distribution networks that can self-program performance material which may at first appear random, but actually possess a discreet intelligence.

Consider a quiet summer night suddenly enveloped by the apparently random song of locusts, and apply that to transactional advertising messaging in a public venue, say Times Square.

If we combine artificial intelligence with network theory and draw on a portfolio of modern playback systems, such as Hypersonic Sound Technology, which can "focus sound into a tight beam for optimal sound directionality", then the result could very well be targeted communications that nobody hears, except for a limited demographic.

WORDS AND MUSIC AS DATA STREAMS

Words and music have always connected for me. Not just as in song, where lyric is attached to melody, but at a more fundamental cognitive level, where I'm sure both music and speech were born.

Within a given song, there are at least three data streams at play:

1. The social network comprised of musicians (which we are not so much concerned with)
2. Lyrical content
3. Non-verbal Audio Encoded Data

No surprise then that if music can be thought of as a stream of Audio Encoded Data, it must be capable of acting as a carrier of information, beyond the emotional response audiences may get from it.

And by Audio Encoded Data I do not mean coded or digitally compressed audio, but rather the inherent or perceived meaning conveyed by non-verbal sound. It is 'coded' because otherwise meaningless tones have to enter our brains before meaning is perceived, and not every demographic shares the same conventional ideas about sound. Like a foreign language, only those fluent in it will be able to understand it. Fortunately, others are often all too happy to provide a translation.

Of course, I can't write about a piece of music and trust you will share my full emotional experience of it. However, after 'decoding' I will at least be able to convey to you information contained in a given work (not to mention my reaction to it).

For instance, I can say that the thunder was loud and frightening, and you may nod your head, which may suffice. But were you to have heard the thunder yourself, you may have found yourself running for the basement. So the sound of the thing and the description of it do not always produce the same reaction.

But what I find further interesting is the potential to embed a sound with information beyond its emotive impact, and then the potential to trigger that and a multitude of other communications via some algorithm that mimics the natural ecology.

I do not mean to inspire a method that produces subliminal advertising, but rather to consider how a system might limit its broadcast to only those who want to hear a message, need to hear a message, or are predisposed to hearing it. And all eavesdroppers may as well think that they hear the sound of the ocean or something else, because for all intents and purposes, they don't have the skill set to decode a particular communication, which should therefore render it innocuous audio to all those beyond a given set.

If only via anecdotal evidence, we recognize archetypal sounds, organized sounds, and this thing we call music is extraordinarily powerful because of its capacity to convey non-verbal, meaningful transmissions.

We can take Lyrical Data at face value, regardless of metaphorical intent. For the purposes of this article let us agree that the word, whether spoken or sung, means what it means.

What is interesting is not the words, but how they are employed, especially if several independent, contrapuntal lines are sung by several tandem voices.

In an earlier post I described how, as a child, I wondered why people conversed as they do, often in monotones, instead of exchanging information via song. It seemed so much more efficient to me to sing, given that two or more people employing identical keys and meters, and engaging in a contrapuntal exchange, could talk to one another and overlap one another, and yet be completely understood by themselves and others.

I still like to imagine the possibility of one day walking into a crowded room and finding fifty or more intertwining inner voices engaged in simultaneous melodic small talk, the result immediately perceivable as a linguistic fugue, a musical tapestry composed of multiple communications, and yet were I to focus my hearing in any one direction, any and all communications would be perfectly legible.

The application of contrapuntal theory to distinct musical and non-musical audio communications is also a fundamental characteristic of Green Sound, whereby multiple independent transmissions carry distinct, legible information, without degradation, or introducing noise into a given environment.

But with or without the spoken or sung word, Music is still capable of signifying language, which is obviously why it is such a powerful communications tool.

SPEECH AS MUSIC

Our daily use of language –conversation– is generally non-melodic sound, and yet performed at a rate dictated by an unconscious rhythm metered out by a neurobiological clock (or metronome).

It seems that both literally and figuratively speaking, everybody is a talking drum.

Even when I read prose, depending on the density of the document, it may strike me that I am holding in my hands nothing less than a symphonic score composed of text. Not quite alive when I first begin skimming the book, eventually the writer's rhythms become apparent, until I am at last engrossed by the work. And as though performing from a score, were I to read the document aloud, the author's own breath, evident in his or her phrasing, would also become my own.

Sometimes I think what little I know about linguistics informs everything I know about music.

I'm certain that the same parts of the brain given to improvisational conversation are tapped when producing improvisational music. When musical improvisations are melodic –and therefore capable of serving as a carrier for language– they are emotive. When the melody is flat lined or removed, leaving only rhythm and harmony, the music is no less emotive, but pattern recognition replaces melody at the forefront of our consciousness and the math is allowed to dance.

I tend to group the musical works I like into three basic categories.

1. Works that induce a kind of meditative state such as trance. I think of such music as having a light (cognitive) gravity.
2. Works that lend themselves to providing a platform for physical activity (such as dance) or moving picture.
3. Complex works which shut down physical response, as though doing anything beyond tapping a finger or toe would require too much effort. I think of such music as having a heavy (cognitive) gravity.

However one thinks of it, when I hear an inspirational sound, I want to describe it. It's these verbal descriptions that reinforce the way I feel about a given work. Music may convey or elicit emotion, but Language allows me to qualify what I feel and in turn, express those feelings, and play a role in allowing me to decode the music, so that others might also receive the same message (via my translation, of course).

Like a given word, a given sound may be the result of an arbitrary choice. But unlike a body of words, it is listeners that usually define the meaning of a given piece of music. Certainly an independent listener may have an immediate opinion about a work, but it is the consensus that has the last word: i.e. is it good? Bad? Cool? Cheesy? Is it the 'Best Rock Song Ever'? Is it a work of genius or the work of a hack?

A given individual might certainly hold an opinion contrary to that held by a group or cluster, but we are interested in individuals who are hubs, conduits for ideas. Our interest is not people whose minds are made up, and who use opinions to shut down discussion, but people with flexible opinions, people who can change minds, people who provide links to others in a network. What good is art of any kind, except as a psychological exercise, if it meets its end at a dead end (of an unyielding brain)?

MUSIC THEORY VS. NETWORK THEORY


A potentially significant element of audio-applied or music-related Network Theory comes into play when we begin to inquire how groups of people (audiences) all arrive at the same opinion at the same time. A performance ends and everybody either leaps to his or her feet and cheers, or they don't. Sometimes audience members continue applauding long after a performance is over. Hence the necessity to inform fan clusters that 'Elvis has left the building'.

Equally interesting is the effect applause has when it runs back through the network to the performers, sometimes causing them to produce an impromptu performance or encore, not always pre-rehearsed, often better when it isn't. At that point we witness not a performance and an audience's reaction to it, but a loop circulating through the network from performer to audience and back again.

I often think that it is this kind of audience/performer communication loop that could be employed to good effect in a brand/consumer relationship, and that sound would play an integral role in this paradigm.

But does music convey anything but emotion? Isn't that enough? My gut reaction is to say 'No' and 'Not always', but the answer to either question, of course, relies on our definition of music.

Some define music as organized sound. It is, but quite often when we listen to the natural acoustic ecology of a given environment, the result can be music to our ears. Consider the sound of the beach (not just the surf). Consider the sound of the night. Consider what Sunday morning sounds like.

In truth, music is not always sound organized by the composer. It can also be 'found' sound, which is then organized by our brains the moment we receive it.

And it may very well be that in the future we think of music as not simply organized sound, but as audio emissions that conform to a network theory.

If we include pure rhythm and pulse as examples of music, and I think we should, then a snippet of Morse Code is every bit as valid (as music) as composer Elliott Carter's 'Eight Pieces for Four Timpani'.

In fact, Morse Code might even be considered a proximal relative to both American Indian and Haitian Vaudou drumming, because (as with Morse Code), both native drumming conventions use rhythm to communicate information beyond the passive emotional response experienced by listeners who don't understand the language of the drum.

Most people do understand 'the drum', at least when it comes to 'feeling' Morse Code, even if most aren't fluent in its language of sonic dots and dashes. We know that because there appears today not a single televised news show whose main theme isn't arranged around its urgent rhythm.
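That urgent rhythm is itself encoded information. Standard Morse timing assigns every element a duration in units: a dot lasts 1, a dash 3, the gap between elements within a letter 1, and the gap between letters 3. A small sketch (using only a handful of letters, purely for illustration) renders a word as a pure rhythm of sound/silence durations –data that a fluent listener can decode and everyone else simply hears as a beat.

```python
# Standard Morse timing: dot = 1 unit, dash = 3, intra-letter gap = 1,
# inter-letter gap = 3. Only a few letters are mapped here, as an illustration.
MORSE = {'S': '...', 'O': '---', 'N': '-.', 'E': '.'}

def to_rhythm(word):
    """Return a list of (sound_units, silence_units) pairs, one per element."""
    rhythm = []
    for i, letter in enumerate(word):
        code = MORSE[letter]
        for j, symbol in enumerate(code):
            on = 1 if symbol == '.' else 3
            last_in_letter = (j == len(code) - 1)
            last_in_word = last_in_letter and (i == len(word) - 1)
            off = 0 if last_in_word else (3 if last_in_letter else 1)
            rhythm.append((on, off))
    return rhythm

print(to_rhythm('SOS'))
```

Played as percussion, this pattern is 'music' to an uninitiated ear and a legible message to anyone who knows the code –exactly the dual status the post ascribes to the talking drum.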

Which is to say, whether Morse Code, Indian drumming or a jingle:

What enters our ears may be first defined as music (or not), but what actually enters our brains is raw data: sometimes emotive, sometimes numerical, as often as not verbal. And if it is also designed as an information carrier, whose message is not produced as verbal communications, it is capable of being decoded by those fluent or versed in the music designer's particular brand of cryptography.

Which is not so difficult as it seems, because what I mean by 'cryptography' is cultural agreement.

And once we have an agreement of terms in place, then the Music + Linguistics + Network Theory magic triangle acts like conceptual jet fuel.

The fundamental act of listening to music is a whole brain activity. Linguistics allows for infinitely complex communications, and Network Theory provides us with a highly effective distribution model.

Thus the power of the MLN magic triangle is that though it may be a recent mutation, born from combined DNA, it is readily adaptable, composed of both modern communications intelligence and ancient archetype.

Monday, June 01, 2009

The Greening of Sound

As a result of growing awareness of climate change, there has now emerged a trend whereby individuals, communities, corporations and even governments are seeking ways to position themselves as in harmony with the planet and its inhabitants.

In what has become a global effort, businesses of every kind –from agriculture and textiles to energy– are actively asking the public to identify their concerns as eco-friendly, organic, or 'Green'.

In some cases, a given company's effort may only extend as far as its branding campaign (green, blue, yellow, approachable, minimal, friendly, caring, feminine, concerned –you've all seen the new logo designs).

In sincere initiatives, efforts generally boil down to a statement of commitment to reduce a measured impact on the environment (which consumers must hope is thereafter acted upon).

The most common measurement we hear of is one's carbon footprint. As you probably already know, one's 'carbon footprint' is calculated by taking a full assessment of the greenhouse gas emissions produced by a given subject of study, be it an individual or a venture of some kind.

Reducing one's carbon footprint does not lead to the complete cessation of operations, but rather means taking into consideration what the environment can accommodate and not stressing it beyond that point. In order for any carbon footprint reduction scheme to be effective, it cannot be implemented in isolation. Rather, it must be implemented as part of a grand, cohesive strategy in which all human sources of carbon emissions collaborate. In other words, information is shared and competing entities willingly cooperate, for the sake of the greater natural environment.

Both Sound and Media can also be defined as pollutants.

Can Sound be recycled?

It happens all the time in today's sample based music. But there is no issue of old sounds piling up in mounds of garbage (unless you count discarded CDs).

A better question is: Can Sound be Green?

And it's a question I've been mulling over a lot recently.

I think as populations get denser, producers of non-entertainment audio must incorporate environmental awareness into their skill set. And in public areas where sound emerges from multiple, competing sources, devices need to learn when to speak and when to listen, independent of client concerns –this is what I mean by 'Green Sound'.

So, just as industrial centers are learning that they can minimize pollution and still turn a profit, I believe advertising and entertainment providers are bound to discover that they can also produce equally effective communications without making a big noise (or a big stink).

In 2008 I discussed the effect of Silence by re-framing it as a concept I introduced called Black Noise Branding. Black Noise Branding describes how the skillful and intentional use of negative audio space can prove a powerful platform from which to feature other audio assets.

Green Sound, also introduced in 2008, is a related concept, in so far as it describes an antidote to noise pollution, without requiring media producers to minimize their messaging.

Not as pithy as I would like, but a formal definition of Green Sound might read as:

The sum effect of simultaneous and coordinated communications, so that they harmonize –or collaborate– with existing environmental audio factors and sources.

If we accept this definition, then there are at least two or three methods at our immediate disposal by which we can turn sound green.

• Include an environmental audit as part of our creative process (and then compose or construct any resulting assets with such evident considerations in mind).
• Create smart devices (devices that listen to the environment and to each other, and 'behave' appropriately)
• Create a 'Domestic Audio Code' whereby all interior sound emitting appliances speak with the same 'voice', regardless of make or manufacturer.

In regard to the latter two points: essentially, we require a standard, thereby ensuring that all sound-emitting electronic devices operate as part of a single network and therefore communicate with one another, so that their communications to us will be organized and not interruptive, welcome and not intrusive, harmonized and not noise-contributing.
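The 'knowing when to speak and when to listen' behavior can be sketched as a simple protocol: before emitting a sound, a device checks the level its networked peers are already contributing and defers if the shared acoustic space is occupied. Everything here is hypothetical –the `Device` class, the threshold, and the naive treatment of decibel levels as additive are all simplifications for the sake of the sketch, not an existing standard or API.

```python
# Hypothetical threshold for the shared acoustic space, in dB.
# (Real decibel levels do not sum linearly; they are treated as
# additive here purely to keep the sketch simple.)
AMBIENT_THRESHOLD_DB = 50.0

class Device:
    """A sound-emitting appliance that defers to its networked peers."""

    def __init__(self, name, network):
        self.name = name
        self.emitting_db = 0.0
        self.network = network
        network.append(self)

    def ambient_level(self):
        # Sum of peers' contributions; a real device would measure this.
        return sum(d.emitting_db for d in self.network if d is not self)

    def request_to_speak(self, level_db):
        if self.ambient_level() + level_db > AMBIENT_THRESHOLD_DB:
            return False  # stay silent: the space is already occupied
        self.emitting_db = level_db
        return True

network = []
microwave = Device('microwave', network)
coffee_pot = Device('coffee pot', network)
print(microwave.request_to_speak(40.0))   # True: the room is quiet
print(coffee_pot.request_to_speak(40.0))  # False: it defers to the microwave
```

A real implementation would add release of the channel, priorities for urgent alerts, and actual acoustic measurement, but the core idea is just this arbitration step.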

Since technological considerations are out of our hands as audio producers, I will mostly focus this article on those factors directly under the control of a composer, sound designer, music designer or other similarly defined audio professional.

ACOUSTIC ECOLOGY

In many Sonic Branding or Environmental Sound projects, those commissioned to execute the task might first do a brand audit. They might even research a given space or platform. But few will actually execute an analysis of the immediate Acoustic Ecology.

In a natural environment the Acoustic Ecology might be typical forest sounds –birds, chipmunks, frogs, geese and duck on the water, etc.

In a domestic environment, the Acoustic Ecology might include sound emissions from family chatter, neighborhood noises, dogs barking, children playing, a TV, PC, music player, game console and various kitchen appliances.

In an office environment, the Acoustic Ecology might be comprised of mixed channels of verbal communication, perhaps a music system, or one or more TVs turned to the news or financial cable channels. Consider also that today's open offices combine operations and creative departments into one single space, or that one's office might actually be a Starbucks, or wherever you can sit down with a laptop and a mobile phone.

Does it matter if the telephony assets are branded if the sounds themselves compete with external sonic data? Should it matter? In an age when a phone call is often part of a multitasking experience it should matter.

ECOMAGINATION™


So how should members of the creative class go about developing a corporate audio strategy?

Unless you haven't watched TV for the last four years, you already know that GE launched its 'Ecomagination™' image campaign in 2005 as part of a strategy to position the company as 'Green'.

GE's image strategy is arguable, but the Ecomagination™ concept itself has merit for music professionals, who will find it well worth considering –and borrowing for the execution of their own assignments.

But first, what is Ecomagination™?

GE unfortunately does not actually define Ecomagination™ (as of 6/1/09) on the Ecomagination™ website (or they make the definition difficult to locate), although they do lead the homepage off with the question, "What is Ecomagination™?"

Rather, they answer the question by suggesting the philosophical platform enables them to "solve the world's biggest environmental challenges while driving profitable growth" for the company.

I'd like to adapt the Ecomagination™ concept for Music Designers, and therefore I'll define (or redefine) it as follows:

Ecomagination™ for Sound, Music and Audio Professionals:

The consideration and implementation of results gained from the analysis of a given environment, executed in the effort to compose or design audio for said environment, with the intention that the resulting audio will not compete with, but integrate into or cooperate with, the existing acoustic ecology, without being constructed as so transparent that it is rendered inaudible or illegible.

Green audio designers therefore inquire not just about brand strategy, or what might be an effective means to enhance a story or deliver a brand message with sound and/or music, but how the resulting asset/s will work within one or more environments.

Note that with traditional assignments, the task is limited to the specific creation of a sound mark, score or music packaging. In those assignments when space or environment is also considered, strategy usually boils down to baffles, speaker placement and volume.

But Green Sound assignments will employ aesthetic judgment in tandem with algorithmic controls, with the result being a symphony of multi-source audio emissions (and silence) engaged in a non-competitive, comprehensive and collaborative approach that sounds, if not musical, still anything but like noise. As such, these assignments will include not just a creative brief detailing the immediate task, but also an environmental and device brief that provides predictive assessments of how various sonic solutions will play and interact within an existing Acoustic Ecology.

Of course, such briefs might be de rigueur for device manufacturers, but they are rare if not nonexistent in the conference rooms of media creators. Not to mention that device makers have yet to employ some systematic network theory so that all devices are governed by a cooperating set of communications rules.

CO-EXIST

So how do you begin working within a Green Sound framework or philosophy?

Using one's 'Ecomagination™' and composing audio from a Green Sound mindset suggests that Music Designers consider as many such possibilities as they can before embarking on a given assignment. Analysis will resemble a quadrant composed of four areas of study:

• Brand or Message
• Arc or Story
• Natural Environment or Acoustic Ecology
• Gadget or Device Bearing Demographic

In practice, analysis may require Brand or Story and not both. But sound does not go green without equal consideration given to both environment and device bearing demographic.

In general, we would do well never to assume or define any sounds original to a pre-existing natural environment –the acoustic ecology assets– as sources of conflict. Conflicts only result when a Gadget or Device Bearing Demographic is introduced into an otherwise natural environment, or confined urban space.

In either instance, however, we can safely assume that whatever existent sounds there are play a supporting role in defining an environment as one place or another. For the moment, since these are easy examples, consider a carnival or casino. Creators of both carnival and casino audio are required to create sounds that contribute to existent environments (or 'experiences'). Therefore, in order to prove effective, new audio assets must function in a way that simultaneously draws attention to the source without overwhelming or negating co-existing messaging.

But Green Sound designers will think not only in terms of site-specific audio, but also of how a transient audio set plays through a site and operates in multiple environments, whether momentarily positioned or in motion, whether public, private or domestic.

What if all the domestic appliances in your kitchen or household shared a common key and were drawn from a unified sound palette? Then the microwave going off at the same time as the coffee pot wouldn't sound like a racket, but like a symphony (albeit on a micro scale).
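One way to imagine that common key is a shared pitch palette: every appliance draws its alert tone from the same pentatonic scale, so simultaneous alerts land in one key rather than clashing (pentatonic tones are chosen here because any subset of them sounds consonant together). The appliance names and the scale choice are illustrative, not a real appliance standard.

```python
# C major pentatonic degrees, in semitones above the root
PENTATONIC = [0, 2, 4, 7, 9]

def alert_pitch(appliance_id, root_hz=261.63):  # root ≈ middle C
    """Assign each appliance a tone from one shared pentatonic palette,
    so simultaneous alerts harmonize instead of clashing."""
    degree = PENTATONIC[appliance_id % len(PENTATONIC)]
    return root_hz * 2 ** (degree / 12)

appliances = ['microwave', 'coffee pot', 'dishwasher']
for i, name in enumerate(appliances):
    print(name, round(alert_pitch(i), 1))  # each gets its own in-key pitch
```

A fuller 'Domestic Audio Code' would also share timbre and timing conventions, but even a common scale is enough to turn overlapping beeps into something chord-like.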

Does the car alarm have to go off in the car and wake up the neighbors? Why can't a vandalized automobile transmit an alarm signal to your location so that you are instantly aware of events at hand?

Do you have to remember to turn your mobile phone to vibrate in a movie theater or courtroom? Why can't the phone know where you are and handle such accommodations itself?

We will as often as not learn that it is not simply the bird's song that is so pleasing to our ears, but that our feelings regarding the reception of the song are the result of prior or simultaneous modifications to our disposition by other factors that may play upon all our senses.

That is, the bird sounds so magnificent because the stage upon which he or she sings (the environment) simultaneously appears magically or divinely ordered, with the result being that even random audio emissions seem performed according to some artful, intangible or supernatural plan.

SOUNDSCAPE AWARENESS


Essentially, Green Sound is the study, practice and application of Soundscape Awareness.

Consider not simply what is on film or video, but who will be receiving a given communication, and whether they will be focused or multitasking during transmission.

Consider not only the voice of the brand, or how a device should sound, but how it will play (in a given environment), and how it might cooperate and harmonize with other brand messaging or devices so all sonic data is legible and none perceived as interruptive.

Likewise, ask yourself not only how the mix sounds in Mono, Stereo and Surround, but also where and how it will play in Mono, Stereo and Surround. We all know that how and where a sound plays can negate a million dollars of prior production decisions. Simply consider the erosion of sonic value by poor mp3 compression.

But Green Sound is not medium specific. Rather a Green Sound mindset connotes a general awareness of environmental factors, and leads to the production of audio assets that 'collaborate' within a given Acoustic Ecology, and which neither contribute to noise nor get buried by it. Green Sound designers take into account that some playback devices are positioned, while others are mobile.

And ideally, Green Sound strategies effectively coordinate man made sounds with natural sounds, and new sounds with pre-existing ones, in a way that resembles nature and her cycles. It is therefore conceived and executed with both Music and Network theory in mind.

As such, sound can only be said to be Green when it is designed as both distinctive and collaborative. It requires neither silence nor constant, sustained focus in order for a communication to be effective (in contrast to long form entertainment). It does not intrude; it's not about being louder, but neither is it so muted that the result is muffled.

Rather Green Sound is experienced as the simultaneous transmission of uniquely positioned, separate sound sources, working together like lines of counterpoint composed in a symphonic manner, so that multiple distinct messages are made nevertheless clear and comprehensible, via data awareness of proximal devices, and an intelligent, algorithmic assessment of the environment that takes into account what other humans and animals might also inhabit that same environment.
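What an "intelligent, algorithmic assessment of the environment" might mean can be sketched crudely. This is my own invention, not any shipping product's algorithm; the frequency bands and dB figures are made up. A Green device would place its alert in the least occupied band and play just loudly enough to be legible, rather than simply louder:

```python
# Invented ambient-noise measurements, in dB, per coarse frequency band.
AMBIENT_DB = {
    "low (80-250 Hz)": 62.0,    # e.g. HVAC rumble
    "mid (250-2000 Hz)": 71.0,  # e.g. conversation
    "high (2-8 kHz)": 48.0,     # relatively unoccupied
}

def choose_alert_band(ambient):
    """Return the band with the least ambient energy."""
    return min(ambient, key=ambient.get)

def alert_level(ambient_db, margin_db=6.0, ceiling_db=75.0):
    """A fixed legibility margin above ambient, capped at a ceiling."""
    return min(ambient_db + margin_db, ceiling_db)

band = choose_alert_band(AMBIENT_DB)
print(band)                           # high (2-8 kHz)
print(alert_level(AMBIENT_DB[band]))  # 54.0
```

The design choice here is the essence of the essay's argument: legibility through carved-out space and a small margin, never through sheer volume.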

And Green machines have a built-in intelligence so that they know where one environment ends and another begins, and make suitable accommodations to subsequent sound emissions.

By taking into consideration the immediate Acoustic Ecology, storytellers and communications producers will no doubt be able to communicate far more effectively and economically than they ever have before, with the sum effect being quite the opposite of noise, and yet, perhaps only different from noise by nominal degree.

To advertisers, Green Sound is the strategic positioning of your message (or your client's brand mandate) in a given environment so that it will be heard, not by negating or overwhelming a competitor's messaging (or the natural acoustic ecology), but by carving out its own distinct space, customized to parameters provided by the space, with the net result being that both natural and urban environments feel quieter and more habitable, even though they may actually play host to more communication transactions than ever.

Tuesday, May 12, 2009

Emoticons as Evidence of Evolution

Talk about impact: nearly a year later, people continue referencing Nicholas Carr's July 2008 Atlantic article, 'Is Google Making Us Stupid?'.

For instance, in 'Why Can't We Concentrate?', writer Laura Miller leads with Carr in her recent Salon review of Winifred Gallagher's new book, 'Rapt: Attention and the Focused Life'. 'Rapt' is about attention itself.

I crafted a musician's response to Carr's article in September 2008, shortly after I became aware of it. It's the sort of topic that everybody wants to weigh in on, which is why we're still returning to it. It simply takes time for nine billion people to link to the Atlantic and post a comment.

Expanding on the notion, I don't believe the use or increasing reliance on interactive tools for communications is actively dumbing us down.

Rather, I think the collective medium of Internet and Interactive technologies combined has changed the way we think (and continues to do so).

There may as yet be little or no scientific analysis to support this claim, but my position is nevertheless based on sound reasoning.

First, you may be happy to learn, that: No, I don't think you're stupid.

Everybody else thinks you're stupid because they feel stupid, and they assume you must share the same inability to concentrate as they do. What's going on? Carr blames it on Google. Parents blame it on television, the internet, MTV, video games, et al. But those conclusions are symptomatic of a mass delusion, if you ask me.

I believe we've simply evolved. Maybe you didn't realize it until now. Or if you did intuit a gradual change in your cognitive processes (that couldn't be linked to age related causes, drug use or love), perhaps you didn't understand (and perhaps still don't understand) how your newly transformed brain functionality has changed you (and continues to change you).

In fact, the process draws a neat parallel to a washing machine cycle: Rinse, recycle, repeat. Or a feedback loop running circles through a Marshall amp.

I offered the following supposition in the September, 2008 article, 'Musician Under the Influence (of Technology)':

"Maybe we do read less books, but we're arguably CREATING more. It's also possible any diminished interest in text is the result of our cognitive systems undergoing a reorientation towards (or evolutionary preference for) pictographic writing systems (SMS shorthand, emoticons, branding, etc...), over traditional communication via the written word."

It may strike you as funny, but note that the underlying subtext is:

Exposure to –and sustained use of– Interactive technology results in the restructuring of cognitive processes so that so-called 'normal' people's brains increasingly resemble brains belonging to musicians and designers.

No surprise then if so many suddenly feel discombobulated. It's as if recent new media technologies have injected the world's population with a shot of neurocircuit Be Bop.

But before Google, before Twitter, before Facebook, AOL, Explorer, Netscape and the World Wide Web itself, before Apple and before Microsoft:

Anyone capable of reading and performing from traditional music notation had already arrived in the future a long, long time ago. That's because a music score is essentially a pre-digital Graphical User Interface.

Interested in reading the original rebuttal in its entirety? Click on the link (if you're still among the focused few):

Musician Under the Influence (of Technology)
Originally posted Friday, September 05, 2008

Saturday, May 09, 2009

Audio Imaging and Radio's Multimedia Future

Sonic Branding is by no means a new idea, although the phrase only seems to have come into common parlance with the new millennium. In the early nineties I contextualized the concept as 'Branding with Audio'. My old bosses, Scott & Jonathan Elias, preferred the phrase 'Sonic Identity' or 'Sonic ID' for short.

But radio stations have long distinguished themselves from one another using a technique the trade alternately calls Audio Imaging or Music Imaging.

Imaging is all the patter, Voice Over, music, interludes, and every other branded sound asset that is interspersed between programming. The composite effect serves to differentiate one station from others and provide a unique radio presence or identity.

In most important respects, Audio Imaging is a precursor of, and synonym for, Sonic Branding. I say "In most important respects" because the result of Audio Imaging is not a single brand asset, but a portfolio of assets whose resulting implementation is closer to what I define as 'packaging' than branding. In my brand mythology, a brand represents a singular asset; while packaging indicates a portfolio of assets that includes branded assets, but is not limited to them. That said, many professionals use the terms interchangeably, and for the sake of this article, I will, too.

What I find personally interesting is that unlike common Sonic Branding, which is usually implemented in order to distinguish similarly tangible goods and services from one another, Audio Imaging uses one palette of sounds (the packaging) in order to provide context to another group of sounds (the transmission).

A music supervisor (and his or her client) commissioned to create programming acts on the premise that the net result of a curated playlist will 'brand' a given (retail) environment. It may or may not: results vary. Muzak has developed a long, successful business model based on this premise. But other Sonic Brand professionals believe otherwise. Radio professionals, it seems, have found the practice lacking.

The reason being: radio playlists are in constant flux. Algorithm based programming will certainly distinguish one genre based station from another –tune in and you know immediately if you're listening to Country, Pop, Urban or Rock. But although they play an important role as an indicator of content/programming philosophy, playlists by themselves (in saturated markets) do not provide listeners with enough information to distinguish one station from another (that delivers similar/overlapping programming).

You may already know that the radio industry is in crisis. Many stations, which in the last decade or so adopted a music only platform, now find themselves facing obsolescence in the wake of Apple's iPod launch.

iPods and other hard disc players have almost single-handedly eliminated the necessity for radio.

If radio is going to survive, it will need something that iPods and other hard drive players lack. The industry is still grappling for the secret recipe that will pull audiences back to their programming. One ingredient can be found in a re-tooled Audio Imaging approach. Digital music players provide the capacity to create user-generated playlists, but such playlists as programming lack general accessibility. Unlike radio programming, they do not represent communication from one human to another, but rather serve the utilitarian function of providing a frame for a single person's individual experience (excepting playlists created in real time by a dance DJ for a live audience).

If a relayed signal requires at least one source and one receiver (which often alternate roles), can our own thoughts be defined as communication? Maybe. Personally, I think of thoughts as akin to cognitive reflections, which seems to be the antithesis of what we think of as broadcasting (or even narrow casting).

So although the hardware and technology that makes portable hard drive music players possible seems like magic, the experience a single playlist delivers can also prove less than magical once shared with others beyond the creator's own ear buds.

What we require is context, and the kind of disruptive surprise that only human logic choices in real time can provide, and that pre-programmed random play by algorithm simply has yet to demonstrate a capacity to deliver.

In the past, radio did not simply deliver experience; it was quite often central to the experience. Arguably, it was the experience. Before the invention of television, friends and family members listened together, and while listening, they often stared at the apparatus. In effect, radio became the hearth, the fire we all sat around. Then TV entered, and audiences finally had something to look at. Radio responded by scaling smaller and becoming portable, allowing users to multitask.

But in some cases chatter is perceived as an interruption. Naturally, competing stations eliminated chatter in favor of music-only programming.

Either way, portable digital music players have rendered both strategies ineffective.

So how can radio compete?

I would like to say that Audio Imaging can do it all, but neither branding nor packaging alone can prevent obsolescence. The task is to prove radio's necessity in the face of competing information and entertainment sources. Ultimately, unique and engaging 'Destination Content' will be key. But there are other things we can do, too.

Certainly greater brains than mine are already trying to figure this out, but why wait when I want to save radio right now:

One must ask oneself: what makes radio distinct from other forms of media? What is radio anyway? Is it the box, or is it what is in the box? Up until the advent of PC-distributed digital audio we have considered radio to be the composite of media and platform. In the face of current technologies, radio –like newspapers– must redefine itself as platform neutral.

What is important is not the box, but 'The Feed', or 'The Stream'.

So, it may be that one possible strategy is to dispense with any reliance on these devices we call 'radios', and instead concentrate on programming. But who and what's going to broadcast the programming if not a radio?

How about an iPod?

Saving Radio scenario #1: The primary agenda should be to influence audiences to tune in via PCs and hand held PC devices, and abandon sole-use dedicated devices, such as, ahem, radios. Listeners may then listen in real-time at their PCs, or download to their digital devices any programming as a Podcast. What's a Podcast anyway, but TIVO for sound? –At least that's the way the radio industry should sell it:

Listen to what you want when you want, wherever you want.

Simultaneously–

Saving Radio scenario #2: Embrace Interactivity. Traditional Radio, like TV, represents one-way communication. In the past, at least listeners could call in and make requests, which would be fulfilled in a reasonable time frame. Today, such communication is all but squelched. Though many, if not all, commercial radio stations have a web presence, their sites merely serve as a virtual wall scrawled over with a station's tag. With some notable exceptions, few radio web sites today boast the tool kit (or interest) to move beyond the monologue experience in order to more fully engage in a conversation with their listeners. In short, in the traditional radio paradigm, the goal is to gain and maintain listeners, which is a limited ambition, at best.

Moreover, expecting loyalty from someone, and hoping the other party will be content to only listen and not wish to contribute in any other way may have worked in centuries past, but that model is disintegrating by the moment.

Talk Radio itself is not immune to this sort of erosion, but over sized personalities may be one reason Talk seems to fare so well, when it does. Because Talk, unlike most iPod playlists, promotes a point of view. Love or hate the man or woman behind the mic, they have the capacity to make brains reel and emotions surge.

As it happens, digital playback devices are increasingly becoming multimedia players, so portability is not exclusive to sound programming. Users can travel with video as well, but viewers can't multitask the way listeners can. It's a simple fact of life: Consumers of sound-only content can multitask in a way that Television and Print audiences cannot.

Thus:

Unique Selling Position #A: Radio provides content that doesn't require you to stop your life in order to consume it. We turn to Print when our lives are on hold. We turn on the Television when we want to relax. But Radio provides Content (Information and Music) to Go.

Radio is where the action is, and one marketing strategy might be to impress upon media consumers that 'RADIO' and 'ACTION' are SYNONYMS, and where there is Life there is Radio. Indeed:

Radio=Life.

In contrast:

Saving Radio scenario #3: Given the choice between listen-only content and viewable content with sound, what will audiences do? If the answer is the latter, then radio may want to create viewable content to accompany heretofore listen-only conceived content. For instance, a slide show, if nothing else, or other visual content that enhances a heretofore listen-only experience. Representatives of the radio industry may scoff, but as audio books evolve into multimedia experiences, I believe radio itself may well embrace the idea of enhancing audio content with images, the way television uses audio content to enhance video.

After all, who doesn't look at the back of the box while they eat their morning cereal? Even eating is enhanced by imagery, –any imagery.

But if radio starts providing images, what will make it any different from a TV company?

If the New York Times and CNN are both streaming video from their websites (and they are), why is one company considered a newspaper and the other a cable television company? In effect, the space each once inhabited is converging with the others.

Forward thinking radio professionals already intuit this. But what many may not realize is that the media universe, unlike our own, can be thought of as collapsing. By collapsing, I don't mean that it's disappearing. Rather, all media is evolving into multimedia.

Today, we are witnessing the emergence of Multimedia Singularity.

We are moving past the day when there are TV companies and there are Radio companies –or Print companies, for that matter. What was once a constellation of individual communications companies, each inhabiting their own space –Print, Television, Radio– is now better perceived as a single moving object: The Feed or The Stream. Where once we saw a smattering of stars, now we see an interconnected galaxy (of providers). Distribution platforms themselves are increasingly incidental (as primary indicators of experience); whatever the next technological advancement is, it had better be transparent.

Likewise, when we look through a telescope we don't care if we're looking through a Celestron Telescope zoom eyepiece or a Meade Series 4000 Eyepiece & Filter Set. We just want it to work, and if it works, then all we see are the heavens.

Multimedia Singularity is being made manifest in a variety of ways:

The New York Times, like all newspapers and magazines, is reinventing itself as a content aggregator and distributor, using not a single media platform delivered on paper, but a multimedia one, traveling via light through fiber optic cables. The similarities between Print and Television increase by the day.

Radio must follow suit if it is to survive. When Media Singularity is achieved, radio's competitors will not just be other radio stations, but all media sources (providing parallel content choices in a given market). Though perhaps born as traditional twentieth century newspapers, magazines, television channels or radio stations, respectively, all heretofore separate platforms will continue to transform themselves until each has fully evolved into, quite simply, unique points of interest, destinations, hosted by purely electronic media.

Do you remember what a Venn diagram is from your high school mathematics class? A Venn diagram consists of two or more overlapping circles, or sets. The overlapping area indicates shared characteristics or commonalities between the sets.

Now imagine a 'Content Venn' composed of not two circles, but many, –and possibly in three dimensions. Each circle represents a traditional media platform; to name three: Radio, Television, and Print. Within each circle are numerous other Venn diagrams representing subsets composed of individual companies competing in a given space. It doesn't take a futurist or psychic to see that the circles as they are presently positioned are not static, but that they are in motion and will continue converging upon one another until we are left with not many spaces, but one space. The result is no Venn diagram, that's for sure, but a single globe representing The Feed.
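The Venn logic maps directly onto elementary set operations. A toy model (every entry here is invented for illustration): treat each platform as a set of content offerings, so that convergence simply means the shared intersection keeps growing.

```python
# Hypothetical content offerings per traditional platform.
radio = {"music", "talk", "news", "podcasts"}
television = {"news", "drama", "sports", "podcasts"}
print_media = {"news", "longform", "podcasts"}

# The overlapping space where all three platforms already meet:
singularity = radio & television & print_media
print(sorted(singularity))  # ['news', 'podcasts']
```

As each set absorbs the others' offerings, the intersection approaches the union: one space, The Feed.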

And what of that overlapping common space where all companies in all spaces meet, regardless of distribution platform? That space is what we can now think of as Multimedia Singularity.

So how will we choose who and what to tune into?

Once platforms become universally transparent, content will have to speak for itself. Given that scenario, context will be more important than ever. And context will be achieved via the same tool kits and assets we rely on right now:

• Personality
• Positioning
• Branding
• Audio Imaging
• Sonic Branding

–All of which we increasingly recognize as aspects of Design.

The result being that after the technological deluge, humans will actually be more important than ever.

Ironically, there will be so much to look at, that sound will play an exponentially more significant role in hooking in audiences.

One should expect to soon see what might once have been conceived as asymmetrical partnerships engage in strategic mergers. We may have had an inkling of that when Time Warner merged with AOL. That partnership was universally judged to be a fiasco. But ultimately the failure wasn't because Time Warner didn't need to be a multimedia company with an Internet presence; rather, Time Warner didn't need AOL in order to withstand the same digital pressures, and be subject to the same evolutionary processes, eventually sustained by all successful members of the corporate species.

But regardless of what partnerships evolve, and what eye candy is produced, sonic assets and Radio's importance as a vehicle for content distribution will also increase at a reasonable rate.

Here's why:

When people listen to radio they may request more or less talk, more or less music, more or less advertising; but they never request less sound.

As it happens, Television, rich with color, dimension and ever denser pixel formations, shares with radio an audience that enjoys sound. Likewise, TV viewers never request less sound either. Lower volume, yes; less aural data, emphatically no. That's because TV isn't just a vision experience; it also relies on sound. In fact, Television only 'works' so well because it is a multi-sensory experience. You don't think so? Try muting the audio. Subtitles communicate information to a varying degree, but the experience is far from optimal. Except for video art installations, the television experience is compromised without an accompanying soundtrack.

In the nineties comic and radio talk show host Howard Stern famously declared his ambition to become the 'King of all Media'. It sounded funny to hear him say it back then. In retrospect, Stern was ahead of his time. For today, radio strives for Multimedia Singularity, inching closer to TV and Print, at exactly the same time TV and Print edge closer to Radio. It's that or die. And by die, I mean, fade away into silence. Of course, there's not a single for-profit or not-for-profit information or entertainment company that wants to stop broadcasting. So, silence is not an option.

It may be that one day the word 'radio', generally speaking, will solely come to mean 'broadcast sound', and that no one will actually think of a device called 'radio' when the word is uttered. By radio, we won't mean hardware; what we will mean is simply:

Content that (though it may be accompanied and supported by images) requires no visual enhancement in order to communicate information, entertain or deliver a message.

And because storytellers have sustained that model since the dawn of man, it's safe to bet that radio, whatever form it takes, will continue to be a powerful medium for years to come.

Monday, April 06, 2009

Scratch the Contract

Young and experienced composers alike might complain that temp tracks limit creativity, although the process of modeling a temp can certainly prove an invaluable training tool.

In fact, student composers have invariably participated in the temp track process without even realizing it. Almost all university students in a composition program are assigned the task of composing a four-part harmony. In order to fulfill this assignment, one must learn the theoretical rules governing the traditional four voice chorale style.

The resulting student work is hopefully original while also adhering strictly to the rules demanded by convention. By extension, a formal education in music is not really about finding one’s unique voice –leave that to biology, time and life– but rather, learning how to reproduce conventional material.

The analysis of a scratch track, and the subsequent composition of a derivative work, share a parallel process (though often scaled larger befitting the magnitude of some professional assignments).

Temp tracks DO limit creative direction, but that’s not necessarily a negative. Compared with taking verbal direction from a client (who may otherwise be a layman when it comes to music), a temp track may provide the only accurate brief for a given assignment. A director (or client) may deliver creative goals verbally or via written text, or by both means, but only when the music does the talking does the exact nature of the assignment crystallize.

Brian Eno has suggested that working with limitations, and not wallowing in infinite options, is the most direct path to creative results, and I tend to agree. I love it when he points to Marshall amplifiers and suggests that it is the very limitations of the thing that give it personality (The Revenge of the Intuitive, Wired, Issue 7.01, Jan 1999). I think the same is true for people.

As it happens, if temp tracks lock the composer into a direction, they also lock in the client, and that is to the composer's advantage (especially if the client is otherwise prone to indecisiveness).

For instance, should the client change the direction after production has started, a composer can rightly request overages on the premise that the new requests were not explicit in the temp (assuming that the original bid was based on direction indicated in the temp).

But likewise, if a client decides a minimal electronic score is more appropriate than a previously agreed symphonic direction, they may seek to renegotiate a contract so that it no longer reflects the necessity of paying a hundred musicians or renting a sound stage.

Also, if a composer delivers a track that does not in any way reflect or reference mutually agreed specifications, the client can demand a rewrite, threaten non-payment, or justly fire the composer for not keeping to the agreement as defined by the temp.

Temp tracks do more than indicate direction; they also specify convention and clarify language. One might even argue they actually give language its meaning.

For instance, if the client requests a symphonic score (without providing you with a musical recording as an example of what he or she means by 'symphonic'), and you, the composer, deliver a work in the style of Leonard Bernstein's WEST SIDE STORY, you may have fulfilled the assignment only to learn that Bernstein was not what the client had in mind when they asked for something 'symphonic'.

If it was, then you’re simply lucky. But if the client was thinking Mahler instead, or Glass, they might justly or unjustly demand a rewrite –and you may or may not be able to win overages.

In order to preclude such a scenario from occurring, what usually happens is that one party or the other will seek to define particular aspects of verbal (or text) instructions by using an existing piece of recorded music to clarify the meaning of some adjectives.

So, if our hypothetical client requests a symphonic score, and additionally presents you with a recording of the Star Wars music as reference, then in this case you know exactly what they mean by symphonic.

Alternately, if in another meeting, another client asks for a rock track but doesn't provide you with an audio reference, then it behooves you, the composer, to present a variety of songs you consider rock (before beginning the composition process), in order to clarify that you and your client share the same definition of the word 'rock'. Is it Little Richard? Is it Nirvana? Big difference.

In this respect, a scratch track serves as both a creative brief and a contractual agreement –written in the language of music– between composer and client that explicitly details what is requested by the client, and what is expected of the composer.

Thursday, March 26, 2009

Music Supervising in Today's World

Are you currently sitting on a back catalog of compositions that you've recorded over the years? Are they doing anything but sitting on a hard drive? Your music deserves to have a life of its own, even if you can't pull yourself away from ProTools for even a moment. So, why not put those songs to good use and start seeking avenues to license the material?

Last night I had an opportunity to attend a workshop at the New York chapter office of The Recording Academy titled "Music Supervising and Licensing in Today's World."

Linda Lorence Critelli (VP Writer Publisher Relations, SESAC, Inc.) moderated the panel. Panelists included: Jim Black (President, Clearsongs), Keith D'Arcy (Senior VP, C.O.R.E., EMI Music Publishing), Suzanne Hilleary (Founder/President, WacBiz), and Ed Gerrard (Impact Music Management).

I'm always eager to learn new things, or hear how other people work, so I love going to conferences and workshops. And even if you're already an expert in your field, there are always plenty of new people to meet and schmooze, provided you're up to the task. Sometimes the most useful information you come away with may be delivered by the person sitting right next to you, not necessarily the person on stage.

NEW THING I LEARNED:

If you want to be a music supervisor rather than a composer/songwriter, you do need to know how to clear a piece of music. Sounds pretty basic right? But back in 1996, when I selected 3000 tracks for the NBC Olympics and cataloged them by Sport and Mood, all I had to do was search, identify, catalog and indicate edit points. I didn't personally need to execute the paperwork on all that stuff. Today, I certainly would, or someone on my team, or a business partner, would.

In other words, your credentials need to be more substantial than a big music collection.

Yeah, it's true, we all gotta work for a living.


INFORMATION I KNEW BUT GOOD TO HEAR AGAIN AND AGAIN:

If you're on the creative side, and your aim is to get your music placed more than once, make sure it's easy to license and that there aren't any surprises attached to your work. Music Supervisors like it if you own and control your own master. And you've heard this before: Clear your samples! –Or don't use any sonic elements that aren't hand crafted and completely original to you.

Also, if you're using live musicians, make sure they sign work-for-hire agreements before they leave the session, not to mention that all and any co-composition /co-arrangement issues are well defined before you begin submitting material and calling it your own.

You may think that sneaking in an uncleared sample isn't going to hurt anyone. I've witnessed several composers use uncleared samples left and right, like sometimes for every sound on every track. In fact, I was standing next to two of them when each got served with a Cease and Desist letter for infringement. Nothing like a lawsuit to drag down the whole artsy vibe, man. So, just because someone else –even someone you respect– is willing to risk their career and reputation doesn't mean you should.

The truth is, it's not just about you and your reputation. Whatever your personal feelings about copyright laws, when you choose to skirt them, you put other people's livelihoods at stake, too.

Jim Black pointed out that one little uncleared sample can do irreparable harm to a music supervisor's career, not to mention cost a production hundreds of thousands of dollars on the back end, just because you weren't honest about your work.

Think they'll secure a license from you again after that? Think NOT.

But if you do have an uncleared sample in the master, don't let that stop you from submitting it. Just be honest about what it is, who it is, where it is, etc., –so that if your track is up for consideration, the music supervisor at least has an opportunity to clear the sample/s on your behalf, and on behalf of the project at hand.

Sample issues aside, you say your only real issue is one of creative insecurity? That you feel like none of your compositions are ready for prime time yet? Hey, if they're mixed, they're ready. When it comes to music placement there isn't any good or bad music, just the right track, and then all the rest (which are good for something else).

FOR MORE INFORMATION:

If you weren't at the NARAS workshop on Music Supervision last night, and you're trying to figure out what first steps you should take toward getting your music licensed, I invite you to check out the following articles from the Critical Noise archives. I'm pleased to report that everything I've previously written on the topic is still very much relevant. Given my personal experience, most of it applies specifically to broadcast promotions and the ad biz, rather than to movies, episodic television or games. But whatever your personal creative goals, I think you'll find at least some of it to be useful advice.

Breaking Into the Ad Music Biz (Originally posted November 16, 2007)

How To License Your Songs (Originally posted November 01, 2005)

Creating Value By Licensing (Originally posted October 01, 2005)

Too Many Notes To Choose From? (Originally published in Shoot Magazine, April 13, 2001)

Monday, March 02, 2009

Future Friendly For David Fincher

Have you ever lit up the world with music?

You Will.

In 1993, the music house I worked for was commissioned by the ad agency NW AYER to create an original score for an AT&T campaign named 'YOU WILL'.

At the time I was a young assistant –hanging onto the ropes of commercial music production with one hand, answering phones and getting coffee for composers with the other.

The premise of the 'YOU WILL' campaign was that with the future right around the corner, AT&T was in a position to deliver all sorts of high tech goodies to their customers.

Try to visualize the pre-millennial era: Few in the public sphere had yet heard of the Internet, much less owned a personal computer. Cell phones were the size of car batteries. The hot technology was the CD-ROM. So imagine how futuristic these commercials looked and sounded when they first aired and asked the then hypothetical questions:

"Have you ever paid a toll without slowing down? Bought concert tickets from a cash machine? Or tucked your baby in from a phone booth? 'YOU WILL'."

In one sense, 'YOU WILL' can be seen as (and was possibly) modeled after GE’s own campaign 'WE BRING GOOD THINGS TO LIFE'. But cleverly, AT&T recast the message so as to position itself as the GE of the future.

But what a dark future the ad agency imagined for its client.

Let's look at one of the spots now, sans audio. Doing so will give you an idea of the way our composers first approached the project:

AT&T YOU WILL 'TOLL':30 (NO AUDIO)

video

Director David Fincher was commissioned to shoot the commercials. His vision of the future, as it turned out, was pretty bleak, made only somewhat more pleasant by gadgetry. At first glance, most of the interiors looked like they couldn’t even power the lights in the room much less a computer, while exteriors seemed designed to resemble a climate change model in full effect.

Which is not to say the film didn’t look good –it was great, even beautiful. But the art direction did not immediately convince one that a brighter future –either figuratively or literally– was upon us. Essentially, Fincher delivered a study in Sci-Fi noir: equal parts 'ALPHAVILLE', 'BLADE RUNNER' and 'BRAZIL'.

Oddly enough, no one at the agency or on our creative staff initially thought this Orwellian version of the 21st Century presented much of a marketing problem (for a company trying to position itself as a ubiquitous element of your future lifestyle).

In fact, the consensus among both the agency and our compositional staff was that Fincher’s footage demanded a rich cinematic treatment, one that inspired admiration for the things AT&T could achieve for its customers.

Someone suggested that the music house use as a reference Ennio Morricone's 'WHILE THINKING ABOUT HER AGAIN’ from the soundtrack to 'CINEMA PARADISO'.

I don't recall who first made this suggestion. It may have been Fincher, Jim Haygood (the campaign's editor), someone at the ad agency or one of our own creative directors Jonathan Elias or Alexander Lasarenko. But upon its acceptance and approval, Lasarenko composed a stirring work that captured the emotional depth of Morricone’s original cue.

In fact, if you lay the 'CINEMA PARADISO' cue against any of the AT&T spots today, you can see that as a temp track, the music synchs relatively well to picture (it lacks sound design, but you get the idea). If I had to guess, I'd bet that Haygood might have used either Morricone’s music, or Lasarenko’s demo, to facilitate the process towards a final cut.

AT&T YOU WILL 'TOLL':30 (Alternate music direction):

video

Either way, Morricone's track certainly reinforces the cinematic quality inherent to the footage. It also adds emotive warmth, and in that regard it humanizes the picture.

By virtue of the orchestral arrangement, it also conveys a sense of understated power, which one would think agreeable sonic branding for the communications giant.

All of which is to say that this direction seemed exactly right for the project, and everyone at the ad agency seemed to agree at first.

Unfortunately, the account executives at AT&T found the Morricone direction, however romantic in its original context, weirdly dark for a project that purported to be a brand imaging campaign.

It wasn’t simply an issue of the music not working, but of the music working too well –reinforcing Fincher’s dark vision, demanding awe and respect, rather than conveying a feeling of technological marvel and inspiring a sense of excitement and wonder.

And of course they were right.

Most of the time music is supposed to support picture, but the AT&T campaign provides us with a perfect example of a project that requires a score that contrasts picture.

The symphonic direction did well to announce a Brave New World, but our real job was to introduce a Friendly Future.

Lest there be any confusion, the future was not going to be dark, rainy or Orwellian, or feel anything like the inside of a rusting deep space oil rig.

It was going to be fun, engaging, the technology liberating and easy to use. –Less 'ALIEN 3', if not quite 'JETSONS'. No rayguns; no monsters; and the weather is going to be fine. In other words, Disney’s TOMORROWLAND: safe, warm, inviting; and above all human and accessible.

So why didn’t Fincher shoot happy-go-lucky spots in the first place? In all likelihood he was the hottest young director at the time, and sometimes that’s all it takes to get the job. Which is to say Fincher was hired to do Fincher (and he delivered), and any issues related to branding would be managed in post, which they were.

But another surprise awaited our composers: While our symphonic music demo was soundly rejected, the edit it was synched to was approved.

In many cases, when music providers get it wrong, agencies simply fire them and move on to someone new. But in the case of 'YOU WILL', NW Ayer gave us another chance.

However, now we were in the position of having to compose a new score that contrasted picture –one designed to represent the polar opposite of our first demo– but would nevertheless synch to the existing edit, thereby matching picture lock.

Elias wanted to provide yet another reference track for the creative team, in order to provide a concrete example of the client’s aspirations. So, this time out Lasarenko suggested an inspirational acoustic rock track, written in the odd meter of 7/4, whose cadence roughly followed a driving I V I vi V vi† chordal sequence, upon which a mystical lyric was delivered. Slammed against picture, the music’s energetic beat and shimmering guitars all but lit up Fincher's otherwise dark world.

(†FYI: For readers who are not musicians, the roman numerals in the previous paragraph are shorthand for chords built on the degrees of the scale: upper case = major, lower case = minor.)
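For the curious, the shorthand can be unpacked mechanically. The sketch below expands a Roman-numeral progression into chord names for a given major key; the key of A is purely an assumption for illustration –the article never names the actual key of the reference track:

```python
# Toy sketch: expand Roman-numeral shorthand into chord names for a major key.
# Upper case numeral = major chord, lower case = minor, per the footnote above.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitones above the tonic
NOTES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
NUMERALS = {"i": 1, "ii": 2, "iii": 3, "iv": 4, "v": 5, "vi": 6, "vii": 7}

def chords(progression, tonic="A"):
    """Map a space-separated numeral string to chord names in the given key."""
    root = NOTES.index(tonic)
    out = []
    for num in progression.split():
        degree = NUMERALS[num.lower()]
        note = NOTES[(root + MAJOR_SCALE_STEPS[degree - 1]) % 12]
        # lower-case numeral -> minor chord, marked with an "m" suffix
        out.append(note if num[0].isupper() else note + "m")
    return out

print(chords("I V I vi V vi"))  # ['A', 'E', 'A', 'F#m', 'E', 'F#m']
```

So in A major, the progression cycles between A, E and F#m –exactly the kind of three-chord engine a driving acoustic rock track tends to run on.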


As a choice for a scratch track, it was far from typical film music. But it indicated a direction that could transform a Sci-Fi noir mini feature into a fanciful version of the future. And it achieved this result by forcibly re-framing picture with music that specifically provided the necessary context.

Obviously this speaks to the power of music, whether in advertising, entertainment or something else altogether: That is, the power to make you believe you are seeing something you are not, because your ear is telling you that you absolutely are.

Today, people re-frame their own respective worlds simply by scoring their life with personal playlists streaming off their own iPods or other portable playback devices.

In the end, both Elias’ NY and LA composers created several versions of the driving acoustic rock direction. The agency selected the strongest demo, which was further developed by adding sound design and an affable voice over courtesy of MAGNUM P.I. actor Tom Selleck. When at last approved, and the final spots delivered, AT&T released the following press announcement:

"… the 'YOU WILL' campaign takes a whimsical look into the near-future when information technologies now being developed at AT&T will soon enhance the way people work, live and play."

AT&T YOU WILL 'TOLL':30 (FINAL AUDIO):

video

Of course, if the agency had approved the original symphonic orchestral direction, inspired by Morricone’s CINEMA PARADISO, the spots would never have been framed as anything near whimsy. Even now, the images themselves remain dark and a bit Orwellian.

If this is the future, where the hell is the sun, you may ask?

Well, it's there, of course, beaming down upon the entire campaign, whimsy and all. It may never be a prominent element in any of the video. But nevertheless, it shines bright, illuminated by the power and magic of music.


* * *

Here's a video that includes all the spots in the campaign
(Added to this article 10/30/11)



* * *

Read what other people thought about 'YOU WILL':



1. From Boingboing, Cory Doctorow writes:

“I think these are the most emblematic advertisements of the era, defining the way that big companies totally missed the point of the Internet…”


2. The Work and Genius of David Fincher:
AT&T - "You Will" (1993)

* * *

FUTURE FRIENDLY FOR DAVID FINCHER is the third in an educational series examining the utilization of temp music in advertising, entertainment and media production. To read previous articles on this topic, click on either the following link or the TEMP MUSIC label/link that follows at the footer of this post:

2009 WINTER/SPRING CRITICAL NOISE ARTICLES ABOUT TEMP TRACKS: Scratch Track Fever

Monday, February 09, 2009

SCRATCH TRACK FEVER

This article is the second in a series examining the utilization of temp music in advertising, entertainment and media production.

Whether you call it temp music or scratch tracks (or something else), the utilitarian use of pre-recorded audio as a stand-in (or reference) for a soundtrack yet to be determined or composed is standard operating procedure.

A discussion of the ethics of directly referencing other musical works, to varying degrees of detail, is reserved for another article.

But if you’re a young or amateur composer, songwriter or music designer yourself –trying to break into the system– you may be asking yourself why bother with temp music at all? Why not, for instance, simply be original?

The quick answer is that most commercial projects, whether advertising or entertainment, do not represent the singular vision of a composer, but the collective vision of a party of stakeholders in a given enterprise.

In addition, commercial works are usually derivative by nature.

Neither summer blockbusters nor advertising campaigns appear out of the ether. Few, if any, are insanely original. Most of the time, these projects are constructed according to, if not a formula, then convention. And so, it follows, the music commissioned by their makers is also constructed with industry or genre conventions in mind.

Lest you already have your mind made up, the product of formulaic thinking does not always have to be measured by a negative value. Equal temperament, the ii-V-I progression, 4/4 rhythms and absolutely every single tonal tradition –including those so-called anarchic forms, such as serial music, punk and no wave– are all produced by formula. Even the composite work of a random number/pitch generator –such scores do exist– is the result of an algorithm. And yet every single convention proves itself quite useful to adept composers, able musicians and eager audiences alike.
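To make the point concrete, here is a toy sketch of the "random number/pitch generator" idea mentioned above: even pure chance operations are an algorithm, and seeding the generator makes the "random" score reproducible –formula all the way down. Every detail here (the seed, the note pool, the duration choices) is an arbitrary assumption for illustration:

```python
import random

# Seeding makes the chance-based score repeatable. 1993 (the year of the
# AT&T campaign) is an arbitrary choice of seed.
random.seed(1993)

CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def random_score(n_notes=8):
    """Return n_notes (pitch, duration-in-beats) pairs chosen at random."""
    return [(random.choice(CHROMATIC), random.choice([0.5, 1, 2]))
            for _ in range(n_notes)]

# Eight notes of algorithmic "anarchy", identical on every run:
print(random_score())
```

Run it twice and you get the same "random" melody both times –which is exactly the argument: even the most anarchic-sounding convention is the product of a formula.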

As it happens, there are as many reasons to consider the use of temp music as there are creative professionals who do so, although any single artisan may be aware of only his or her own immediate necessity for its implementation.

SURROGATE SCORES FOR HOLLYWOOD MANGA


In the broadest sense, filmmakers use temp tracks as surrogate scores, whether for preview audiences or to suggest to vendors, investors and other stakeholders what a finished film, commercial or other media project (slated for, or still in production) will eventually look, sound and feel like.

But scratch tracks aren’t only slapped against picture as surrogate scores or even as a reference tool. Long before a single frame is even shot, or a script complete, directors may select a piece of temp music as inspiration, and not just for the eventual score, but for the entire project itself.

Conversely, directors tend to resist music suggestions embedded in scripts, but that doesn’t stop writers from making their choices known.

Still later in the process, a temp will be chosen to accompany a pre-production model of a given project using scanned storyboards to produce what is known as an 'animatic'.

For those without any basic knowledge of the film making process:

Storyboards, while perhaps not as nuanced as a comic strip, are nonetheless a series of scene-by-scene illustrations that serve to suggest the final version of a film, TV show or commercial.

I think of storyboards as Hollywood manga, and I’m surprised there isn’t a larger interest or after-market for them amongst film fans.

Storyboards are primarily produced because they help identify a variety of needs, such as: what the film will look like; how it will be shot; what kind of sets will be required; what the costumes will look like, etc. In short, everything but the music. Although once in animatic form, directors will certainly integrate temp music. Ultimately, animatics help producers arrive at a budget and schedule.

And the same thing can be said of temp music. It’s not too far-fetched to think of temp music as being to storyboards what storyboards are to feature films.

Temp music can also serve as a performance tool during actual filming or video capture. I can already hear the cackling –yes, there are lots of tools in Hollywood– but this one is quite useful.

For instance, during production, directors and cinematographers may use music to enhance dramatic action via choreographed camera movement.

Less directly, performers of all kinds use music to focus, prepare, get ready and 'get pumped' before a scene or event.

It’s interesting to consider that a writer might have crafted a scene –or even an entire script– inspired by a specific piece of music, and then hand that script over to a director who might then hear something altogether different in his or her own imagination.

And whilst shooting, each actor might have prepared him or herself for a given scene listening to their own respective individual playlists; thereby fueling their performance with yet another musical overlay.

On top of that, the cinematographer might have a symphonic score in mind, or in his or her ear buds, anyway. Still later in the process, an editor may yet select still another track to cut to.

In this hypothetical arrangement, when the music supervisor can’t secure a license, a composer is finally commissioned to score the cue, which he or she does, presenting an original work whose only common elements with the director’s (or editor’s) original scratch were mood and tempo.

Universes collide, but I guess that’s how stars are made.

TEMP TRACK AS TEMPO MAP


Pre-scores are indeed commissioned from time to time, especially in the case of animation projects.

Animators, more than composers, are loath to work with temp tracks because their work must be absolutely synchronized to audio. When animators are forced to work with a scratch track, and the final music is anything but a near infringement of the temp, there is a real possibility that audio, picture, or both will require any number of costly revisions –tweaks– in order to synch with each other.

For this reason animation houses, anticipating such revisions when the use of a temp is known before the bidding process, will justly increase their estimates for a project that requires them to work with an unfinished or surrogate score.

In the case of live action, editors are generally free to use temp music as mood and tempo maps without financial repercussion. Each audio clip presents a concise distillation of human expression whose pulses form a grid or TEMPO MAP upon which subsequent cuts to moving image are made.
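The tempo-map idea can be sketched in a few lines: a temp track's pulse, expressed in beats per minute, becomes a grid of frame numbers an editor can cut on. The 120 BPM tempo and 24 fps frame rate below are assumptions for illustration, not figures from any particular project:

```python
# Minimal sketch: convert a temp track's tempo into an editor's cutting grid.
def tempo_map(bpm=120.0, fps=24.0, bars=2, beats_per_bar=4):
    """Return the frame number on which each beat of the grid lands."""
    seconds_per_beat = 60.0 / bpm
    total_beats = bars * beats_per_bar
    return [round(b * seconds_per_beat * fps) for b in range(total_beats)]

# At 120 BPM and 24 fps, a beat lands every 12 frames:
print(tempo_map())  # [0, 12, 24, 36, 48, 60, 72, 84]
```

Strip away the music and that grid of frame numbers remains –which is the point made below: remove the temp and the musicality of the edit still survives.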

It has long struck me that the art of editing is every bit as musical as it is visual, and that the modern editor's art closely resembles that of an electronic recording artist or beat maker. But remove the temp and the musicality of the edit still remains.

It's as if Film and Video Editors are DJs or drummers of light, with story being the ultimate goal of this illuminated art form.

As such, music not only helps to tell the story, it also actually helps build it.

A project takes shape when each stimuli inducing element feeds into the other, resulting in what feels like a perpetual inspiration circuit, and a kind of multimedia symbiosis:

• Sound
• Image
• Conflict
• Resolution

–These are the four quadrants that support story and embed any given experience into one’s brain, whether a cinematic fabrication or present and real.

A story can certainly move forward without either emotion or music. But both emotion and music –like aphrodisiacs, steroids and Sildenafil– are performance enhancers.

The result is a tighter cut, whether the editor then chooses to

A. Present the cut without the temp track,
B. Present the cut with the temp track, or
C. Present the cut with an alternate temp track, in order to provide options for a particularly unruly piece of footage.

It makes one wonder if our lives don’t unfold in the same way, with karma ultimately being something like a Motown or Meters loop, played at a discrete level, yet still capable of propelling the entire cosmic story forward; until at last some divine editor decides to make the final cut.

Shiva, creator and destroyer of worlds (in Hindu mythology) is after all, depicted as a dancer.

MUSIC AS MOOD MAP

If a director defines the look of a project, editorial retains great leverage in defining mood.

Curiously enough, it frequently boils down to which person selects the scratch track.

Certainly, if a director has a specific work in mind for the temp, editors will first use that. But even if presented with a selection by the director, an editor might still propose another idea –maybe several ideas– and choose to cut and present to an altogether different kind of track than the director or client originally had in mind.

To be clear, an editor can't run wild with a personal scratch track choice and hope to go final: the director, client, studio or other stakeholders must approve the alternate selection. Nevertheless, the situation –and the approval of an alternate temp track– is indeed quite common.

It probably happens more frequently with advertising projects than with features. Why? –Because directors (on advertising projects) are often retained solely for the shoot itself –that is, for their eyes and their capacity to capture magic on film or video during the shoot, but not necessarily afterwards.

Directors may therefore seem entirely absent from the edit process. Or if they are present for the edit, it is in a consultant capacity, highly valued for their opinion, but without any real authority to define a final cut.

Meanwhile, editors are hired not just for their eyes, but also for their talent as storytellers. With or without temp music, the best of the lot have a deep, almost primal feeling for pacing. And if editors are indeed drummers of light, they are also, like the BBC's Dr. Who, Time Lords.

Given this circumstance, editors retain great leverage in choosing the temp music they will cut to. In fact, by virtue of their power to choose temp music, editors are also often the unsung and undeclared music supervisors of a given project.

By the time an editor is finished building a story out of raw footage, he or she may have essentially re-defined the look and sound, if not the very experience of a project –all because of the music they chose to cut to.

SCRATCH TRACK AS SURROGATE SCORE

Although there are some advantages to commissioning a pre-score in lieu of temp music, being free is not one of them. On the other hand, unlicensed temp tracks allow for directional change later in the process without creative or financial penalty.

Aside from the editorial process, temp tracks can be quite useful in other ways, too. Temp tracks against rough cuts or preproduction trailers help producers garner financial interest in a project.

Potential investors can then measure the potential audience for a project based on this preview –itself a kind of beta version of a film– and then make an informed decision whether or not to contribute their own dollars toward the making of it.

When production finally gets underway, rough cuts will get synched to scratch music as one means of communicating direction to the various artisans who, while they may be vendors in one sense, are also co-creators of the project.

Likewise, during previews, studios use the presentation of unfinished movies (synched to temp tracks) the same way Madison Ave uses focus groups: in order to discover potential weaknesses of a given entertainment experience. As with focus groups, the results of previews provide producers with an opportunity to maximize entertainment value and thereby ensure their investment by making suitable changes prior to release.

It’s easy to think of scratch music as serving one singular purpose. In reality, the practical uses of temp audio are varied throughout the production assembly line. Whatever your personal opinion, or preferred process for working, one would do well to understand the strength of the scratch track before abandoning the concept completely.

And if you have any desire or hope of scoring for film or advertising, then resistance, as they say amongst the Borg, is futile.

* * *

Click on the following link to read the first article in this series:

TEMP TRACKS AND THEIR PURPOSE, Monday (February 02, 2009).