I think the correct answer is not a byte, but a myte’s worth.
In my custom corpus, a myte represents the smallest musical unit capable of triggering a cognitive response.
By this measure, a myte is smaller than a meme, which is defined as a small but nevertheless complete unit of transferable cultural information. A myte, on the other hand, need carry only enough data to provoke a response.
If memes and motives are molecular, then pitch is atomic and the myte a nano-sized, subatomic audio particle.
Likewise, when we are impressed with a musical work, we speak of its compositional elements; and when we are impressed with a musician, we are generally commenting on their style (how they play). But I think when we are impressed with a musician's TONE, what we are saying is that they are deft manipulators of the myte.
That is, mytes represent the individual component elements that together form a matrix of expression resulting in nuance.
For instance, if one is a turn-of-the-century rock fan of a certain age, one need only hear one (resonant) note from U2's The Edge to identify his playing. And a listener is able to do so not by recognizing what The Edge is playing (and therefore guessing that The Edge must be playing it), but rather by way of the sonic artifacts, intentional and unintentional, that are as much a part of The Edge's tone as the music itself. (And this would be true for any musician.)
WHEN THE MACHINE APPEARS TO SING
One might also consider the ratio of air to pitch in the production of a horn player's tone; of bow and rosin friction to vibrating string, sounding a bit like chronic vocal fry, in the production of a string player's tone; of breath to voice for a vocalist or flautist. We, the listeners, focus almost exclusively on pitch content; we might even comment on the purity of tone as we allow ourselves to be dazzled by a performance; but in reality great tone, unless it is artificial, is never pure. And that's what draws us in: the humanity by which the machine appears to sing.
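The air-to-pitch ratio described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not any instrument maker's or software vendor's actual algorithm: a "tone" is modeled as a pure sine pitch mixed with a controlled proportion of white noise standing in for breath or bow friction. The function name, parameters, and ratios are all hypothetical.

```python
import numpy as np

def synthesize_tone(freq_hz=440.0, noise_ratio=0.15,
                    duration_s=1.0, sample_rate=44100):
    """Mix a sine 'pitch' with white 'air' noise at a given ratio.

    noise_ratio=0.0 gives the artificially 'pure' tone; any positive
    ratio adds the breath/friction coloring the essay describes.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    pitch = np.sin(2 * np.pi * freq_hz * t)
    rng = np.random.default_rng(0)            # seeded for repeatability
    air = rng.standard_normal(t.size)
    air /= np.max(np.abs(air))                # normalize noise to [-1, 1]
    tone = (1 - noise_ratio) * pitch + noise_ratio * air
    return tone / np.max(np.abs(tone))        # keep the mix in [-1, 1]

pure = synthesize_tone(noise_ratio=0.0)       # the sterile, machine case
breathy = synthesize_tone(noise_ratio=0.2)    # the "human" case
```

Even this toy model makes the point audible: zero noise is recognizably synthetic, while a modest ratio begins to resemble a player's breath in the sound.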
In fact, it is as much by the individual character with which a performer shapes pitch as by the noise and nuance coloring said pitch (in sublime measure) that we identify a given performer. We can say they are excellent technicians not simply because they control pitch and rhythm, but because of their capacity to make calculated deviations and to control noise, using it as another color. And it may be this noise, more than anything else, that brands the band, so to speak.
This then raises another question:
Is a musician’s signature tone more a matter of physical execution or external artifacts?
Broadly speaking, we can examine the production of tone at a macro and a micro level. At the macro level, tone is the inherent sound produced by a given instrument. At the micro level, musicians shape that tone with their fingers. The result is timbre shaped by nuance, which we call a musician's sound.
Within the context of Quantum Audio analysis we might further define Tone as not quite molecular but rather 'sub-memetic'. The reason is that tone exists over time and, using our myte analogy, can be said to be made up of a matrix of mytes. And as it happens, once we identify the elements of this organization, we can produce an algorithm and replicate it with simulation software (or, if we are exceptionally talented, with our own hands and cognitive powers as they lend themselves to musical performance).
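The "matrix of mytes" idea above can be sketched concretely. To be clear, mytes and this encoding are the essay's coinage, not an established audio format, and every name and number below is hypothetical: a performer's tone is summarized as a small vector of sub-memetic parameters, and an unknown sample is attributed to whichever known matrix it sits nearest.

```python
import math

# Each hypothetical myte matrix: (noise_ratio, pitch_deviation_cents, attack_ms)
KNOWN_TONES = {
    "player_a": (0.05, 3.0, 12.0),   # clean, precise, fast attack
    "player_b": (0.20, 8.0, 40.0),   # breathy, loose, slow attack
}

def identify(sample):
    """Return the known performer whose myte matrix is nearest the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(KNOWN_TONES, key=lambda name: dist(KNOWN_TONES[name], sample))

# A slightly breathy, slow-attack sample lands closest to player_b.
print(identify((0.18, 7.0, 35.0)))
```

This is essentially what a listener does intuitively in a single note, and what tone-modeling software does explicitly: reduce a sound to its myte-like parameters and match against a stored profile.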
This is certainly the premise by which cover bands ply their trade (albeit intuitively); it never sounds exactly like the real thing, but good enough is often good enough.
SUB MELODIC AUDIO PHENOMENA
The identification of mytes and the algorithm by which they operate is also the premise by which manufacturers of simulation software produce devices that let us dial in an entire era, and therefore evoke collective memories from a designated demographic. This is obviously a useful tool for creating brand assets or scoring period pieces, and it is commonplace today.
And so, I think there can also be no doubt that even the merest sliver of a song can capably evoke an emotional response, and that the sonic gestures responsible for this cognitive phenomenon are sub-melodic and devoid of rhythmic patterning; in other words, 'incomplete' expressions.
We might similarly ask how much information can be conveyed and decoded from within a single phoneme. For no doubt nomenclature experts, having their own understanding of the power of incomplete expressions of sound, ply their trade by stitching phonemes together into new, but nevertheless highly suggestive, words for their clients.
MUSICAL EXPRESSION OR AUDIO ALGORITHM?
But whether musical or linguistic in nature, and assuming there is a cognitive difference between the two approaches (I'm not sure either way), I do think such expressions require a different method of explaining their utility as elements in aural message delivery systems and sonic branding than can currently be achieved by either the macro analysis of traditional music theory or music memetics.
And until that time, we can still take pleasure in knowing that they absolutely do exist, as surely as we know when the punch has been spiked with lysergic acid diethylamide. That is, we know because these sub-musical (and often non-musical) particles act upon the senses like a hallucinogen: not because they are chemical in nature (they are not), but because of their ability to serve as very real psychoactive triggers upon the consciousness and conduits to virtual worlds and experiences.
It stands to reason, then, that apart from melody and rhythm, there is something else inside the music that makes any assembly of sound expressive. Some call it nuance and others call it noise; but if what we collectively call noise can be identified as an essential element in the production of expression, then clearly it is something akin to critical noise.