Certain constraints apply to most musical items. For example, all pitch values come from a particular scale, and rhythm is defined by nested regular beats. Although the same types of constraint apply to most musical items, the details vary from item to item: one tune may be in 3/4 time and another in 4/4 time. Each time signature constitutes a possible constraint that might apply to a musical item. (And it is not possible for both constraints to apply simultaneously.)
The hypothesis advanced here is that constraints are the only thing that make music musical, that different constraints have different levels of musicality, and that, to a first approximation, the overall musicality of a musical item is the sum of the musicalities of the individual constraints that apply to that item.
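The additive part of this hypothesis can be sketched in a few lines of Python. The constraint names and scores below are invented purely for illustration; no actual method of measuring a constraint's musicality is being proposed here.

```python
# Hypothetical sketch of the additive hypothesis: overall musicality
# is (to a first approximation) the sum of the musicalities of the
# individual constraints that apply to an item. All names and scores
# below are assumptions for illustration, not measured values.

def overall_musicality(constraint_scores):
    """Sum the musicality contributed by each applicable constraint."""
    return sum(constraint_scores.values())

scores = {
    "pitches_from_scale": 0.9,      # assumed score, not measured
    "nested_regular_beats": 0.8,
    "smooth_volume_changes": 0.3,
}
print(round(overall_musicality(scores), 2))  # → 2.0
```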
It follows from this hypothesis that everything that we don't know about what makes music musical is a consequence of what we don't know about the types of constraints that define individual musical items.
Some musical constraints can be specified precisely, and are global, in the sense that they apply without exception to all parts of a musical item. For example, all notes have to belong to a specified scale, or, the times at which musical notes begin are taken from a subset of a set of regular nested beats (for example, regular beats of 4 notes in a bar, 2 notes, 1 note, 1/2 note and 1/4 note).
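These two global constraints are precise enough to check mechanically. A minimal sketch, assuming pitches are represented as MIDI note numbers and onset times as fractions of a bar (both representation choices are assumptions, not part of the hypothesis):

```python
from fractions import Fraction

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def all_pitches_in_scale(pitches, scale=C_MAJOR):
    """Global constraint 1: every pitch belongs to the chosen scale."""
    return all(p % 12 in scale for p in pitches)

def onsets_on_beat_grid(onsets, finest=Fraction(1, 16)):
    """Global constraint 2: every onset lies on the nested beat grid,
    i.e. on a multiple of the finest subdivision (1/16 of a bar here,
    an assumed choice)."""
    return all(t % finest == 0 for t in onsets)

print(all_pitches_in_scale([60, 64, 67]))   # C, E, G → True
print(all_pitches_in_scale([60, 61]))       # C, C# → False
print(onsets_on_beat_grid([Fraction(0), Fraction(3, 8)]))  # → True
```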
Other possible musical constraints are partial. Examples include: most steps from one note to the next are just one step on the scale or, if not, then they are consonant intervals, in which case they are usually part of the current chord. Two other common constraints are that there is an approximately balanced binary structure, and that volume changes tend to happen in a "smooth" manner.
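Unlike a global constraint, a partial constraint is satisfied to a degree. A hedged sketch of measuring one such constraint, the proportion of melodic steps that are either a single scale step or a consonant leap; the scale and the set of consonant intervals are chosen purely for illustration:

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}              # pitch classes
CONSONANT_SEMITONES = {3, 4, 5, 7, 8, 9, 12}  # thirds, fourth, fifth, sixths, octave

def is_scale_step(a, b, scale=C_MAJOR):
    # One diatonic step is 1 or 2 semitones, with both notes in the scale.
    return abs(b - a) in (1, 2) and a % 12 in scale and b % 12 in scale

def step_constraint_degree(pitches):
    """Fraction of consecutive steps that are either one scale step
    or a consonant leap (the "most steps" partial constraint)."""
    steps = list(zip(pitches, pitches[1:]))
    ok = sum(1 for a, b in steps
             if is_scale_step(a, b) or abs(b - a) in CONSONANT_SEMITONES)
    return ok / len(steps)

print(step_constraint_degree([60, 62, 64, 67, 65, 64]))  # → 1.0
```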
To fully understand what music is, we must understand all the possible constraints that apply to musical items, and for each constraint, we must determine what degree of musicality it contributes to the overall musicality of musical items.
Out of the set of all possible constraints that one could apply to music, only some constraints will be musical – if one attempts to compose music according to constraints which are not musical, then the resulting composition will not be musical.
That is, rules such as: all notes come from the scale, and rhythm is based on nested regular beats.
We have not yet discovered a way to formally describe all the constraints that apply to music, which is why we are not yet able to predict the musicality of music independently of subjective observation. That's also why we do not yet know any algorithm that can generate "strong" music without human assistance. (From a mathematical point of view, being able to predict the musicality of any candidate musical item is equivalent to having an algorithm that generates all "strong" items of music. However, ingenuity may be required to design an equivalent generating algorithm that is efficient enough to be useful in practice.)
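The equivalence between predicting musicality and generating strong music can be illustrated with a deliberately inefficient generate-and-test sketch. The toy musicality function here is a placeholder assumption, standing in for the full theory we do not yet have:

```python
from itertools import product

def generate_strong(alphabet, length, musicality, threshold):
    """Turn any musicality predictor into a generator of 'strong'
    items by brute-force enumeration: keep every candidate that the
    predictor scores at or above the threshold."""
    for item in product(alphabet, repeat=length):
        if musicality(item) >= threshold:
            yield item

# Toy predictor (an assumption, not a real theory of musicality):
# reward small melodic steps.
def toy_musicality(item):
    return sum(1 for a, b in zip(item, item[1:]) if abs(b - a) <= 2)

strong = list(generate_strong(range(5), 3, toy_musicality, 2))
print(len(strong))  # → 75
```

As the text notes, such a generator is correct but hopelessly inefficient; ingenuity would be needed to make an equivalent generator practical.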
To put it another way, we know some of the rules (i.e. constraints) that describe music, but we don't yet know all of them. Furthermore, each musical item is defined by a set of constraints specific to that musical item, although at the same time similar types of constraint will apply to many different musical items.
For example, constraints on pitch and on time exist somewhat independently of each other.
If different constraints were truly independent, then it would be possible to exchange the corresponding components between different items of music, for example combining the melody of tune A with the rhythm of tune B, without any loss of musicality. In practice such a cross-bred tune might be a good starting point for further composition, but it will not be as good as the original source compositions.
Each musical rule defines a constraint, which contributes to musicality. But no individual constraint is essential to musicality. For example, there can be musicality in a purely percussive musical item, in which case the constraint that pitch values come from a scale does not apply.
Furthermore, for any common type of constraint which contributes musicality to musical items, which appears to define a general "rule", there may exist other constraints similar to constraints of that type, but not fully compatible with them. If we judge musical items subject to these similar constraints against the general "rule", we will probably observe that the rule is followed only approximately. (A possible real-life example is note-bending. The general rule says that all pitch values come exactly from a chosen scale. But when notes are bent, a somewhat altered version of that general rule applies – i.e. there is still some relationship between the pitch values of notes and the chosen scale, but the pitch does not conform exactly to the scale at all times.)
As well as whole musical items that are "exceptions" to general rules, there can be portions of a musical item that are exceptions to the constraints that apply most of the time, for example a bar of different length, or an accidental note that is not from the main scale.
An explanation for this type of exception is that, in order to satisfy one constraint better, some other constraint must be satisfied less well, but with an overall increase in total musicality. If we only fully understand the constraint that is satisfied less well, the exception will appear to happen for no reason: we do not properly understand the "rule" corresponding to the second constraint, which (as a result of the compromise) is satisfied better.
It is known that different areas of the brain specialise in different aspects of perception. Furthermore, where an aspect of perception has a definite dimension, such as size, or speed, there is very often a direct correlation between the perceived dimension and actual physical location in the cortical map.
It follows that, if we could observe human brain activity with sufficient resolution, we would be able to observe a particular activity pattern resulting from any particular constraint on any musical aspect that is mapped in a manner correlated with position. (That such patterns have not yet been observed is presumably because the required resolution is greater than that of all existing non-intrusive brain scanning technologies, and because human-like music perception is absent from non-human animals, so such activity patterns cannot be observed using the more intrusive techniques applicable to animals in the laboratory.)
A plausible hypothesis about the nature of music is that music is just one thing, even though we readily observe that music has multiple aspects, such as melody, harmony, rhythm and structure. One way to achieve a unified hypothesis is to assume that these different aspects are processed by different cortical maps, and that a single characteristic of neuronal activity accounts for the musicality of each aspect within each corresponding cortical map.
Whatever this characteristic is, it is the long-sought "universal" property of music.
Once sufficient constraints apply to an item of music, the "next note" is close to 100% predictable.
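This near-predictability can be pictured as the intersection of the sets of candidate next notes that each constraint allows; as constraints are added, the intersection shrinks towards a single note. The particular constraints below are invented examples:

```python
# Sketch: each constraint, given the context (here just the previous
# note), allows a set of candidate next notes; applying more
# constraints shrinks the intersection. All constraints are
# illustrative assumptions.

SCALE = {0, 2, 4, 5, 7, 9, 11}
CHORD = {0, 4, 7}  # C major triad pitch classes

def in_scale(prev):      return {p for p in range(128) if p % 12 in SCALE}
def near_previous(prev): return {p for p in range(128) if abs(p - prev) <= 2}
def chord_tone(prev):    return {p for p in range(128) if p % 12 in CHORD}

def candidates(prev, constraints):
    out = set(range(128))
    for allowed in constraints:
        out &= allowed(prev)
    return out

print(sorted(candidates(60, [in_scale, near_previous])))              # → [59, 60, 62]
print(sorted(candidates(60, [in_scale, near_previous, chord_tone])))  # → [60]
```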
Probabilistic models which attempt to explain the next note as a probabilistic function of previous notes will underestimate the predictability of the notes in a musical item, if such models are not fully informed about the types of constraints that may apply to music. And we know that we are all ignorant about at least some of the constraints that may apply to music, because we do not yet have any theory that fully accounts for all known "strong" music.
Furthermore, whether or not a listener has heard an individual item of music before (and can therefore predict the next note) is not directly relevant to the perceived musicality of the item, because what matters is whether the music satisfies the relevant constraints, and this does not depend on the listener's familiarity with the specific item.
However, if a new item of music is based on one or more constraints of a type not familiar to the listener, there may be a period of "learning" to perceive those constraints before the listener fully "appreciates" the music.
The more constrained music is, the more compressible it will be. However the musicality of music does not just depend on how many constraints apply to the music, but also depends on the musicality of the particular constraints. Arbitrarily chosen constraints may result in music that is very predictable and therefore very compressible. But if those constraints are not "musical" constraints, then the music will not be musical.
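The link between constraint and compressibility can be demonstrated directly. In this sketch, the particular constraints and the use of zlib as a stand-in compressor are illustrative assumptions:

```python
import random, zlib

random.seed(0)  # deterministic, for reproducibility

# Unconstrained sequence: 256 pitches drawn freely from 0..127.
free = bytes(random.randrange(128) for _ in range(256))

# Constrained sequence: same length, but every pitch comes from a
# 7-note scale and each step moves at most one scale degree
# (both constraints chosen purely for illustration).
scale = [60, 62, 64, 65, 67, 69, 71]
i, constrained = 3, bytearray()
for _ in range(256):
    i = max(0, min(6, i + random.choice((-1, 0, 1))))
    constrained.append(scale[i])

# The more constrained sequence compresses to fewer bytes.
print(len(zlib.compress(free)) > len(zlib.compress(bytes(constrained))))  # → True
```

Note that this only demonstrates compressibility, not musicality: the arbitrary random-walk constraint above makes the sequence compressible without making it musical.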
This article is a Propositional Exposition. It is licensed under the Creative Commons Attribution-ShareAlike license.