The function of a neuron is to process information from its inputs, which consist of signals (or action potentials) coming in from those neurons whose axons connect to its dendrites. The neuron constantly performs a calculation based on these inputs, and outputs the result as a sequence of signals sent out on its own axon to other neurons. The meaning of a neuron both determines and is determined by the relationship between the output signals that it generates and the component of perception, belief, emotion, intention or other mental state that is represented by the activity of that neuron.
When a software developer writes a program, the meaning of information stored in different locations is generally determined by the developer, who creates named locations (generally called variables) and assigns a particular meaning to each one. Once such assignments are made, the developer can write code which calculates the value for each location from a relevant set of data read in from other locations, in a manner which the developer has determined to be correct, given the meanings assigned to the input locations and to the output location.
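To make this concrete, here is a small Python sketch (entirely illustrative; the names and the loan-payment example are mine, not anything from a real system) in which each named variable has one developer-chosen meaning, and the output is computed from the inputs against those fixed meanings:

```python
# An illustrative sketch of the fixed-meaning approach: each named variable is
# given one meaning by the developer, and the code that computes the output is
# written against those fixed meanings.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """'principal', 'annual_rate' and 'months' each mean exactly one thing,
    chosen in advance by the developer; so does the output, 'payment'."""
    monthly_rate = annual_rate / 12.0
    if monthly_rate == 0:
        return principal / months
    factor = (1 + monthly_rate) ** months
    payment = principal * monthly_rate * factor / (factor - 1)
    return payment

print(monthly_payment(10_000.0, 0.06, 24))  # the meaning of the result never changes
```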
If you were designing information processing hardware, you would tend to follow a similar strategy, but the locations would consist of specific components physically located in your hardware, and the flow and processing of information would be determined by physical connections between the different components.
Pre-determining the meaning of information stored and processed in different locations would seem to simplify the design of software or hardware, and it is hard to see how one could ever manage the design of a system in which such pre-determination did not occur.
It is true that, if we consider mappings of, for instance, high-level programming language variables to physical memory, then this mapping can vary on the fly, but this variation is managed entirely by the implementation of a generic virtual machine (within which programs written in the high-level programming language execute) on the low-level physical hardware. As far as the high-level programmer is concerned, the implementation is entirely transparent, and they need only concern themselves with the assignment of meaning to their "fixed location" program variables.
Fixed mappings from meaning to location seem necessary to make the design of information processing systems tractable, but this is not what happens in the brain.
There is a certain degree of "labelling", and of assignment of meaning according to that labelling, that occurs when brain growth and development are controlled by neurotrophins: signalling molecules which cause neurons to connect to each other in certain patterns as they grow and develop.
But, and this is the big "but", the corresponding assignment of meaning is fuzzy and probabilistic. And it can change over time. The name for this tendency to change over time is cortical plasticity (or just plasticity, if one is not considering only cortical neurons).
Furthermore, there does not appear to be any "central control" that is in charge of deciding which neuron should carry and process precisely which meaning. The brain has no analogue of the software developer reserving locations, labelling them and then assigning meaning to them. Whatever the processes are that determine the allocation of meaning, they seem to involve only the interactions of neurons with other neurons that they are directly connected to.
It is as if there were no programmer to manage the program, so the variables just talked among themselves, each variable talking only to those variables that either send information to it or receive information from it during a particular calculation.
When a developer debugs their software, the simplest thing that they can do is observe the values of data stored in particular variables as the program processes particular input data. The developer has in their head (or maybe even written down) the intended meanings of particular variables, and they can compare these to the actual values of data stored in those variables at particular times.
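For instance (a purely illustrative sketch; the function and its intended meanings are made up here), the developer might insert debug prints or assertions that compare the values actually stored against the meanings they have in mind:

```python
# A sketch of the debugging strategy described above: the developer knows the
# intended meaning of each variable and checks the actual stored value against
# that meaning while the program runs on particular input data.

def average_speed(distance_km: float, time_hours: float) -> float:
    speed = distance_km / time_hours  # intended meaning: speed in km/h
    # compare the actual value against the intended meaning:
    assert speed >= 0, f"speed should be a non-negative km/h value, got {speed}"
    print(f"debug: distance_km={distance_km}, time_hours={time_hours}, speed={speed}")
    return speed

average_speed(150.0, 2.0)  # the developer checks that 75.0 km/h matches their intention
```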
The brain, by contrast, has no central assignment of meaning: meaning is apparently determined by local negotiations between connected neurons, and it can change over time. So how on earth can an individual neuron "debug" itself, and/or "develop" its meaning and its implementation of that meaning?
Given that neurons do not have fixed pre-determined meanings, it is rather difficult to exactly describe the task that a particular neuron is performing, which makes it difficult for us (or the neuron) to determine how well it is performing that task, and how it could change in order to perform better.
Whatever task it is that a neuron performs, it is necessarily limited to receiving its available inputs, using them to perform a calculation and then sending the result to whatever other neurons are suitably connected to receive that result. This suggests the following corporate-style mission statement for neurons:
It is the task of each neuron to make use of the inputs available to it to produce information which is of maximum utility and value to those neurons that receive its outputs. Additionally, each neuron should provide feedback to its input neurons about the value of the information that it receives from them.
By defining the task of a neuron in terms of "value", we avoid the difficulty of not knowing in advance exactly what the task of each neuron is. The idea of value suggests an analogy with a free-enterprise economy, where each neuron is an entrepreneur in a "market" consisting of the whole brain. In that case there must be some explicit measurement of this value, one which takes account of "supply" and "demand" within a neuronal "economy" of information.
Even without any specific assumptions about how neuronal "value" is calculated, or about how information describing that value is distributed, we can use the idea of a supply-and-demand economy to explain some aspects of cortical plasticity. For example, if certain neurons are destroyed, for instance by disease, other nearby neurons will often take over the functionality of the destroyed neurons. We can interpret this as follows: the demand for the information previously supplied by the destroyed neurons increases, so that information acquires a higher value in the market, and the higher value leads other neurons to switch over to supplying it themselves. Previously those neurons would not have bothered to supply it, because the original (now destroyed) suppliers were more competitive at doing so.
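To see how little machinery this reading needs, here is a deliberately crude toy simulation (not a claim about any real neural mechanism; the "products", the demand figures and the value formula are all invented) in which value is just demand divided by current supply, and each surviving "neuron" switches to whatever information is currently most valuable:

```python
# A toy sketch of the supply/demand reading of plasticity: when suppliers of
# one kind of information are destroyed, its value rises, and other neurons
# switch over to supplying it.

demand = {"edge_detection": 5.0, "motion_detection": 5.0}   # hypothetical products
suppliers = {"n1": "edge_detection", "n2": "edge_detection",
             "n3": "motion_detection", "n4": "motion_detection"}

def value(product):
    supply = sum(1 for p in suppliers.values() if p == product)
    return demand[product] / max(supply, 1)   # scarcer information is worth more

# "Disease" destroys the motion-detection suppliers.
for dead in ("n3", "n4"):
    del suppliers[dead]

# Each surviving neuron re-evaluates the market and switches if some other
# product is now worth more than the one it currently supplies.
for name, product in list(suppliers.items()):
    best = max(demand, key=value)
    if value(best) > value(product):
        suppliers[name] = best

print(suppliers)  # a former edge-detection supplier now supplies motion detection
```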
The most well-known theory about how local interactions between neurons determine changes in neuronal connections is that of Donald Hebb. The theory is sometimes summed up as "neurons that fire together wire together". We can consider whether this has a plausible economic interpretation.
For example, consider an input neuron X whose firing is somewhat correlated with the firing of the neuron Y that it sends information to, and suppose (for this example) that X connects to Y by an excitatory synapse. Hebbian theory predicts that the connection from neuron X to neuron Y will be strengthened because of this correlation (so whatever mechanism determines the strengthening, it must involve detecting the relevant correlation).
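As a minimal illustration of that correlation-detecting mechanism (a sketch only: the learning rate and the firing probabilities below are made-up numbers), the basic Hebbian rule strengthens the connection exactly when the two neurons fire together, so correlated firing produces a larger weight over time than uncorrelated firing would:

```python
import numpy as np

# A sketch of the basic Hebbian update, delta_w = eta * x * y
# ("fire together, wire together").

rng = np.random.default_rng(0)
eta, w, steps = 0.01, 0.0, 5000

for _ in range(steps):
    x = rng.integers(0, 2)                    # does input neuron X fire? (0 or 1)
    # Y fires with higher probability when X fires: the two are correlated.
    y = rng.random() < (0.7 if x else 0.3)
    w += eta * x * y                          # strengthen only when both fire

print(f"weight after correlated firing: {w:.2f}")
# With uncorrelated firing the weight would grow more slowly, so the size of w
# effectively reflects how often X's firing coincides with Y's.
```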
From an economic point of view, if neuron Y is performing a certain task, and neuron X is correlated with it, it is plausible that neuron X might be supplying information that is relevant to whatever it is that neuron Y is doing. If this is a valid assumption, then we have a local mechanism that is consistent with an economic interpretation of cortical learning and plasticity.
I suspect that one simple mechanism is not enough to explain everything that goes on in the brain, and that the rules of economics might be different for neurons performing qualitatively different kinds of tasks, for example neurons involved in decision-making as compared to neurons involved in perception. There may also be more global indicators of value that are relevant in some situations, for example our perceptions of pleasure and pain, which carry information about overall success at satisfying biological goals. But a value-based theory does seem to be one kind of theory that can provide a framework for explaining how a neuron can "know what to do", even though "what a neuron does" is intrinsically variable, and seems to vary as a result of the very processes that occur when each neuron in the network attempts to do its own job "better".