In 1945, economist F.A. Hayek wrote an article entitled “The Use of Knowledge in Society.” In it, he described flawed thinking he found endemic in the world of economic planning. His arguments centered on how knowledge is used and how limits on the availability of knowledge prevent theoretical methods from working in practice. In particular, Hayek found that the idea of concentrating all necessary knowledge in a central planning authority was unworkable for several reasons:
- Solving an economic optimization problem as typically posed requires that data for the whole society be, as he put it, “given to a single mind,” i.e., concentrated in one place, system, or organization
- Such a solution requires knowledge that is dispersed among many people, with no possibility that any individual or group of experts could acquire it all
- The necessary data exists in small portions that are not integrated, are incomplete, and may be contradictory
- Some data can be found only locally and requires special understanding even to recognize
- The data can be transitory and may change on an unpredictable timeline
- The statistical aggregates beloved of economists are not useful in dealing with constant small changes that make up actual economic dynamics
- Yet economists persisted in assuming that markets work only if all participants have perfect knowledge, even though, were that the case, markets would have no purpose
Hayek suggested that markets have value precisely because they enable participants to benefit from widely dispersed knowledge that is never gathered into one place, knowledge of which no one (including market designers, market operators, and government planners) holds more than the slightest fragment. He also observed that some of this knowledge is fleeting and can be recognized only by people with specialized understanding of local conditions and opportunities (“unorganized knowledge of the circumstances of time and place”), knowledge disdained by the economists of his day. He portrayed prices as both distillers of information and motivators. Price communicates essential information in a very compressed and, as he described it, “symbolic” form: information beyond the horizon of any individual actor to acquire. Price, he argued, is a coordination mechanism; it enables division of labor, and it puts to work knowledge that is local in time and space as well as knowledge that is “unorganized” and never available to central planners. In effect, it is crowdsourcing, a term unknown in 1945.
The applicability of this thinking to present-day electric power systems is striking. Electricity market economists and power system control engineers alike still engage in the fallacies of centralization: the assumptions that global information is available and that the control elements of the extended grid must have complete information (see, for example, the work on transactive energy and distributed state estimation). Both centralized markets and grand central optimization schemes of the type advocated by what I shall call the “Boston school” are not workable in the coming grid environment, as indicated by recent market failures in Texas, Australia, and elsewhere.
Instead, a distributed method of coordination that avoids the shortcomings described above is needed. The coordination framework derived from layered decomposition that we call Laminar Coordination satisfies these requirements. It is distributed in nature and provides coordination signals that encode information from the entire system without the need to concentrate that knowledge in any one place, system, or device. Some of the characteristics of the Laminar approach are:
- it is not centralized
- it provides Hayek’s “symbols” in the forms of resource allocation signals via primal decomposition and price signals via dual decomposition, which can be mixed in the same coordination framework
- it does not accumulate information across tiers or layers and hence is scalable, an issue Hayek did not address in his essay
- it enables use of local constraints and objectives (“local selfish optimization inside global coordination”)
- it does not depend on theoretical optimality
- it is capable of fast reaction close to the distribution edge, avoiding the latency problems that are growing in conventional systems as device dynamics move to faster and faster time scales
- it provides a flexible structure that avoids hidden coupling, tier bypassing, and coordination gapping – common flaws in DER integration approaches
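To make the dual-decomposition idea above concrete, here is a minimal sketch of price-based coordination. This is an illustrative toy with invented agents and parameters, not the Laminar implementation itself: a coordinator broadcasts only a price signal (the dual variable, playing the role of Hayek’s “symbol”) and observes only the aggregate response, while each agent solves its own private, local optimization. No agent’s private cost data is ever gathered in one place.

```python
# Toy dual-decomposition coordination (hypothetical example).
# N agents each have a private quadratic cost c_i(x) = a*(x - t)^2 and must
# jointly satisfy a shared constraint sum(x_i) = demand. The coordinator
# never sees (a, t); it only broadcasts a price and observes the aggregate.

agents = [(1.0, 2.0), (0.5, 4.0), (2.0, 1.0)]   # private (a, t) pairs, illustrative
demand = 9.0      # shared resource constraint: allocations must sum to this
price = 0.0       # dual variable broadcast by the coordinator
step = 0.2        # subgradient step size for the price update

for _ in range(500):
    # Each agent locally solves min_x a*(x - t)^2 - price*x,
    # which has the closed-form answer x = t + price / (2*a).
    allocations = [t + price / (2 * a) for (a, t) in agents]
    # The coordinator sees only the aggregate mismatch, not individual data.
    mismatch = demand - sum(allocations)
    price += step * mismatch        # raise price if demand is unmet

print(round(price, 3), [round(x, 3) for x in allocations])
# → 1.143 [2.571, 5.143, 1.286]
```

The price iteration converges to the value at which the agents’ independent, “selfish” responses exactly clear the shared constraint, illustrating local optimization inside global coordination. A primal decomposition would instead broadcast resource allocations and collect sensitivities, and as noted above the two can be mixed within one framework.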
If we are to move to the second S-curve grid we need, the sophistry of grand centralism and universal complete knowledge must be replaced by rigorous, structured “agent-sourcing”/network-economy approaches to DER integration.