The current discussion around Distributed Energy Resources (DER) and grid modernization often relies on a magic box called optimization to make everything work. According to its proponents, it will coordinate tens of millions of grid-connected devices, unlock large amounts of latent value, and avoid the need for grid infrastructure investment. Unfortunately, optimal solutions may well be brittle, meaning that a slight change in underlying conditions can make the formerly optimal solution invalid. Optima can be broad and nearly flat, in which case being a bit off the optimum makes very little difference in outcomes. Or optima can be sharp, in which case a slight shift makes a very large difference in outcomes. Also, optima are often found at the edge of the feasible solution set (think of linear programming to help visualize this, even though grid problems are nonlinear), so that a slight shift in conditions can leave a formerly optimal solution outside the bounds of feasibility.
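To make the brittleness concrete, here is a minimal sketch – the objective, constraints, and numbers are invented for illustration, not taken from any grid model – of a tiny linear program whose optimum sits on a vertex of the feasible polytope, and which a 5% shift in a single constraint renders infeasible:

```python
# A toy LP: the optimum lands on a vertex of the feasible polytope,
# and a small shift in one constraint leaves that point infeasible.
import numpy as np
from scipy.optimize import linprog

# Maximize x + y (linprog minimizes, so negate the objective).
c = [-1.0, -1.0]
A = [[1.0, 2.0],   #  x + 2y <= 4
     [3.0, 1.0]]   # 3x +  y <= 6
b = [4.0, 6.0]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
x_opt = res.x  # ~(1.6, 1.2): the vertex where both constraints are tight
print("optimum:", x_opt)

# Shift the first constraint by 5% (conditions change slightly).
b_shifted = [4.0 * 0.95, 6.0]
slack = np.asarray(b_shifted) - np.asarray(A) @ x_opt
print("slack after shift:", slack)  # negative slack: the old optimum is now infeasible
```

Because the optimum is wedged into a corner where constraints are binding, it has no margin: any tightening of a binding constraint invalidates it immediately.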
The better the optimum is compared to other solutions (an economist would say the more value it extracts), the sharper the optimization peak becomes, and the more likely it is that a shift in underlying conditions will push the formerly optimal solution outside the set of feasible solutions, causing it to fail badly. In other words, the optima that make the biggest difference are also the most brittle.
Rather than seeking the holy grail of an optimal solution, it is better to seek a robust one: a solution that is relatively insensitive to variations in exogenous factors and model parameters. Such a solution will be suboptimal by traditional grid techno-economic objectives but superior to the traditional “optimal” solution under conditions of volatility, subsystem or component failure, and external system stress. In essence, we are introducing a new criterion into the optimization problem, namely that the solution be robust. Once robustness is counted as part of the objective, the robust solution is, by that broader measure, the better one.
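This criterion can be made precise. One standard formalization (an assumption here – the essay does not commit to one – but it is the classic robust-optimization template) is to optimize against the worst case over an uncertainty set:

\[
\min_{x \in \mathcal{X}} \; \max_{u \in \mathcal{U}} \; f(x, u)
\quad \text{subject to} \quad g(x, u) \le 0 \quad \forall\, u \in \mathcal{U},
\]

where \(x\) is the vector of controllable decisions (for example, DER dispatch set-points), \(u\) collects the uncertain exogenous factors and model parameters, and \(\mathcal{U}\) is the uncertainty set one chooses to defend against. The price of the worst-case guarantee is exactly the suboptimality described above.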
So where is this better solution to be found? Right in the neighborhood of the traditional optimal solution. Generally, optimization problems have what is known as a feasible set of solutions – those that satisfy the problem constraints. Among the feasible solutions, there may be one that extremizes an objective function (setting aside the issue of local vs. global optima for the moment). Identifying the feasible set can itself be difficult, and locating the optimum within it can be computationally demanding, to the point of scaling unsustainably as the number of elements (such as consumers’ DER devices) grows.
Rather than seeking and choosing that unique solution, we can choose another member of the feasible set that is better from the standpoint of robustness. In fact, since we are selecting for robustness rather than unique extremization, there are likely many candidates in the vicinity of the centroid of the feasible set – in other words, points more interior to the feasible set than the traditional optimum (one concrete way to pick such a point is sketched below). By admitting a larger number of useful and practical solutions, the search for one can be relaxed, so a side benefit of this approach is improved computational scalability.
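The essay does not prescribe how to locate such an interior point. One standard candidate – an assumed choice for illustration, reusing the toy constraints from the earlier sketch – is the Chebyshev center: the center of the largest ball inscribed in the feasible polytope {x : Ax ≤ b}. Finding it requires solving just one more linear program of essentially the same size:

```python
# Chebyshev center of {x : Ax <= b}: maximize the radius r of a ball
# that fits inside every half-space, i.e. a_i.x + r*||a_i|| <= b_i.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0],    #  x + 2y <= 4
              [3.0, 1.0],    # 3x +  y <= 6
              [-1.0, 0.0],   #  x >= 0
              [0.0, -1.0]])  #  y >= 0
b = np.array([4.0, 6.0, 0.0, 0.0])

norms = np.linalg.norm(A, axis=1, keepdims=True)
c = np.array([0.0, 0.0, -1.0])          # variables (x, y, r); maximize r
A_cheb = np.hstack([A, norms])          # a_i.x + r*||a_i|| <= b_i
res = linprog(c, A_ub=A_cheb, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
center, margin = res.x[:2], res.x[2]
print("interior candidate:", center, "margin:", margin)

# The same 5% shift that broke the vertex optimum leaves this point feasible.
b_shifted = b.copy(); b_shifted[0] *= 0.95
print("still feasible:", bool(np.all(A @ center <= b_shifted)))
```

The interior point gives up some objective value relative to the vertex optimum, but it keeps a feasibility margin on every constraint – precisely the trade of optimality for robustness advocated here.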
Letting go of the obsession with optimality and seeking practical solutions with robustness in mind is a necessary key to wide-scale integration and utilization of consumers’ DER.