Thinking in Systems: International Bestseller

Metadata
- Title: Thinking in Systems: International Bestseller
- Author: Donella H. Meadows; edited by Diana Wright
- Book URL: https://amazon.com/dp/B005VSRFEA
- Open in Kindle: kindle://book/?action=open&asin=B005VSRFEA
- Last Updated on: Thursday, August 12, 2021
Highlights & Notes
If a factory is torn down but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves.… There’s so much talk about the system. And so little understanding. —ROBERT PIRSIG, Zen and the Art of Motorcycle Maintenance
Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes.… Managers do not solve problems, they manage messes. —RUSSELL ACKOFF, operations theorist
Once we see the relationship between structure and behavior, we can begin to understand how systems work, what makes them produce poor results, and how to shift them into better behavior patterns.
A system is a set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world.
Psychologically and politically we would much rather assume that the cause of a problem is “out there,” rather than “in here.” It’s almost irresistible to blame something or someone else, to shift responsibility away from ourselves, and to look for the control knob, the product, the pill, the technical fix that will make a problem go away.
No one deliberately creates those problems, no one wants them to persist, but they persist nonetheless. That is because they are intrinsically systems problems—undesirable behaviors characteristic of the system structures that produce them. They will yield only as we reclaim our intuition, stop casting blame, see the system as the source of its own problems, and find the courage and wisdom to restructure it. Obvious. Yet subversive. An old way of seeing. Yet somehow new. Comforting, in that the solutions are in our hands. Disturbing, because we must do things, or at least see things and think about things, in a different way.
At a time when the world is more messy, more crowded, more interconnected, more interdependent, and more rapidly changing than ever before, the more ways of seeing, the better. The systems-thinking lens allows us to reclaim our intuition about whole systems and hone our abilities to understand parts, see interconnections, ask “what-if” questions about possible future behaviors, and be creative and courageous about system redesign. Then we can use our insights to make a difference in ourselves and our world.
The behavior of a system cannot be known just by knowing the elements of which the system is made.
I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated. —POUL ANDERSON
A system* is an interconnected set of elements that is coherently organized in a way that achieves something. If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.
A system is more than the sum of its parts. It may exhibit adaptive, dynamic, goal-seeking, self-preserving, and sometimes evolutionary behavior.
You think that because you understand “one” that you must therefore understand “two” because one and one make two. But you forget that you must also understand “and.” —Sufi teaching story
THINK ABOUT THIS: How to know whether you are looking at a system or just a bunch of stuff: A) Can you identify parts? … and B) Do the parts affect each other? … and C) Do the parts together produce an effect that is different from the effect of each part on its own? … and perhaps D) Does the effect, the behavior over time, persist in a variety of circumstances?
It’s easier to learn about a system’s elements than about its interconnections.
Many of the interconnections in systems operate through the flow of information. Information holds systems together and plays a great role in determining how they operate.
The best way to deduce the system’s purpose is to watch for a while to see how the system behaves.
Purposes are deduced from behavior, not from rhetoric or stated goals.
A NOTE ON LANGUAGE The word function is generally used for a nonhuman system, the word purpose for a human one, but the distinction is not absolute, since so many systems have both human and nonhuman elements.
An important function of almost every system is to ensure its own perpetuation.
In fact, one of the most frustrating aspects of systems is that the purposes of subunits may add up to an overall behavior that no one wants.
Keeping sub-purposes and overall system purposes in harmony is an essential function of successful systems.
The least obvious part of the system, its function or purpose, is often the most crucial determinant of the system’s behavior.
Changing interconnections in a system can change it dramatically.
A change in purpose changes a system profoundly, even if every element and interconnection remains the same.
Storing information means increasing the complexity of the mechanism.
A stock is the memory of the history of changing flows within the system.
All models, whether mental models or mathematical models, are simplifications of the real world.
The human mind seems to focus more easily on stocks than on flows. On top of that, when we do focus on flows, we tend to focus on inflows more easily than on outflows.
A stock can be increased by decreasing its outflow rate as well as by increasing its inflow rate. There’s more than one way to fill a bathtub!
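The bathtub arithmetic is simple enough to sketch in a few lines of code. This is an illustrative toy, not anything from the book; `simulate_stock` and all its numbers are invented for the example.

```python
# Toy stock-and-flow model: a stock's level is just its accumulated net flow.
def simulate_stock(initial, inflow, outflow, steps):
    """Track a stock under constant inflow and outflow rates."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += inflow - outflow   # the net flow is all that changes the stock
        history.append(stock)
    return history

# Two ways to raise the bathtub by the same 50 units in 10 time steps:
print(simulate_stock(50, inflow=10, outflow=5, steps=10)[-1])  # faster inflow -> 100
print(simulate_stock(50, inflow=5, outflow=0, steps=10)[-1])   # plugged drain -> 100
```

Either change fills the tub to the same level, which is the point of the highlight: outflows are levers too.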
A stock takes time to change, because flows take time to flow.
Stocks generally change slowly, even when the flows into or out of them change suddenly. Therefore, stocks act as delays or buffers or shock absorbers in systems.
The time lags imposed by stocks allow room to maneuver, to experiment, and to revise policies that aren’t working.
The presence of stocks allows inflows and outflows to be independent of each other and temporarily out of balance with each other.
Stocks allow inflows and outflows to be decoupled and to be independent and temporarily out of balance with each other.
Systems thinkers see the world as a collection of stocks along with the mechanisms for regulating the levels in the stocks by manipulating flows.
Systems of information-feedback control are fundamental to all life and human endeavor, from the slow pace of biological evolution to the launching of the latest space satellite.… Everything we do as individuals, as an industry, or as a society is done in the context of an information-feedback system. —Jay W. Forrester
In other words, if you see a behavior that persists over time, there is likely a mechanism creating that consistent behavior. That mechanism operates through a feedback loop. It is the consistent behavior pattern over a long period of time that is the first hint of the existence of a feedback loop.
In any case, the flows into or out of the stock are adjusted because of changes in the size of the stock itself.
A feedback loop is a closed chain of causal connections from a stock, through a set of decisions or rules or physical laws or actions that are dependent on the level of the stock, and back again through a flow to change the stock.
Balancing feedback loops are equilibrating or goal-seeking structures in systems and are both sources of stability and sources of resistance to change.
The presence of a feedback mechanism doesn’t necessarily mean that the mechanism works well. The feedback mechanism may not be strong enough to bring the stock to the desired level.
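A minimal sketch of a balancing loop, with names and numbers invented for illustration: the corrective flow is proportional to the gap between the stock and its goal. A strong loop closes the gap; a weak one, as the highlight warns, falls far short in the same amount of time.

```python
# Balancing feedback loop: correction proportional to the goal-stock gap.
def balancing_loop(stock, goal, strength, steps):
    for _ in range(steps):
        stock += strength * (goal - stock)  # flow driven by the discrepancy
    return stock

print(round(balancing_loop(20.0, 100.0, strength=0.5, steps=30), 1))   # → 100.0
print(round(balancing_loop(20.0, 100.0, strength=0.02, steps=30), 1))  # ~56: still far from goal
```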
I’d need rest to refresh my brain, and to get rest it’s necessary to travel, and to travel one must have money, and in order to get money you have to work.… I am in a vicious circle … from which it is impossible to escape. —Honoré de Balzac, 19th-century novelist and playwright
Here we meet a very important feature. It would seem as if this were circular reasoning; profits fell because investment fell, and investment fell because profits fell. —Jan Tinbergen, economist
Reinforcing feedback loops are self-enhancing, leading to exponential growth or to runaway collapses over time. They are found whenever a stock has the capacity to reinforce or reproduce itself.
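The same sketch with the flow pointed the other way gives a reinforcing loop (gain and numbers invented): the stock's inflow is a fixed fraction of the stock itself, so growth compounds.

```python
# Reinforcing feedback loop: the stock feeds its own inflow.
def reinforcing_loop(stock, gain, steps):
    for _ in range(steps):
        stock += gain * stock  # the bigger the stock, the bigger its own inflow
    return stock

# ~7% growth per step roughly doubles the stock in 10 steps:
print(round(reinforcing_loop(100.0, gain=0.07, steps=10), 1))  # → 196.7
```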
THINK ABOUT THIS: If A causes B, is it possible that B also causes A?
The … goal of all theory is to make the … basic elements as simple and as few as possible without having to surrender the adequate representation of … experience. —Albert Einstein, physicist
First the general one: The information delivered by a feedback loop can only affect future behavior; it cannot arrive fast enough to correct the behavior that drove the current feedback. A person in the system who makes a decision based on that feedback cannot change the behavior that produced it; his or her decisions will affect only future behavior.
The information delivered by a feedback loop—even nonphysical feedback—can only affect future behavior; it can’t deliver a signal fast enough to correct behavior that drove the current feedback. Even nonphysical information takes time to feed back into the system.
If you’re gearing up your work force to a higher level, you have to hire fast enough to correct for those who quit while you are hiring. In other words, your mental model of the system needs to include all the important flows, or you will be surprised by the system’s behavior.
A stock-maintaining balancing feedback loop must have its goal set appropriately to compensate for draining or inflowing processes that affect that stock. Otherwise, the feedback process will fall short of or exceed the target for the stock.
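This point can be made concrete with a small sketch (all numbers invented): a balancing loop whose stock also leaks at a constant rate settles below its stated goal unless the goal is set high enough to compensate for the leak.

```python
# Balancing loop with a constant drain on the stock.
def loop_with_drain(stock, goal, strength, drain, steps):
    for _ in range(steps):
        stock += strength * (goal - stock) - drain  # correction minus the leak
    return stock

naive = loop_with_drain(20.0, goal=100.0, strength=0.5, drain=5.0, steps=50)
compensated = loop_with_drain(20.0, goal=110.0, strength=0.5, drain=5.0, steps=50)
print(round(naive, 1), round(compensated, 1))  # → 90.0 100.0
```

The naive loop equilibrates where correction exactly offsets the drain, 10 units short of its goal; raising the goal by that amount restores the intended level.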
Complex behaviors of systems often arise as the relative strengths of feedback loops shift, causing first one loop and then another to dominate behavior.
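Logistic growth is the classic example of shifting dominance, sketched here with invented numbers: the per-step increment is driven by a reinforcing loop (proportional to the stock) and a balancing loop (the capacity term). Early on the reinforcing loop dominates and increments grow; later the balancing loop takes over and increments shrink toward zero, producing an S-shaped curve.

```python
# Logistic growth: reinforcing loop dominates early, balancing loop late.
def logistic_increments(stock, rate, capacity, steps):
    increments = []
    for _ in range(steps):
        delta = rate * stock * (1 - stock / capacity)
        stock += delta
        increments.append(delta)
    return increments

inc = logistic_increments(10.0, rate=0.3, capacity=1000.0, steps=40)
print(inc[0] < max(inc) and inc[-1] < inc[0])  # → True: growth speeds up, then stalls
```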
System dynamics models explore possible futures and ask “what if” questions.
QUESTIONS FOR TESTING THE VALUE OF A MODEL Are the driving factors likely to unfold this way? If they did, would the system react this way? What is driving the driving factors?
Model utility depends not on whether its driving scenarios are realistic (since no one can know that for sure), but on whether it responds with a realistic pattern of behavior.
Systems with similar feedback structures produce similar dynamic behaviors.
A delay in a balancing feedback loop makes a system likely to oscillate.
Delays are pervasive in systems, and they are strong determinants of behavior. Changing the length of a delay may (or may not, depending on the type of delay and the relative lengths of other delays) make a large change in the behavior of a system.
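A delayed balancing loop is easy to demonstrate (invented numbers): the correction acts on a reading of the stock from several steps ago, so the loop overshoots its goal and then overcorrects, producing a damped oscillation.

```python
# Balancing loop acting on stale information about the stock.
from collections import deque

def delayed_balancing(stock, goal, strength, delay, steps):
    readings = deque([stock] * (delay + 1), maxlen=delay + 1)
    trace = []
    for _ in range(steps):
        perceived = readings[0]                 # stale measurement of the stock
        stock += strength * (goal - perceived)  # correction based on old news
        readings.append(stock)
        trace.append(stock)
    return trace

trace = delayed_balancing(stock=20.0, goal=100.0, strength=0.3, delay=3, steps=60)
print(max(trace) > 100 and min(trace[10:]) < 100)  # → True: it swings around the goal
```

With `delay=0` the same loop approaches the goal smoothly and never overshoots; the delay alone creates the oscillation.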
In physical, exponentially growing systems, there must be at least one reinforcing loop driving the growth and at least one balancing loop constraining the growth, because no physical system can grow forever in a finite environment.
A quantity growing exponentially toward a constraint or limit reaches that limit in a surprisingly short time.
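A quick way to feel this (numbers invented): at 7% growth per period a quantity doubles roughly every 10 periods, so even a limit 32 times away is hit in just over 50 periods.

```python
# Count how many growth periods it takes to reach a fixed limit.
def periods_to_reach(start, limit, rate):
    periods, value = 0, start
    while value < limit:
        value *= 1 + rate
        periods += 1
    return periods

print(periods_to_reach(1.0, 2.0, 0.07))   # → 11 (one doubling)
print(periods_to_reach(1.0, 32.0, 0.07))  # → 52 (five doublings)
```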
The real choice in the management of a nonrenewable resource is whether to get rich very fast or to get less rich but stay that way longer.
Nonrenewable resources are stock-limited. The entire stock is available at once, and can be extracted at any rate (limited mainly by extraction capital). But since the stock is not renewed, the faster the extraction rate, the shorter the lifetime of the resource.
Renewable resources are flow-limited. They can support extraction or harvest indefinitely, but only at a finite flow rate equal to their regeneration rate. If they are extracted faster than they regenerate, they may eventually be driven below a critical threshold and become, for all practical purposes, nonrenewable.
The trick, as with all the behavioral possibilities of complex systems, is to recognize what structures contain which latent behaviors, and what conditions release those behaviors—and, where possible, to arrange the structures and conditions to reduce the probability of destructive behaviors and to encourage the possibility of beneficial ones.
If the land mechanism as a whole is good, then every part is good, whether we understand it or not. If the biota, in the course of aeons, has built something we like but do not understand, then who but a fool would discard seemingly useless parts? To keep every cog and wheel is the first precaution of intelligent tinkering. —Aldo Leopold, forester
Placing a system in a straitjacket of constancy can cause fragility to evolve.
There are always limits to resilience.
Resilience is not the same thing as being static or constant over time. Resilient systems can be very dynamic. Short-term oscillations, or periodic outbreaks, or long cycles of succession, climax, and collapse may in fact be the normal condition, which resilience acts to restore! And, conversely, systems that are constant over time can be unresilient. This distinction between static stability and resilience is important. Static stability is something you can see; it’s measured by variation in the condition of a system week by week or year by year. Resilience is something that may be very hard to see, unless you exceed its limits, overwhelm and damage the balancing loops, and the system structure breaks down. Because resilience may not be obvious without a whole-system view, people often sacrifice resilience for stability, or for productivity, or for some other more immediately recognizable system property.
Systems need to be managed not only for productivity or stability, they also need to be managed for resilience—the ability to recover from perturbation, the ability to restore or repair themselves.
Like resilience, self-organization is often sacrificed for purposes of short-term productivity and stability. Productivity and stability are the usual excuses for turning creative human beings into mechanical adjuncts to production processes. Or for narrowing the genetic variability of crop plants. Or for establishing bureaucracies and theories of knowledge that treat people as if they were only numbers.
Self-organization produces heterogeneity and unpredictability. It is likely to come up with whole new structures, whole new ways of doing things. It requires freedom and experimentation, and a certain amount of disorder. These conditions that encourage self-organization often can be scary for individuals and threatening to power structures. As a consequence, education systems may restrict the creative powers of children instead of stimulating those powers. Economic policies may lean toward supporting established, powerful enterprises rather than upstart, new ones. And many governments prefer their people not to be too self-organizing.
Systems often have the property of self-organization—the ability to structure themselves, to create new structure, to learn, diversify, and complexify. Even complex forms of self-organization may arise from relatively simple organizing rules—or may not.
So, naturalists observe, a flea
Has smaller Fleas that on him prey;
And these have smaller still to bite ’em,
And so proceed ad infinitum.
—Jonathan Swift, 18th-century poet
Corporate systems, military systems, ecological systems, economic systems, living organisms, are arranged in hierarchies. It is no accident that that is so. If subsystems can largely take care of themselves, regulate themselves, maintain themselves, and yet serve the needs of the larger system, while the larger system coordinates and enhances the functioning of the subsystems, a stable, resilient, and efficient structure results. It is hard to imagine how any other kind of arrangement could have come to be.
Complex systems can evolve from simple systems only if there are stable intermediate forms. The resulting complex forms will naturally be hierarchic. That may explain why hierarchies are so common in the systems nature presents to us. Among all possible complex forms, hierarchies are the only ones that have had the time to evolve.
Hierarchies evolve from the lowest level up—from the pieces to the whole, from cell to organ to organism, from individual to team, from actual production to management of production.
The original purpose of a hierarchy is always to help its originating subsystems do their jobs better. This is something, unfortunately, that both the higher and the lower levels of a greatly articulated hierarchy easily can forget. Therefore, many systems are not meeting our goals because of malfunctioning hierarchies.
When a subsystem’s goals dominate at the expense of the total system’s goals, the resulting behavior is called suboptimization.
To be a highly functional system, hierarchy must balance the welfare, freedoms, and responsibilities of the subsystems and total system—there must be enough central control to achieve coordination toward the large-system goal, and enough autonomy to keep all subsystems flourishing, functioning, and self-organizing.
Hierarchical systems evolve from the bottom up. The purpose of the upper layers of the hierarchy is to serve the purposes of the lower layers.
The trouble … is that we are terrifyingly ignorant. The most learned of us are ignorant.… The acquisition of knowledge always involves the revelation of ignorance—almost is the revelation of ignorance. Our knowledge of the world instructs us first of all that the world is greater than our knowledge of it. —Wendell Berry, writer and Kentucky farmer
Everything we think we know about the world is a model. Our models do have a strong congruence with the world. Our models fall far short of representing the real world fully.
A system is a big black box
Of which we can’t unlock the locks,
And all we can find out about
Is what goes in and what comes out.
Perceiving input-output pairs,
Related by parameters,
Permits us, sometimes, to relate
An input, output and a state.
If this relation’s good and stable
Then to predict we may be able,
But if this fails us—heaven forbid!
We’ll be compelled to force the lid!
—Kenneth Boulding, economist
The behavior of a system is its performance over time—its growth, stagnation, decline, oscillation, randomness, or evolution. If the news did a better job of putting events into historical context, we would have better behavior-level understanding, which is deeper than event-level understanding. When a systems thinker encounters a problem, the first thing he or she does is look for data, time graphs, the history of the system. That’s because long-term behavior provides clues to the underlying system structure. And structure is the key to understanding not just what is happening, but why.
System structure is the source of system behavior. System behavior reveals itself as a series of events over time.
Many relationships in systems are nonlinear. Their relative strengths shift in disproportionate amounts as the stocks in the system shift. Nonlinearities in feedback systems produce shifting dominance of loops and many complexities in system behavior.
When we think in terms of systems, we see that a fundamental misconception is embedded in the popular term “side-effects.”… This phrase means roughly “effects which I hadn’t foreseen or don’t want to think about.”… Side-effects no more deserve the adjective “side” than does the “principal” effect. It is hard to think in terms of systems, and we eagerly warp our language to protect ourselves from the necessity of doing so. —Garrett Hardin, ecologist
Disorderly, mixed-up borders are sources of diversity and creativity.
On planet Earth there are no system “clouds,” no ultimate boundaries. Even real clouds in the sky are part of a hydrological cycle. Everything physical comes from somewhere, everything goes somewhere, everything keeps moving.
There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion—the questions we want to ask.
Systems surprise us because our minds like to think about single causes neatly producing single effects.
At any given time, the input that is most important to a system is the one that is most limiting.
To shift attention from the abundant factors to the next potential limiting factor is to gain real understanding of, and control over, the growth process.
Any physical entity with multiple inputs and outputs is surrounded by layers of limits.
There always will be limits to growth. They can be self-imposed. If they aren’t, they will be system-imposed.
I realize with fright that my impatience for the re-establishment of democracy had something almost communist in it; or, more generally, something rationalist. I had wanted to make history move ahead in the same way that a child pulls on a plant to make it grow more quickly. I believe we must learn to wait as we learn to create. We have to patiently sow the seeds, assiduously water the earth where they are sown and give the plants the time that is their own. One cannot fool a plant any more than one can fool history. —Václav Havel, playwright, last president of Czechoslovakia and first president of the Czech Republic
Overshoots, oscillations, and collapses are always caused by delays.
When there are long delays in feedback loops, some sort of foresight is essential. To act only when a problem becomes obvious is to miss an important opportunity to solve the problem.
As every individual, therefore, endeavours as much as he can both to employ his capital in the support of domestic industry, and so to direct that industry that its produce may be of greatest value… he generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it.… He intends his own security; … he intends only his own gain and he is in this … led by an invisible hand to promote an end which was no part of his intention. By pursuing his own interest he frequently promotes that of society more effectually than when he really intends to promote it. —Adam Smith, 18th-century political economist
Change comes first from stepping outside the limited information that can be seen from any single place in the system and getting an overview. From a wider perspective, information flows, goals, incentives, and disincentives can be restructured so that separate, bounded, rational actions do add up to results that everyone desires. It’s amazing how quickly and easily behavior changes can come, with even slight enlargement of bounded rationality, by providing better, more complete, timelier information.
To paraphrase a common prayer: God grant us the serenity to exercise our bounded rationality freely in the systems that are structured appropriately, the courage to restructure the systems that aren’t, and the wisdom to know the difference!
The bounded rationality of each actor in a system may not lead to decisions that further the welfare of the system as a whole.
What makes a difference is redesigning the system to improve the information, incentives, disincentives, goals, stresses, and constraints that have an effect on specific actors.
Rational elites … know everything there is to know about their self-contained technical or scientific worlds, but lack a broader perspective. They range from Marxist cadres to Jesuits, from Harvard MBAs to army staff officers.… They have a common underlying concern: how to get their particular system to function. Meanwhile … civilization becomes increasingly directionless and incomprehensible. —John Ralston Saul, political scientist
Such resistance to change arises when goals of subsystems are different from and inconsistent with each other.
The alternative to overpowering policy resistance is so counterintuitive that it’s usually unthinkable. Let go. Give up ineffective policies. Let the resources and energy spent on both enforcing and resisting be used for more constructive purposes. You won’t get your way with the system, but it won’t go as far in a bad direction as you think, because much of the action you were trying to correct was in response to your own action. If you calm down, those who are pulling against you will calm down too.
That calming down may provide the opportunity to look more closely at the feedbacks within the system, to understand the bounded rationality behind them, and to find a way to meet the goals of the participants in the system while moving the state of the system in a better direction.
The most effective way of dealing with policy resistance is to find a way of aligning the various goals of the subsystems, usually by providing an overarching goal that allows all actors to break out of their bounded rationality. If everyone can work harmoniously toward the same outcome (if all feedback loops are serving the same goal), the results can be amazing. The most familiar examples of this harmonization of goals are mobilizations of economies during wartime, or recovery after war or natural disaster.
THE TRAP: POLICY RESISTANCE When various actors try to pull a system stock toward various goals, the result can be policy resistance. Any new policy, especially if it’s effective, just pulls the stock farther from the goals of other actors and produces additional resistance, with a result that no one likes, but that everyone expends considerable effort in maintaining. THE WAY OUT Let go. Bring in all the actors and use the energy formerly expended on resistance to seek out mutually satisfactory ways for all goals to be realized—or redefinitions of larger and more important goals that everyone can pull toward together.
The tragedy of the commons arises from missing (or too long delayed) feedback from the resource to the growth of the users of that resource.
THE TRAP: TRAGEDY OF THE COMMONS When there is a commonly shared resource, every user benefits directly from its use, but shares the costs of its abuse with everyone else. Therefore, there is very weak feedback from the condition of the resource to the decisions of the resource users. The consequence is overuse of the resource, eroding it until it becomes unavailable to anyone. THE WAY OUT Educate and exhort the users, so they understand the consequences of abusing the resource. And also restore or strengthen the missing feedback link, either by privatizing the resource so each user feels the direct consequences of its abuse or (since many resources cannot be privatized) by regulating the access of all users to the resource.
The actor in this feedback loop (British government, business, hospital, fat person, school administrator, jogger) has, as usual, a performance goal or desired system state that is compared to the actual state. If there is a discrepancy, action is taken. So far, that is an ordinary balancing feedback loop that should keep performance at the desired level. But in this system, there is a distinction between the actual system state and the perceived state. The actor tends to believe bad news more than good news. As actual performance varies, the best results are dismissed as aberrations, the worst results stay in the memory. The actor thinks things are worse than they really are. And to complete this tragic archetype, the desired state of the system is influenced by the perceived state. Standards aren’t absolute. When perceived performance slips, the goal is allowed to slip. “Well, that’s about all you can expect.” “Well, we’re not doing much worse than we were last year.” “Well, look around, everybody else is having trouble too.”
THE TRAP: DRIFT TO LOW PERFORMANCE Allowing performance standards to be influenced by past performance, especially if there is a negative bias in perceiving past performance, sets up a reinforcing feedback loop of eroding goals that sets a system drifting toward low performance. THE WAY OUT Keep performance standards absolute. Even better, let standards be enhanced by the best actual performances instead of being discouraged by the worst. Use the same structure to set up a drift toward high performance!
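A hedged sketch of the drift-to-low-performance archetype described above (the model structure follows the excerpt; every number and parameter name is made up): performance scatters around the goal, perception discounts good news, and the standard is allowed to follow perceived performance downward.

```python
# Eroding-goals loop: pessimistic perception drags the standard down.
import random

def drifting_goal(goal, steps, erosion=0.1, optimism_discount=0.9, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        actual = goal + rng.uniform(-10, 10)  # results scatter around the goal
        if actual >= goal:                    # good news is mostly dismissed...
            perceived = goal + (1 - optimism_discount) * (actual - goal)
        else:                                 # ...bad news is believed in full
            perceived = actual
        goal += erosion * (perceived - goal)  # the standard tracks perception
    return goal

print(round(drifting_goal(100.0, steps=100), 1))  # well below the original 100
```

Flipping the bias (dismissing bad news, believing the best results) turns the same structure into a drift toward high performance, which is the "way out" the excerpt suggests.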
THE TRAP: ESCALATION When the state of one stock is determined by trying to surpass the state of another stock—and vice versa—then there is a reinforcing feedback loop carrying the system into an arms race, a wealth race, a smear campaign, escalating loudness, escalating violence. The escalation is exponential and can lead to extremes surprisingly quickly. If nothing is done, the spiral will be stopped by someone’s collapse—because exponential growth cannot go on forever. THE WAY OUT The best way out of this trap is to avoid getting in it. If caught in an escalating system, one can refuse to compete (unilaterally disarm), thereby interrupting the reinforcing loop. Or one can negotiate a new system with balancing loops to control the escalation.
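The escalation loop is two balancing loops chained into one reinforcing spiral, sketched here with an invented margin: each side resets its stock to top the other's by 10%, so both grow exponentially round after round.

```python
# Escalation: each side's goal is to exceed the other's current state.
def escalate(a, b, margin, rounds):
    for _ in range(rounds):
        a = b * (1 + margin)  # A tries to surpass B
        b = a * (1 + margin)  # B answers in kind
    return a, b

a, b = escalate(100.0, 100.0, margin=0.1, rounds=10)
print(a, b)  # both sides have grown more than sixfold in only ten rounds
```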
This system trap is found whenever the winners of a competition receive, as part of the reward, the means to compete even more effectively in the future. That’s a reinforcing feedback loop, which rapidly divides a system into winners who go on winning, and losers who go on losing.
THE TRAP: SUCCESS TO THE SUCCESSFUL If the winners of a competition are systematically rewarded with the means to win again, a reinforcing feedback loop is created by which, if it is allowed to proceed uninhibited, the winners eventually take all, while the losers are eliminated. THE WAY OUT Diversification, which allows those who are losing the competition to get out of that game and start another one; strict limitation on the fraction of the pie any one winner may win (antitrust laws); policies that level the playing field, removing some of the advantage of the strongest players or increasing the advantage of the weakest; policies that devise rewards for success that do not bias the next round of competition.
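An illustrative sketch of the success-to-the-successful loop (parameters invented): a small initial advantage in, say, market share compounds because winning each round strengthens the winner's position in the next, while an evenly matched start stays even.

```python
# Winner-take-all: advantage feeds on itself round after round.
def winner_take_all(share, gain, rounds):
    for _ in range(rounds):
        share += gain * (share - 0.5)      # the leader's edge grows each round
        share = min(max(share, 0.0), 1.0)  # shares stay within [0, 1]
    return share

print(winner_take_all(0.52, gain=0.5, rounds=12))  # → 1.0: a 52% start takes all
print(winner_take_all(0.50, gain=0.5, rounds=12))  # → 0.5: a dead heat stays even
```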
Why does anyone enter the trap? First, the intervenor may not foresee that the initial urge to help out a bit can start a chain of events that leads to ever-increasing dependency, which ultimately will strain the capacity of the intervenor. The American health-care system is experiencing the strains of that sequence of events. Second, the individual or community that is being helped may not think through the long-term loss of control and the increased vulnerability that go along with the opportunity to shift a burden to an able and powerful intervenor. If the intervention is a drug, you become addicted. The more you are sucked into an addictive action, the more you are sucked into it again. One definition of addiction used in Alcoholics Anonymous is repeating the same stupid behavior over and over and over, and somehow expecting different results.
The secret is to begin not with a heroic takeover, but with a series of questions. Why are the natural correction mechanisms failing? How can obstacles to their success be removed? How can mechanisms for their success be made more effective?
If you are the intervenor, work in such a way as to restore or enhance the system’s own ability to solve its problems, then remove yourself. If you are the one with an unsupportable dependency, build your system’s own capabilities back up before removing the intervention. Do it right away. The longer you wait, the harder the withdrawal process will be.
THE TRAP: SHIFTING THE BURDEN TO THE INTERVENOR Shifting the burden, dependence, and addiction arise when a solution to a systemic problem reduces (or disguises) the symptoms, but does nothing to solve the underlying problem. Whether it is a substance that dulls one’s perception or a policy that hides the underlying trouble, the drug of choice interferes with the actions that could solve the real problem. If the intervention designed to correct the problem causes the self-maintaining capacity of the original system to atrophy or erode, then a destructive reinforcing feedback loop is set in motion. The system deteriorates; more and more of the solution is then required. The system will become more and more dependent on the intervention and less and less able to maintain its own desired state. THE WAY OUT Again, the best way out of this trap is to avoid getting in. Beware of symptom-relieving or signal-denying policies or practices that don’t really address the problem. Take the focus off short-term relief and put it on long-term restructuring.
THE TRAP: RULE BEATING
Rules to govern a system can lead to rule beating—perverse behavior that gives the appearance of obeying the rules or achieving the goals, but that actually distorts the system.
THE WAY OUT
Design, or redesign, rules to release creativity not in the direction of beating the rules, but in the direction of achieving the purpose of the rules.
THE TRAP: SEEKING THE WRONG GOAL
System behavior is particularly sensitive to the goals of feedback loops. If the goals—the indicators of satisfaction of the rules—are defined inaccurately or incompletely, the system may obediently work to produce a result that is not really intended or wanted.
THE WAY OUT
Specify indicators and goals that reflect the real welfare of the system. Be especially careful not to confuse effort with result or you will end up with a system that is producing effort, not result.
If the system is chronically stagnant, parameter changes rarely kick-start it. If it’s wildly variable, they usually don’t stabilize it. If it’s growing out of control, they don’t slow it down.
The strength of a balancing loop—its ability to keep its appointed stock at or near its goal—depends on the combination of all its parameters and links—the accuracy and rapidity of monitoring, the quickness and power of response, the directness and size of corrective flows.
Examples of strengthening balancing feedback controls to improve a system’s self-correcting abilities include:
- preventive medicine, exercise, and good nutrition to bolster the body’s ability to fight disease
- integrated pest management to encourage natural predators of crop pests
- the Freedom of Information Act to reduce government secrecy
- monitoring systems to report on environmental damage
- protection for whistleblowers
- impact fees, pollution taxes, and performance bonds to recapture the externalized public costs of private benefits
A balancing feedback loop is self-correcting; a reinforcing feedback loop is self-reinforcing. The more it works, the more it gains power to work some more, driving system behavior in one direction.
Reinforcing feedback loops are sources of growth, explosion, erosion, and collapse in systems. A system with an unchecked reinforcing loop ultimately will destroy itself. That’s why there are so few of them. Usually a balancing loop will kick in sooner or later.
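The two loop types can be illustrated with a minimal simulation. The function names and parameters here are my own, purely illustrative choices, not anything from the book:

```python
# Minimal sketch of the two basic loop types (illustrative, not from
# the book): a balancing loop closes the gap to a goal; a reinforcing
# loop compounds the stock itself.

def balancing(stock, goal, rate, steps):
    """Each step closes a fraction of the gap between stock and goal."""
    for _ in range(steps):
        stock += rate * (goal - stock)
    return stock

def reinforcing(stock, rate, steps):
    """Each step adds a fraction of the stock itself: compound growth."""
    for _ in range(steps):
        stock += rate * stock
    return stock

print(round(balancing(0.0, 100.0, 0.25, 30), 2))  # settles near the goal of 100
print(round(reinforcing(1.0, 0.25, 30), 2))       # grows without limit
```

Run longer, the balancing loop only gets closer to its goal, while the reinforcing loop grows exponentially until some balancing loop (a resource limit, a regulator, a collapse) finally checks it.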
Missing information flows are one of the most common causes of system malfunction. Adding or restoring information can be a powerful intervention, usually much easier and cheaper than rebuilding physical infrastructure.
There is a systematic tendency on the part of human beings to avoid accountability for their own decisions. That’s why there are so many missing feedback loops—and why this kind of leverage point is so often popular with the masses, unpopular with the powers that be, and effective, if you can get the powers that be to permit it to happen (or go around them and make it happen anyway).
If you want to understand the deepest malfunctions of systems, pay attention to the rules and to who has power over them.
The most stunning thing living systems and some social systems can do is to change themselves utterly by creating whole new structures and behaviors.
Self-organization means changing any aspect of a system lower on this list: adding completely new physical structures, such as brains or wings or computers; adding new balancing or reinforcing loops; or adding new rules. The ability to self-organize is the strongest form of system resilience. A system that can evolve can survive almost any change, by changing itself. The human immune system has the power to develop new responses to some kinds of insults it has never before encountered. The human brain can take in new information and pop out completely new thoughts.
Any system, biological, economic, or social, that gets so encrusted that it cannot self-evolve, a system that systematically scorns experimentation and wipes out the raw material of innovation, is doomed over the long term on this highly variable planet.
The intervention point here is obvious, but unpopular. Encouraging variability and experimentation and diversity means “losing control.” Let a thousand flowers bloom and anything could happen! Who wants that? Let’s play it safe and push this lever in the wrong direction by wiping out biological, cultural, social, and market diversity!
Even people within systems don’t often recognize what whole-system goal they are serving. “To make profits,” most corporations would say, but that’s just a rule, a necessary condition to stay in the game. What is the point of the game? To grow, to increase market share, to bring the world (customers, suppliers, regulators) more and more under the control of the corporation, so that its operations become ever more shielded from uncertainty.
- Paradigms—The mind-set out of which the system—its goals, structure, rules, delays, parameters—arises
No one has ever said that better than Ralph Waldo Emerson: Every nation and every man instantly surround themselves with a material apparatus which exactly corresponds to … their state of thought. Observe how every truth and every error, each a thought of some man’s mind, clothes itself with societies, houses, cities, language, ceremonies, newspapers. Observe the ideas of the present day … see how timber, brick, lime, and stone have flown into convenient shape, obedient to the master idea reigning in the minds of many persons.… It follows, of course, that the least enlargement of ideas … would cause the most striking changes of external things.7
The ancient Egyptians built pyramids because they believed in an afterlife. We build skyscrapers because we believe that space in downtown cities is enormously valuable. Whether it was Copernicus and Kepler showing that the earth is not the center of the universe, or Einstein hypothesizing that matter and energy are interchangeable, or Adam Smith postulating that the selfish actions of individual players in markets wonderfully accumulate to the common good, people who have managed to intervene in systems at the level of paradigm have hit a leverage point that totally transforms systems. You could say paradigms are harder to change than anything else about a system, and therefore this item should be lowest on the list, not second-to-highest. But there’s nothing physical or expensive or even slow in the process of paradigm change. In a single individual it can happen in a millisecond. All it takes is a click in the mind, a falling of scales from the eyes, a new way of seeing. Whole societies are another matter—they resist challenges to their paradigms harder than they resist anything else.
You insert people with the new paradigm in places of public visibility and power. You don’t waste time with reactionaries; rather, you work with active change agents and with the…
There is yet one leverage point that is even higher than changing a paradigm. That is to keep oneself unattached in the arena of paradigms, to stay flexible, to realize that no paradigm is “true,” that every one, including the one that sweetly shapes your own worldview, is a tremendously limited understanding of an immense and amazing universe that is far beyond human comprehension. It is to “get” at a gut level the paradigm that there are paradigms, and to see that that itself is a paradigm, and to…
If no paradigm is right, you can choose whatever one will help to achieve your purpose. If you have no idea where to get a purpose, you can listen to the universe. It is in this space of mastery over paradigms that people throw off addictions, live in constant joy, bring down empires, get locked up or…
The higher the leverage point, the more the system will resist changing it—that’s why societies often rub out truly enlightened beings. Magical leverage points are not easily accessible, even if we know where they are and which direction to push on them. There are no cheap tickets to mastery. You have to work hard at it, whether that means rigorously analyzing a system or rigorously casting off your own paradigms and throwing yourself into the humility of not-knowing. In the end, it seems that mastery has…
The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite. Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is. —G. K. Chesterton,1 20th century writer
Social systems are the external manifestations of cultural thinking patterns and of profound human needs, emotions, strengths, and weaknesses. Changing them is not as simple as saying “now all change,” or of trusting that he who knows the good shall do the good.
Systems thinking makes clear even to the most committed technocrat that getting along in this world of complex systems requires more than technocracy.
We can’t control systems or figure them out. But we can dance with them!
Starting with the behavior of the system forces you to focus on facts, not theories. It keeps you from falling too quickly into your own beliefs or misconceptions, or those of others.
Starting with the behavior of the system directs one’s thoughts to dynamic, not static, analysis—not only to “What’s wrong?” but also to “How did we get there?” “What other behavior modes are possible?” “If we don’t change direction, where are we going to end up?” And looking to the strengths of the system, one can ask “What’s working well here?” Starting with the history of several variables plotted together begins to suggest not only what elements are in the system, but how they might be interconnected. And finally, starting with history discourages the common and distracting tendency we all have to define a problem not by the system’s actual behavior, but by the lack of our favorite solution. (The problem is, we need to find more oil. The problem is, we need to ban abortion. The problem is, we don’t have enough salesmen. The problem is, how can we attract more growth to this town?) Listen to any discussion, in your family or a committee meeting at work or among the pundits in the media, and watch people leap to solutions, usually solutions in “predict, control, or impose your will” mode, without having paid any attention to what the system is doing and why it’s doing it.
Remember, always, that everything you know, and everything everyone knows, is only a model. Get your model out there where it can be viewed. Invite others to challenge your assumptions and add their own. Instead of becoming a champion for one possible explanation or hypothesis or model, collect as many as possible. Consider all of them to be plausible until you find some evidence that causes you to rule one out. That way you will be emotionally able to see the evidence that rules out an assumption that may become entangled with your own identity.
Getting models out into the light of day, making them as rigorous as possible, testing them against the evidence, and being willing to scuttle them if they are no longer supported is nothing more than practicing the scientific method—something that is done too seldom even in science, and is done hardly at all in social science or management or government or everyday life.
You’ve seen how information holds systems together and how delayed, biased, scattered, or missing information can make feedback loops malfunction. Decision makers can’t respond to information they don’t have, can’t respond accurately to information that is inaccurate, and can’t respond in a timely way to information that is late. I would guess that most of what goes wrong in systems goes wrong because of biased, late, or missing information. If I could, I would add an eleventh commandment to the first ten: Thou shalt not distort, delay, or withhold information. You can drive a system crazy by muddying its information streams. You can make a system work better with surprising ease if you can give it more timely, more accurate, more complete information.
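The effect of late information on a feedback loop can be shown with a small assumed stock-and-flow model (my own sketch, not the book's): the same balancing loop that settles smoothly on timely information overshoots and oscillates, here with growing amplitude, when its readings arrive several steps late.

```python
# Illustrative sketch (assumed model, not from the book): a balancing
# loop steered by delayed information overshoots and oscillates, while
# the same loop with timely information settles smoothly on its goal.
from collections import deque

def settle(goal, delay, rate=0.5, steps=40):
    """Balancing loop whose decision maker sees the stock `delay` steps late."""
    stock = 0.0
    history = deque([0.0] * (delay + 1), maxlen=delay + 1)
    trace = []
    for _ in range(steps):
        perceived = history[0]              # oldest reading = delayed information
        stock += rate * (goal - perceived)  # correct toward the goal
        history.append(stock)
        trace.append(stock)
    return trace

timely = settle(100.0, delay=0)
late = settle(100.0, delay=3)
print(round(timely[-1], 1))  # close to the goal of 100
print(round(max(late), 1))   # well past the goal: overshoot
```

Nothing in the correction rule changed between the two runs; only the timeliness of the information did, which is the point of the passage above.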
The first step in respecting language is keeping it as concrete, meaningful, and truthful as possible—part of the job of keeping information streams clear. The second step is to enlarge language to make it consistent with our enlarged understanding of systems.
If something is ugly, say so. If it is tacky, inappropriate, out of proportion, unsustainable, morally degrading, ecologically impoverishing, or humanly demeaning, don’t let it pass. Don’t be stopped by the “if you can’t define it and measure it, I don’t have to pay attention to it” ploy. No one can define or measure justice, democracy, security, freedom, truth, or love. No one can define or measure any value. But if no one speaks up for them, if systems aren’t designed to produce them, if we don’t speak about them and point toward their presence or absence, they will cease to exist.
Remember that hierarchies exist to serve the bottom layers, not the top. Don’t maximize parts of systems or subsystems while ignoring the whole. Don’t, as Kenneth Boulding once said, go to great trouble to optimize something that never should be done at all. Aim to enhance total systems properties, such as growth, stability, diversity, resilience, and sustainability—whether they are easily measured or not.
The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment—or, as Buckminster Fuller put it, by trial and error, error, error. In a world of complex systems, it is not appropriate to charge forward with rigid, undeviating directives. “Stay the course” is only a good idea if you’re sure you’re on course. Pretending you’re in control even when you aren’t is a recipe not only for mistakes, but for not learning from mistakes. What’s appropriate when you’re learning is small steps, constant monitoring, and a willingness to change course as you find out more about where it’s leading.
Let’s face it, the universe is messy. It is nonlinear, turbulent, and dynamic. It spends its time in transient behavior on its way to somewhere else, not in mathematically neat equilibria. It self-organizes and evolves. It creates diversity and uniformity. That’s what makes the world interesting, that’s what makes it beautiful, and that’s what makes it work.
We can, and some of us do, celebrate and encourage self-organization, disorder, variety, and diversity. Some of us even make a moral code of doing so, as Aldo Leopold did with his land ethic: “A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.”
Systems thinking can only tell us to do that. It can’t do it. We’re back to the gap between understanding and implementation. Systems thinking by itself cannot bridge that gap, but it can lead us to the edge of what analysis can do and then point beyond—to what can and must be done by the human spirit.