[Note: This is a draft from the companion text I am working on for ACS101, a course I offer periodically. I will mark items in the series with the subtitle ACS101-X.]
Many organic patterns are generated by first applying a coarse-grained process that serves as an underlying structure within which finer-grained processes can unfold.
Before the details get filled in, we need some blurry image of the whole. Not a tabula rasa, not a blank canvas; a frame and context within which the flourishes and accents situate themselves.
With this in mind, I feel obligated to offer a big, blurry picture of this topic I’m calling “Applied Complexity”.
The central claim is this: you can’t get what you want by forcing it.
We have to completely unlearn the modern scientistic beliefs about how intentions, actions, and real-world outcomes are related. The nature and necessity of this shift is directly implied by what we already know in a rigorous, “scientific” manner about complexity and complex systems.
Science as an institution has long been captured by reductionistic assumptions, and part of the scientism of our time has been to let those (wholly insufficient) assumptions bleed into our thought and action generally, with the result that we are constantly trying things that are guaranteed not to work, stuffing reality into a model it will never fit. Nassim Nicholas Taleb invokes the Bed of Procrustes: stretch you or chop you until you fit the bed.
The shift we need is from build to grow. From command and control to nurture and select. From directly forcing the system, to indirectly enabling it.
Living complexity can exist only under the condition that no single agent is in a position to compel the details of the whole.
We must never attempt to be such an agent; we would be applying our own Procrustean bed, killing the whole and its future possibilities. Applying the lessons of complexity pushes us necessarily towards humility.
So we must learn to allow the system to self-generate states that we prefer, without “controlling” too much. When dealing with complex systems, any apparent success in achieving total control will certainly be a pyrrhic victory.
This is the blurry picture, for now. Let’s go and fill in some of the details.
Unlearning
Our primary task here is not so much learning as it is unlearning. A much more difficult task, mind you.
Canonical models from the field of complexity science give us a rigorous way of exploring and meditating on various aspects of complex causality and the assumptions that surround it. This yields useful anchor points to which real systems and conditions can be related, and it should not be confused with modeling that attempts to fully capture a real-world complex system.
Hence, as we go forward, we will use a range of conceptual, mathematical, and computational models to anchor key concepts. It is my experience that there is a sinking-in effect associated with repeated exposure to relatively simple models that nonetheless display complex behaviors; it eventually sinks into your gut. (Especially if you program your own simulations!) This is a way to begin unlearning. When you think you understand what is going on, yet still get surprised, you start to feel complexity.
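To give a taste of what I mean, here is a tiny sketch (my own illustrative choice, nothing the argument depends on): the logistic map, a single deterministic update rule that nonetheless defeats casual intuition. A few lines of Python are enough to watch it happen.

```python
# A minimal, illustrative sketch: the logistic map, x_{n+1} = r * x_n * (1 - x_n).
# One multiplication and one subtraction, repeated -- yet for r near 4 the
# trajectory never settles down.

def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map and return the sequence of states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

if __name__ == "__main__":
    # r = 2.8 settles to a fixed point; r = 3.9 wanders without ever repeating.
    for r in (2.8, 3.9):
        traj = logistic_trajectory(r, x0=0.2, steps=50)
        print(f"r = {r}: last five states -> {[round(x, 4) for x in traj[-5:]]}")
```

Run it, stare at the output, tweak r, and the sinking-in begins. Nothing is hidden in the rule; the surprise is entirely in the behavior.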
An exploration of formal modeling also yields several rigorous bounds with respect to our own epistemic abilities and degree of certainty. This is perhaps its most important contribution in answering the question of what to do: it can tell us what not to do. What to cease from trying over and over, always with disaster, always justifying the disaster with the varying details, and always missing the invariant: that the approach itself is the problem, that the assumptions simply don’t fit.
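One of those bounds can be felt directly with the same toy model (again, an illustrative sketch, not a claim about any particular real system): two trajectories that begin closer together than any instrument could ever distinguish end up in entirely different places within a few dozen steps. Long-horizon prediction of such a system is not a matter of trying harder.

```python
# Illustrative sketch of an epistemic bound: sensitive dependence on initial conditions.
# Two starting points differing by 1e-12 -- far below any realistic measurement error --
# diverge completely within roughly sixty iterations of the logistic map at r = 3.9.

def logistic_step(r, x):
    return r * x * (1.0 - x)

if __name__ == "__main__":
    r = 3.9
    a, b = 0.2, 0.2 + 1e-12
    for n in range(1, 61):
        a, b = logistic_step(r, a), logistic_step(r, b)
        if n % 15 == 0:
            print(f"step {n:2d}: |difference| = {abs(a - b):.6f}")
```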
So, we will study the models and see what they teach us, and one of the things that they teach us is that models will only take us so far. Then, we must consider strategies and tactics for how to proceed given the ineffectiveness of classical notions of control, the vast uncertainty we face, and the need to generate healthy complexity that we don’t and can’t fully understand.
The good news is that there exist non-reductionistic approaches to design and engineering, as well as to governance and policy. But those approaches will be shied away from until we are truly convinced of the futility of our urge to directly control.
You can’t get what you want by forcing it.
Hi Joe, you write that, "The good news is that there exist non-reductionistic approaches to design and engineering, as well as to governance and policy. But those approaches will be shied away from until we are truly convinced of the futility of our urge to directly control."
Do you have any information on resources that specifically talk about these approaches? I am coming from a business background (M.B.A.), and I find that all of the resources you find in that domain would fit what I think you would call "Direct Control."
As in, the best way to manage employees in this given scenario is an even more specific set of guidelines/rulebooks that they must follow (whether it's cleaning the shop up at night, or how a salesperson sends out his emails).
I am finding that in most cases, these rules do nothing. The people continue to act as they always have. In many ways it directly seems like the exact type of thing John Gall talks of in Systemantics, regarding the inertia of the system and how it exists to serve its own purposes and not ours.
Systemantics was a WONDERFUL book, one of the best reads I've ever had, and directly applicable to my life, and how foolish I have been in thinking that I could create complex systems from nothing (and explaining how/why those systems then crashed around me despite herculean effort to support them), but it was mostly a book about WHAT NOT TO DO with complex systems.
So I am hoping you have some suggestions about WHAT TO DO when dealing with complex systems? Again, from a practitioner's standpoint? (It does seem like Gall's advice, that ALL complex systems that actually DO work have grown gradually in size and complexity from a smaller, simpler system that worked from the very beginning, gives us a clue in this regard, but I haven't seen anyone try to take this to the next level and make concrete suggestions about how to go about creating these scenarios.)
Thanks!
While the outcome of this is unclear (and most likely we shouldn't be doing it), there is 100% certainty it will not come out as expected.
https://www.businesswire.com/news/home/20210913005498/en/Woolly-Mammoths-Will-Walk-the-Arctic-Tundra-Again-New-Biosciences-and-Genetics-Company-Colossal-Pioneers-Animal-De-Extinction-Technology-to-Help-Restore-Lost-Ecosystems-and-Help-Ensure-a-Habitable-Planet