A follow-up to the counting-problem essay — on why “more management science” is the wrong reflex once the variety has outrun the controller, what survives the scepticism, and why the answers inevitably involve AI in the human loop.
I am sceptical, and some would say pessimistic, about the ability of any pre-packaged subset of management science theory, whether it be focused on change management or the day-to-day, to offer the kind of minimal certainty over outcomes that most of us would recognise as acceptable. Let’s take common-or-garden tech projects. Many projects “get over the line” but all the stats say that most leave casualties. Some are closer to what Edward Yourdon called a ‘death march’, and I feel I have retreated from Moscow more than once. Sceptical, yes, but on the other hand, I feel as if I know what good can look like. Could I evidence that? Only with a (hopefully) humble brag. I have been privileged to experience (though leave no profound mark on) a number of organisations elite in different ways. One-to-ones in the 1980s with the history tutors at Lincoln College, Oxford, were challenging, especially for someone targeting a “Gentleman’s Fourth” some years after the grade had been abolished. The directing staff at the Royal Military Academy Sandhurst were outstanding, not least because they managed to get the winner of the “award for the officer who will find demobilisation least challenging” through the course (even if only by means of a fireman’s lift at times). The Grenadier Guards’ attention to detail was “gleaming, sir” in every way, even if their Paymaster was (I am afraid to say) at times a scruffy drunk. Dell Computer in the 1990s under Michael Dell was killing HP from a standing start, but for some reason I wasn’t mentioned in his book. JPMorgan’s “Arcordia” and “Jupiter” derivatives groups would have been dynamos of innovation at massive scale whether I ran their timesheets or not. People at all of the less case-study-worthy companies I have worked for since have sometimes said, “I am sorry our systems seem so ad hoc; you would have been used to better at…wherever.” Well, yes and no. Certain elements were indeed unique, and I would unapologetically classify those organisations as elite on my own terms whilst recognising them as “elite” from other perspectives too. Elite sometimes to the point of awe.
But on a bad day these organisations were just as dysfunctional, frustrating and unpredictable as anywhere else I have worked. And before you jump to the correlation…not just around my workspace (though I dread the longitudinal study). All organisations and all people, I am sure, strive to do better (though not necessarily at all times). They rely on heroic personal efforts, but also try to rely on management processes that promise to help. Sadly, compared to our lofty aspirations, most end up delivering a capability that is adequate but underwhelming at best, and chaotic at worst. Most “maturity assessments” start and end with a grading of zero: unplanned, uncontrolled, and a bit melancholy.
In my last post I explored some of the reasons for that, based on some now very old thinking by W. Ross Ashby: the Law of Requisite Variety. In dry terms the law states that in order to offer “control” a system must match the “variety” of its environment: the set of “states” that the environment can find itself in. It must have a “move”, a “play”, a response of some kind to the shape-shifting variety that is quotidian experience. I ventured that with AI, this challenge had overnight accelerated beyond enumeration and that it was a step change, not an incremental nudge. However, I hope I turned the corner by offering tools designed to get our heads round the new problem “situation” before moving on to set the problem statement.
Let us therefore continue this more optimistic trajectory towards the sunlit uplands of controlled success and audit sign-off.
I. The wrong reflex
The first instinct, on the way to those uplands, is depressingly often to demand a finer-grained framework. A bigger risk taxonomy. A more elaborate KPI tree. A glossier dashboard. The reflex is so widespread that it deserves a moment of attention, partly because it looks like the responsible response and partly because, on examination, it is the wrong one.
Henry Mintzberg’s Rise and Fall of Strategic Planning is, almost thirty years on, still the cleanest single statement of the case for the prosecution. Mintzberg’s three “fallacies of strategic planning” — that discontinuities can be predicted, that strategists can be detached from the operations of the business, and that strategy-making can itself be formalised — sit underneath most failures of management-science-as-applied-to-the-turbulent. The third is the one that bites here. The dream of formalisation is the formalist conceit: the assumption, threaded through operations research, total quality management, the strategic-planning industry, and most of what the consulting industry sells, that with a sharp enough framework one can render the problem tractable. The conceit has done extraordinary work where conditions favour it. Where conditions do not, it actively harms: Russell Ackoff, an operations-research pioneer who lived long enough to disown most of the field he had helped build, put it most pointedly — classical management science teaches you to do the wrong thing more efficiently. More rigour, applied to a regime to which rigour does not apply, makes the wrongness larger.
Knight Capital is the standing example. On 1 August 2012 the firm — at the time the largest equity market-maker in the United States — deployed a routine software change to its automated trading platform. A piece of dormant code from 2003, never removed, was reactivated by the deployment on one of eight servers. The system began executing buy-high-sell-low trades, at thousands of orders per second, against itself. Forty-five minutes later, Knight had lost $440 million — roughly three times the firm’s annual earnings — was effectively bankrupt, and within a few months had been absorbed by a competitor. The post-mortems found, predictably, that internal controls existed, deployment procedures existed, change management existed, audit trails existed. The firm was, on paper, mature. None of the controls covered the failure mode that arose, because the failure mode lived in the seam between two pieces of code that nobody had thought of together. The risk register did not enumerate it. The framework was the wrong shape for the system.
The cleanest single result in the contingency-theory tradition — which goes back to Joan Woodward’s studies of British manufacturing in the late 1950s — is that the right structure depends on the environment. Mechanistic organisations work in stable environments; organic organisations work in turbulent ones. Apply the wrong one and the formalisation that was helpful in the first case becomes a liability in the second. The corollary, which contingency theorists do not always emphasise but which falls out of the same argument, is that the same is true of management theories themselves. They are themselves contingent on the regime to which they are applied. There is no view from nowhere; only the view from a regime.
Before reaching for any specific solution, a board therefore has to ask which regime its AI sits in. The honest answer, for most firms most of the time, is the turbulent one. That sets the criterion for what counts as a serviceable response. Anything that depends on the environment being mostly stable, the goals being mostly agreed, and the causal structure being mostly known will, by construction, fail the test.
II. Ashby, fairly stated
It is worth stating the Ashby case more carefully than the slogan-version allows, because the slogan-version invites the equally lazy counter-slogan that follows it.
Ashby’s law, as it is usually quoted, says that a controller must have at least as much variety as the system it controls. As a formal statement it is closer to an accounting identity than an empirical generalisation: it follows from the definitions. If the system being controlled has more variety than the controller, the controller will fail in some of the states the controlled system can reach. That is true by construction. The empirical record is thinner than the formal record. The most-cited attempt to test the law in an organisational setting is J. D. R. de Raadt’s 1987 study, conducted in a single insurance company with around a hundred observations and self-report measures. Suggestive, not decisive; mostly superseded since by work in the viable system model tradition and in resilience engineering. Ashby has matured into a boundary principle: a constraint on what any control system can hope to do. That is still useful. It is not the whole answer.
Several honest objections to a strong reading of the law are worth taking seriously, not because they undo it but because each carries an implication for what to do about it.
The most useful comes from Stafford Beer, who pointed out that variety can be attenuated and that the controller’s variety can be amplified — that one need not match the environment’s variety so much as carve up the world into a controllable shape. Beer’s distinction between amplifiers and attenuators sits underneath most of the practical work that follows. A control system needs to represent only the parts of the world that matter for control, not the world; that is the engineering project Ashby implies, not a contradiction of it.
Beer’s move is reinforced by an observation Karl Weick and the high-reliability-organisation tradition both make in different vocabularies: organisations adapt; they do not only control. Variety mismatch is handled dynamically, through ongoing sensemaking and revision of the model, rather than by trying to match the environment statically. Mindful organising is variety handling under another name. And what counts as variety, in any case, is filtered before it reaches the controller. Most of the environment is noise; effective controllers attend to the subset that is signal for their purpose. That is the discipline of Herbert Simon’s bounded rationality and satisficing — not because we are too dim to optimise, but because optimising over the full variety would itself be a category error.
There is a much older argument in the same direction. Markets and institutions absorb complexity better than central controllers can. Hayek’s Use of Knowledge in Society (1945) is the canonical statement: prices, markets, and institutions handle far more variety than any central planner could, because they aggregate distributed knowledge into a small number of operational signals. The cost is a particular kind of opacity; the benefit is variety handling at scale. Elinor Ostrom’s work on polycentric governance, which won her the 2009 Nobel, is a related line: multiple semi-autonomous decision centres deal with environmental variety that no single authority can.
The objection that most often catches the practitioner — and surprises them — is that formalisation can itself increase variety-handling capacity. Codified rules, modular interfaces, abstractions, and standards do not so much match the environment’s variety as compress it into something the controller can act on without losing what matters. Toyota’s andon cord, of which more in a moment, does exactly this. So does aviation Crew Resource Management, of which also more. The point is to engineer the controller to do better than its variety would suggest, not to give up because its variety is finite.
What none of these objections gives you is a recipe. Ashby’s law tells you a mismatch will cause failure; it does not tell you which match to build. The law is more abstract than operational, and requisite variety is necessary rather than sufficient — a controller can have variety in principle and still fail in practice if its feedback is too slow, its model wrong, or its action repertoire mismatched. That is the engineering problem the rest of this essay is about.
The cumulative effect of these objections is not to undo Ashby but to bound the move he licenses. The full variety cannot be matched. It can be attenuated, decomposed, decentralised, modelled selectively, sensed at high frequency, and constrained. That is a programme. It is also, conveniently, the programme several practical traditions have already worked out.
III. Three regimes
A frame, before specific tools.
Treat management-science value as the difference between decision improvement (whatever the discipline gives you that you would not otherwise have had) and model-failure cost (what it costs you when the model is wrong in a way you did not detect). Where decision improvement is large and model-failure cost low, management science earns its keep. Where decision improvement is small or speculative and model-failure cost is high, the discipline operates as a tax.
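For the arithmetically inclined, here is a minimal sketch of that trade in Python, with every figure assumed purely for illustration: what decides whether the discipline earns its keep is the sign of the difference, not the sophistication of the method.

```python
# Illustrative only: all figures are assumptions, not estimates of anything real.
def net_value(decision_improvement: float,
              p_model_wrong: float,
              cost_if_wrong_undetected: float) -> float:
    """Management-science value = decision improvement - expected model-failure cost."""
    return decision_improvement - p_model_wrong * cost_if_wrong_undetected

# Optimisation regime: the model is mostly right and failure is cheap and visible.
print(net_value(2.0, 0.05, 1.0))    # +1.95: the discipline earns its keep

# Resilience regime: improvement is speculative and silent failure is ruinous.
print(net_value(0.5, 0.30, 50.0))   # -14.5: the same discipline operates as a tax
```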
Three regimes follow, and they are not novel in their parts but they are useful as a single cut. In the optimisation regime, the environment is stable, objectives are agreed, causal structure is known, variety is bounded. Classical management science is at home: linear programming, queueing theory, six-sigma, lean, well-scoped PRINCE2. Model-failure cost is low because the model is mostly right. In the robustness regime, the environment is uncertain but bounded, objectives are mostly stable, causal structure is partially known. Optimal solutions over-fit and shatter under perturbation; the realistic goal is regret reduction and adaptation. This is the home of robust decision-making and the better forms of scenario planning. In the resilience regime, variety is unbounded, objectives are contested, causal structure is opaque or non-stationary. Optimisation is impossible and even robustness is fragile; the realistic goal is to keep the system viable, learning, and out of ruin.
Most of the trouble in AI governance comes from people treating a regime-three problem with regime-one tools — six-sigma rigour applied to a moving, contested, partly-opaque object — and producing artefacts whose precision is the wrong shape for the problem. The board’s first question is therefore not which framework to adopt, but which regime the firm is in. The cybernetic-contingency answer to the formalist conceit, in a sentence, is that the choice of method is itself a judgement about the world, and the cost of getting that judgement wrong is the silent overhead carried by every framework adopted in the wrong regime.
IV. Five things that survive Ashby
What practical tools, then, survive contact with the variety problem? My short list is five. None dissolves the variety problem. Each gives the controller more leverage than it otherwise had.
Variety attenuation, properly understood. This is Beer’s first instrument and the one most often taken too lightly. The aim is to reduce the variety the controller has to handle by absorbing it, deflecting it, or refusing to process it. The canonical example is Toyota’s andon cord — the rope or button that any line worker can pull when something looks wrong, which alerts a team leader and, if the problem is not resolved within a defined cycle, stops the line. Naive readings treat the andon cord as worker empowerment, which it is, but the deeper move is cybernetic. The plant cannot enumerate every defect that could appear; instead it pushes detection out to the workers nearest the variety, and pre-commits to stopping when an unenumerated condition is detected, rather than trying to keep running through it. The variety that matters is filtered to a binary signal — pull or do not pull — that the supervisory layer above is built to handle. The system absorbs unbounded variety by refusing to absorb it.
The AI analogue is the formal-harness pattern from the previous essay: the harness makes the model’s effective output set smaller than the model’s full output set; the model retains its intrinsic variety, but the system’s controllable variety is what was reduced. Allowlists, scope reductions, pre-commitments not to deploy AI in the highest-stakes decisions — these are all variety attenuation. The Knight Capital failure was, at root, an attenuation failure: there was no andon cord, no human in the loop with both the authority and the latency to stop an automated process whose behaviour had moved outside the envelope it was designed to operate in.
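For those who like the mechanism spelled out, here is a minimal sketch of the attenuation pattern in Python. The allowlist, action names and payloads are all assumptions made up for illustration; the point is the shape of the move, not a reference implementation of the harness.

```python
# Sketch of variety attenuation: the model may propose anything; the system may only
# act on an enumerated subset, and everything else pulls the (metaphorical) andon cord.
ALLOWED_ACTIONS = {"summarise_document", "draft_reply", "extract_assumptions"}  # assumed scope

def harness(proposed_action: str, payload: dict) -> dict:
    """Reduce the system's controllable variety to the allowlist; escalate the rest."""
    if proposed_action in ALLOWED_ACTIONS:
        return {"status": "execute", "action": proposed_action, "payload": payload}
    # Unenumerated condition detected: stop the line rather than run through it.
    return {"status": "escalate_to_human", "reason": f"out-of-scope action: {proposed_action}"}

print(harness("draft_reply", {"to": "client"}))          # executes
print(harness("transfer_funds", {"amount": 1_000_000}))  # escalates
```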
Adaptive capacity, in the resilience-engineering sense. The discipline most associated with Erik Hollnagel and David Woods names four capacities a resilient system needs: the ability to anticipate (see what is coming), to monitor (see what is happening), to respond (act on it), and to learn (carry the experience forward). Adaptive capacity is what lets a system handle disturbances it did not know about in advance. The most-studied case is aviation Crew Resource Management, which arose out of the 1977 Tenerife runway collision, the deadliest accident in commercial aviation history. The investigation found — as had several earlier crashes — that the technical failures were minor; the catastrophic ones were communication, hierarchy, and a refusal to challenge the captain when he was wrong. NASA convened a workshop in 1979; United Airlines was the first carrier to train CRM in 1981; by the 1990s it was a global standard. The aviation accident rate fell, and has continued to fall, in significant part because the industry built adaptive capacity into the cockpit by design. None of the four capacities was assumed; each was trained, drilled, and reviewed.
A board can ask for evidence of all four. Anticipation: what is the firm’s horizon-scanning function and when did it last surface something the executive had not seen? Monitoring: what continuous signals does the firm actually watch for AI-related drift, and who looks at them? Response: what is the playbook when the signal trips, and has the playbook been rehearsed in the last twelve months? Learning: what near-misses have been logged this year, what was changed as a result, and is anyone above the line of fire reading the log? None of this tends to show up in the standard AI risk register. It should.
Polycentric governance. Ostrom’s lesson, applied. AI policy in a firm rarely fits in one committee, one function, or one accountable executive — risk, security, legal, ethics, business units, technology, HR, finance, procurement, internal audit, and external assurance each see a piece. The temptation is to centralise; the cybernetic answer is the opposite. Multiple semi-autonomous decision centres, with clear boundaries, overlapping jurisdictions, and rules of mutual adjustment, deal with variety that no single authority can. Ostrom’s eight design principles for managing common-pool resources — clear boundaries, congruent rules, collective choice, monitoring, graduated sanctions, conflict resolution, recognition of rights to organise, nested enterprises — were not designed for AI but read remarkably well as a maturity checklist. The board’s question is not whether any one of those centres has produced the right slide. It is whether the centres exist and talk to each other.
The institutions I described in the opening were, in their best moments, polycentric in this way. Sandhurst’s directing staff and the Grenadiers’ chain of command both operated inside and through hierarchy, but the variety was handled by giving people closer to the action both the authority and the cultural permission to act on it. The institutions’ worst moments came when those rights were withdrawn — when something was escalated that should have been resolved, or held that should have been escalated. The pathologies are familiar in any organisation that has tried to centralise its way out of complexity.
Robust decision-making under deep uncertainty. The technique most associated with RAND — and refined over twenty years across water-resource policy, climate adaptation, infrastructure planning, and defence — replaces “predict then act” with “test many candidate plans against many futures and pick the one least regrettable across the largest plausible set”. The underlying machinery is computational; the philosophy is older than the computers. Robustness over optimality. Adaptivity as a property of the strategy, not just of the operator. RDM does not promise to identify the right answer. It promises to identify which answers are wrong only in narrow circumstances and which are wrong widely — and to make that information explicit before the strategy is committed. For AI strategy, where the relevant futures genuinely cannot be enumerated, that discipline is exactly the one that closes off the worst kinds of regret.
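The regret calculation at the heart of RDM is simple enough to sketch. In practice the payoffs come from simulation models and the futures are generated rather than named; the hand-written table below is assumed purely for illustration.

```python
# Minimax-regret selection: prefer the plan whose worst-case regret across futures is smallest.
payoffs = {
    # plan: {future: payoff}  -- all figures assumed for illustration
    "all_in_on_ai":    {"benign": 10, "regulated": 2, "capability_stall": -8},
    "harnessed_pilot": {"benign": 6,  "regulated": 4, "capability_stall": 1},
    "wait_and_see":    {"benign": 1,  "regulated": 3, "capability_stall": 2},
}

futures = ["benign", "regulated", "capability_stall"]
best_in_future = {f: max(p[f] for p in payoffs.values()) for f in futures}

def max_regret(plan: str) -> float:
    """Worst shortfall of this plan against the best available plan, across all futures."""
    return max(best_in_future[f] - payoffs[plan][f] for f in futures)

for plan in payoffs:
    print(plan, max_regret(plan))        # which answers are wrong widely, which narrowly
print(min(payoffs, key=max_regret))      # the least-regrettable plan, not the optimal one
```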
Premortems. The simplest tool on this list and one of the most effective. Gary Klein’s premortem, which the Harvard Business Review published in 2007, asks the team to imagine their project has failed catastrophically a year out, and to write down — silently, individually, then aloud — why. The procedure exploits the prospective hindsight effect: a 1989 study found that imagining an outcome has happened increases the ability to identify reasons for it by about thirty per cent. Klein’s contribution was institutional. The premortem turns the room from collegial endorsement of a plan into a five-minute exercise in which everyone, including the people who would otherwise be reluctant to speak, has explicit permission to be the bearer of bad news. It costs almost nothing. It catches an absurd amount.
These five do not stack into a methodology. That, partly, is the point. A board working seriously with the variety problem picks a combination fitted to its regime, with the understanding that the combination itself will need revision as the regime shifts. It is also why the response to the counting problem cannot be a new framework. Any combination one picks will itself be a regime-conditional choice.
V. Inside-out and outside-in
A short stylistic point before the AI question.
Most governance, as practised, is inside-out. It starts with the firm’s plan, identifies the risks to that plan, and assigns controls. The risk register is its central artefact. It assumes the firm’s frame is roughly right; the work is to defend it. PRINCE2 is inside-out; the PMBOK is inside-out; most enterprise risk management as practised is inside-out.
Variety-aware governance is outside-in. It starts with the environment, asks what changes there could make the firm’s plan wrong, and lists the assumptions — explicit, dated, falsifiable — that the plan depends on. The assumptions register, not the risk register, is the central artefact. It assumes the firm’s frame is provisional; the work is to test it.
Inside-out asks: what might stop this plan succeeding? Outside-in asks: what change in the world could make this plan wrong?
Most AI-governance literature, including most board-level material, is inside-out. The variety problem is precisely the kind of thing inside-out governance does not see, because the variety lives mostly outside the frame the inside-out method takes for granted. A board can move toward outside-in by changing what arrives in the pack — listing the assumptions on which the AI strategy depends, in order of consequence; the evidence for each; the most recent date each was tested; the change in any since last quarter. That is a different artefact from a risk register. It produces a different conversation.
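If it helps to see the artefact as data rather than prose, here is a minimal sketch of what one entry might carry; the field names and the ninety-day staleness window are assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """One outside-in proposition the AI strategy depends on: explicit, dated, falsifiable."""
    statement: str           # the proposition, phrased so that it could be shown false
    consequence_rank: int    # 1 = most consequential if wrong
    evidence: str            # what the last test relied on
    last_tested: date        # when it was last checked against the world
    changed_since_last_quarter: bool

    def stale(self, today: date, max_age_days: int = 90) -> bool:
        """Flag assumptions nobody has tested within the (assumed) 90-day window."""
        return (today - self.last_tested).days > max_age_days

register = [
    Assumption("Vendor model behaviour is stable between versions", 1,
               "vendor release notes only", date(2025, 11, 3), True),
]
print([a.statement for a in register if a.stale(date(2026, 3, 1))])  # the untested ones
```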
This is not a rejection of inside-out — that would be its own conceit, and the wrong one. A serious board does both, and pays attention to the gap between them.
VI. AI as variety amplifier
If the controller’s variety is the binding constraint, an obvious response is to amplify it. AI in 2026 is the most powerful general-purpose variety amplifier yet built. It scans more inputs than a human can scan, generates more options than a team can brainstorm, and runs more counterfactuals than a workshop can support. None of this dissolves the variety problem. It does make the controller less mismatched than it otherwise was.
A short list of uses where the gain is real.
The cheapest is assumption extraction. Ask a model to read a strategy paper and list the assumptions on which it depends. The output is rough, sometimes wrong, and uniformly more thorough than the meeting that approved the paper. The exercise amplifies variety because it surfaces what the original authors did not know they had assumed.
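A sketch of the extraction step, kept deliberately model-agnostic: `call_model` stands in for whatever completion interface the firm actually uses, and both it and the prompt wording are assumptions. The human review of the output is the part that matters.

```python
from typing import Callable, List

def extract_assumptions(strategy_text: str, call_model: Callable[[str], str]) -> List[str]:
    """Ask a model to list the propositions the paper silently depends on.
    Output is rough and sometimes wrong; it is raw material for review, not a verdict."""
    prompt = (
        "List, one per line, the assumptions this strategy paper depends on "
        "but does not state explicitly:\n\n" + strategy_text
    )
    raw = call_model(prompt)
    # Keep non-empty lines; everything still gets read, challenged and dated by a human.
    return [line.strip("-* ").strip() for line in raw.splitlines() if line.strip()]

# Usage with a stub in place of a real API call:
fake_model = lambda prompt: "- Regulation will not tighten before 2027\n- Vendor pricing stays flat"
print(extract_assumptions("...strategy paper text...", fake_model))
```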
A higher-bandwidth use is environmental scanning and weak-signal detection. The OECD’s Building Capacity in Technology Horizon Scanning (April 2026) records that web scraping plus large-language-model analysis allows near real-time detection of weak signals at a scale that conventional foresight could not reach. The technology does not substitute for sensemaking; it removes the bottleneck on getting raw signal in.
Klein’s premortem becomes a different exercise once the model is doing the speculating. Generated against ten variants of a strategy by an instance with no personal stake in the strategy passing, the premortem surfaces failure modes the room would not raise. The cost of generating the raw output is small. The institutional discipline of curating it remains the human’s job. A close cousin is scenario expansion: RDM-style stress-testing has historically been computationally expensive, and model-assisted generation of plausible-but-distinct futures, combined with conventional evaluation, brings the technique within reach of organisations that previously could not afford it.
Two more uses are worth naming because they reach parts of the variety problem that conventional methods do not. Dependency and regret mapping: read the firm’s contracts, vendor list, and technical architecture; produce a graph of dependencies; identify the single points of failure no one had labelled as such. Given a candidate decision, generate the futures in which it is most regretted, and the early indicators those futures are arriving. Requisite-variety stress-tests: generate adversarial inputs to a control structure and watch where the controller has no response. This is what red-teaming, the UK AI Security Institute’s evaluation programme, and Anthropic’s responsible-scaling work all are, in different vocabularies. The function is the same: probe whether the controller has the variety it needs, before the environment probes it for you.
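In its most stripped-down form, the stress-test amounts to asking which disturbances the control structure has any response to at all. The playbook and probe lists below are invented for illustration; a real exercise generates the probes adversarially and at scale.

```python
# A control structure reduced to its response repertoire: disturbance -> playbook entry.
playbook = {
    "prompt_injection":  "isolate session, rotate credentials, notify security",
    "data_exfiltration": "revoke tool access, preserve logs, escalate to CISO",
    "model_drift":       "freeze version, rerun evaluation suite",
}

# Disturbances the environment (or a red team) might throw at the system.
probes = ["prompt_injection", "sycophantic_agreement", "vendor_api_change",
          "model_drift", "reward_hacking"]

uncovered = [p for p in probes if p not in playbook]
coverage = 1 - len(uncovered) / len(probes)
print(f"controller responds to {coverage:.0%} of probed disturbances")
print("no response for:", uncovered)   # this list is the Ashby gap, made visible
```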
The pattern is consistent. AI amplifies the controller’s variety inside a process that human judgement still owns. That is a non-trivial gain. It is also where the recursion bites.
VII. Who governs the variety amplifier?
Importing AI into the controller imports the counting problem inside the controller. The variety amplifier is itself ungovernable by enumeration. Its inputs are unbounded; its outputs can be wrong in ways nobody anticipated; its failure modes — sycophancy, hallucination, prompt injection, tool misuse, goal drift, reward hacking — are documented in the 2026 LLM-security literature and they will not all be the failure modes a year hence.
The harness logic from the previous essay applies again. Use AI as a variety amplifier by all means. Specify what it is allowed to do. Constrain its scope. Verify the artefacts it produces against the documents it claims to draw on. Sample its outputs adversarially. Monitor for drift. Keep humans in the loop where the loop is doing real work, and remove them where their nominal presence is a liability. The slightly disorienting feature of using AI to handle Ashby is that the harness which makes the AI safe to deploy is itself an exercise in requisite variety: a control system has to be built around the AI whose variety at least matches that of the AI’s detectable failure modes.
This is not paradoxical. The recursion bottoms out in human judgement, made decidable by structure. The harness keeps the judgement human and decidable; it does not dissolve it. The board’s question becomes more concrete: what is the harness around the AI we use in our own governance, who specified it, and how do we know it holds?
A specific failure pattern is worth dwelling on. The most attractive applications of AI as a variety amplifier — premortems, counterfactual challenge, devil’s advocacy — are the ones where a sycophantic model does the most damage, because the failure mode of sycophancy is the same as the human failure mode the AI was meant to correct: agreement with the room. A model fine-tuned to be agreeable, deployed in a team that values being told it is doing well, doubles the failure mode rather than attenuating it. Institutional design has to push the other way, and push hard. That is a design problem, not a model problem; and like most design problems it gets harder once it is in production.
VIII. What the board pack actually looks like
The mechanical question is what arrives in front of directors and what they do with it.
The risk register stays. It does work that the rest of the apparatus does not. A second artefact joins it — an assumptions register, in the outside-in sense, listing the propositions on which the AI strategy depends, the most recent date each was tested, and the evidence on which the test relied. The format matters; the ownership matters more. The register has to be owned by someone whose career does not depend on the assumptions being right.
A harness specification sits alongside the AI deployments. It says, in language that does not require a doctorate to read, what each deployed AI is allowed to do, what data it can touch, what tools it can call, when it must defer, and what triggers escalation. It is signed off by management, reviewed by the board (or its delegated committee), and revisited on a cadence that matches the system’s tempo, not the board’s.
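One way to make such a specification reviewable is to express it as data rather than prose, so that it can be diffed, signed off, and checked against observed behaviour. Every field name and value in the sketch below is an assumption.

```python
# One deployed AI system, described declaratively so the spec can be reviewed and versioned.
harness_spec = {
    "system": "client-correspondence-assistant",        # assumed deployment
    "allowed_actions": ["draft_reply", "summarise_thread"],
    "data_scope": ["crm.contacts", "mail.inbox"],        # what it may touch
    "tools": ["calendar.read"],                          # what it may call
    "must_defer_when": ["legal_language_detected", "monetary_commitment_requested"],
    "escalation_triggers": ["confidence_below_threshold", "out_of_scope_action"],
    "review": {"owner": "COO", "board_committee": "risk", "cadence_days": 30},
}

def spec_violations(observed_action: str, data_touched: list) -> list:
    """Compare observed behaviour against the signed-off envelope."""
    issues = []
    if observed_action not in harness_spec["allowed_actions"]:
        issues.append(f"action outside spec: {observed_action}")
    issues += [d for d in data_touched if d not in harness_spec["data_scope"]]
    return issues

print(spec_violations("draft_reply", ["crm.contacts", "hr.salaries"]))  # flags hr.salaries
```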
A requisite-variety stress-test report — the cousin of penetration-testing for systems and red-team exercise for strategies — runs at a defined frequency, with results presented to the board independent of the management line that runs the AI. This is the single most under-specified element of current AI governance and the one most directly aligned with Marchand-style fiduciary expectations.
A resilience profile — Hollnagel’s four capacities, evaluated honestly — is reviewed annually and after any major incident. Anticipate, monitor, respond, learn. Each capacity has a status, with evidence. A board reading such a profile would, for the first time, have a defensible answer to “how do we know our AI risk function is fit for the system it governs?”
A scenario-and-regret memo — short, quarterly, three to five futures tested against the current strategy with the regret of the strategy in each — supplements the more familiar strategic-update piece.
The point of these artefacts is fit, not novelty. They produce different conversations because they describe a different shape of system. They will require board members capable of reading them, which is a board-composition question — and one of the few that is genuinely actionable.
IX. Better questions, second pass
The previous essay closed with a long list of questions. Most of them stand. A second pass, fitted to the variety frame, sharpens a smaller set.
Which regime is our AI in — optimisation, robustness, or resilience — and is the management-science apparatus we are running on it the right one for that regime? What is our variety-attenuation strategy: where have we explicitly chosen not to deploy AI, where are scopes constrained, where are pre-commitments in force, and where is the andon cord? What is our variety-amplification strategy: where is AI itself making the controller better, and what is the harness around the AI that does that work? What is in our assumptions register, and which assumptions changed status this quarter? Of Hollnagel’s four capacities — anticipate, monitor, respond, learn — which is our weakest, and what are we doing about it? Where in the firm is decision-making polycentric in a way that increases variety-handling, and where is it centralised in a way that throttles it? What is the regret profile of our AI strategy across plausible futures, and where is it most fragile?
And — the question implicit in all the others — when this management-science apparatus turns out to have been the wrong apparatus, how will we know, and what will it have cost us to find out?
These are better questions because they are aimed at the right regime. They are also, at last, the kind of questions a fiduciary can defensibly answer.
X. The limits of any of this — including this
A final concession any sceptical piece in this genre owes the reader. The frame proposed here — cybernetic-contingency, three regimes, tools fitted to the regime, AI used carefully to amplify the controller — is itself a framework. By its own argument, it is a regime-conditional tool. There will be problems for which it is the wrong frame. There will be moments in any given firm when it is the wrong frame. The same scepticism that makes it preferable to more rigour makes it provisional.
The strongest objection, to which the strongest answer this piece can give is partial, is that meta-frameworks for choosing frameworks may be the deepest version of the formalist conceit. Every level up gives the analyst more rope. Past a certain point, writing a better framework becomes the problem rather than the solution, and the right response is to admit, in plain language, that we do not know — and to behave accordingly. The traditions cited here have all, in different ways, made that admission. Bounded rationality is an admission. Robust decision-making under deep uncertainty is an admission. Resilience engineering is an admission, the most explicit. The framework around them is a scaffold; the scaffold is not the building.
What survives, after the scaffold, is a small number of habits. Notice which regime you are in. Budget the cost of getting that wrong. Attenuate where you can. Amplify where you must. Decentralise where it helps. Satisfice where it is cheap. Learn faster than the environment changes. Expect to be wrong. Design so that being wrong is not ruin.
These are old habits. The cybernetics tradition would call them viability. The resilience tradition would call them graceful extensibility. Plainer language will do: do not bet the firm on being right.
XI. A closing observation
The previous essay closed on the failure space being a list. This one closes on the matched observation: the controller is also a list, and that is the problem.
A board, an audit committee, a risk function, a model risk team — these are countable bodies, with countable agendas, working at countable cadence. The system they are meant to govern does not have those properties. The honest response is to stop pretending the controller is bigger than it is, and to design accordingly. That is what attenuation, amplification, decentralisation, monitoring, and an explicit theory of which is the binding constraint at any given moment together amount to. Less than mastery; rather more than resignation.
There is a particular dignity in admitting this. A board that does so is, if anything, more responsible — not less. The pretence of mastery, in a domain that does not yield to mastery, is the failure mode the law has not yet learned to name. This essay has tried to give it a name: it is the formalist conceit, and it lives most comfortably in the very rooms whose duty is to oversee what cannot quite be enumerated. Recognising it is the start of being responsible for it. Doing something about it, with the regime-fitted tools that already exist, is the work that follows.
The sunlit uplands turn out to be slightly less sunlit, and a great deal less optimised, than the brochure promised. They are still uplands. The walk to them is just the kind of walk that the genuine traditions of cybernetics, contingency theory, and resilience engineering have always said it would be: hilly, weather-beaten, requiring company, and worth the doing.
