We’re entering the era where AI doesn’t just answer questions — it selects actions.
Supply chain routing. Credit risk. Fraud detection. Treatment planning. Portfolio optimisation. The pitch is always the same: hand the decision to the model, and it will find the optimal action every time.
And in a narrow, mathematical sense, it can.
But here’s the catch: optimisation is a superpower and a liability.
Because if a system can optimise perfectly, it can also optimise perfectly for the wrong thing — quietly, consistently, at scale.
That’s why the most important design problem isn’t “make the AI smarter.” It’s “make the relationship between humans and AI adaptive, observable, and enforceable.”
Call that relationship a dynamic contract.
AI’s “perfection” is usually perfection against the objective it was given, not against the outcome you actually wanted.
A model can deliver the highest-return portfolio while ignoring concentration, fragility under stress, and ethically questionable exposure.
A model can produce the fastest medical plan while ignoring patient preferences, clinician judgement, and whether anyone can explain the recommendation.
AI can optimise the map while humans live on the territory.
The problem is not malice. It’s that objectives are incomplete, and the world changes faster than your policy doc.
Static rules are how we’ve governed software for decades:
They’re easy to explain, test, and audit — until they meet reality.
Market regimes shift. User behaviour shifts. Regulations shift. Data pipelines shift. Static rules drift from reality, and “optimal” actions start producing weird harm.
A fixed objective function (“maximise conversion”, “minimise cost”) slowly detaches from what you mean (“healthy growth”, “fair treatment”, “sustainable outcomes”).
When the system makes thousands of decisions per hour, small misalignments compound. Static constraints become a thin fence around a fast-moving machine.
A dynamic contract is not “no rules.” It’s rules with a control system:
Think: not a fence — a safety harness with sensors, alarms, and a manual brake.
A dynamic contract has four components: adaptive objectives, observability, override, and ownership. Miss one, and you’re back to vibes.
A dynamic contract assumes that your objectives are incomplete and that the environment will keep shifting underneath them.
So the system must support updating objectives, constraints, and thresholds while it is running.
This is not “moving goalposts.” It’s acknowledging that the goalposts move whether you admit it or not.
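A minimal sketch of what that can look like in code, assuming a hypothetical in-memory contract object (nothing here is a standard API): objective weights can change at runtime, but only with a version bump and a recorded reason.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ObjectiveContract:
    """Objective weights that can change at runtime, but never silently."""
    weights: dict[str, float]
    version: int = 1
    history: list[dict] = field(default_factory=list)

    def reweight(self, new_weights: dict[str, float], author: str, reason: str) -> None:
        # Record who moved the goalposts, when, from what, to what, and why.
        self.history.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "from": dict(self.weights),
            "to": dict(new_weights),
            "author": author,
            "reason": reason,
        })
        self.weights = dict(new_weights)
        self.version += 1


# Example: shift emphasis from cost to reliability during a disruption.
contract = ObjectiveContract(weights={"cost": 0.8, "reliability": 0.2})
contract.reweight({"cost": 0.5, "reliability": 0.5}, author="ops-lead", reason="carrier outage")
```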
If the system can’t show:
…then you don’t have governance. You have hope.
Observability means:
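In practice, the smallest useful unit is a decision record that carries the contract version that governed it, so you can answer what was decided, under which rules, and why. A hedged sketch; the field names are illustrative, not a standard schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    decision_id: str
    contract_version: int          # which rules were in force
    proposed_action: dict          # what the model wanted to do
    executed_action: dict          # what actually happened
    constraints_checked: list[str]
    human_override: bool
    reason: str
    timestamp: str = ""

    def to_log_line(self) -> str:
        # Append-only, machine-readable provenance for later audit.
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))


record = DecisionRecord(
    decision_id="dec-001",
    contract_version=7,
    proposed_action={"reroute": "port-A"},
    executed_action={"reroute": "port-B"},
    constraints_checked=["max_cost_increase", "reroute_approval_threshold"],
    human_override=True,
    reason="operator preferred the lower-risk route",
)
print(record.to_log_line())
```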
A contract without an override is a ceremony.
You need:
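a way to pause automated execution and force decisions through a human, without waiting for a deploy. A minimal sketch, assuming a hypothetical halt flag that is checked before anything executes:

```python
import threading


class ManualBrake:
    """A kill switch the on-call human can pull without a redeploy."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def pull(self, who: str, why: str) -> None:
        print(f"AUTOMATION HALTED by {who}: {why}")
        self._halted.set()

    def release(self) -> None:
        self._halted.clear()

    def guard(self, action_fn, *args, **kwargs):
        # Refuse to execute anything while the brake is engaged.
        if self._halted.is_set():
            raise RuntimeError("Automation halted; route this decision to a human.")
        return action_fn(*args, **kwargs)


brake = ManualBrake()
brake.guard(print, "executing approved action")    # runs normally
brake.pull(who="on-call", why="metrics look wrong")
# brake.guard(print, "blocked")                    # would now raise
```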
If AI makes decisions, who owns:
Dynamic contracts require a clear chain:
This is less “ethics theatre,” more on-call rotation for decision systems.
At a systems level, this is a closed loop: the model proposes, the policy layer enforces, telemetry feeds back, and the contract gets updated.
This loop is the difference between operating a decision system and hoping it behaves.
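Here is a toy version of that loop; the metric, thresholds, and weights are placeholders, and the point is the shape (observe, adjust the contract on the record, keep deciding under the current version):

```python
def propose_action(weights: dict) -> str:
    # Stand-in for the optimiser: chases cost unless reliability dominates.
    return "cheapest_route" if weights["cost"] >= weights["reliability"] else "reliable_route"


def run_loop(observed_failure_rates: list[float]) -> None:
    contract = {"version": 1, "weights": {"cost": 0.7, "reliability": 0.3}}
    for failure_rate in observed_failure_rates:
        # Monitor: compare the outcome we just observed against the contract's tolerance.
        if failure_rate > 0.10:
            # Adjust: reweight toward reliability and bump the version, on the record.
            contract["weights"] = {"cost": 0.3, "reliability": 0.7}
            contract["version"] += 1
        # Decide: the optimiser keeps proposing, but under the current contract.
        action = propose_action(contract["weights"])
        print(f"contract v{contract['version']}  failure_rate={failure_rate:.2f}  -> {action}")


run_loop([0.02, 0.04, 0.14, 0.05])
```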
A routing model might optimise purely for cost. But real operations have constraints that appear mid-flight:
Dynamic contract move: temporarily reweight objectives toward reliability, tighten risk limits, trigger manual approval for reroutes above a threshold.
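As a sketch, with made-up thresholds and field names, that move can be as small as a conditional contract update plus an approval check:

```python
def adjust_routing_contract(contract: dict, disruption_level: float) -> dict:
    """Tighten the contract during a disruption; leave it alone otherwise."""
    updated = dict(contract)
    if disruption_level > 0.5:                       # hypothetical severity score
        updated["weights"] = {"cost": 0.4, "reliability": 0.6}
        updated["max_reroute_cost_delta"] = 0.05     # tighter risk limit: 5% cost increase
        updated["manual_approval_above"] = 10_000    # reroutes above this go to a human
    return updated


def needs_human_approval(contract: dict, reroute_cost: float) -> bool:
    return reroute_cost > contract.get("manual_approval_above", float("inf"))


contract = {"weights": {"cost": 0.8, "reliability": 0.2}}
contract = adjust_routing_contract(contract, disruption_level=0.7)
print(needs_human_approval(contract, reroute_cost=25_000))   # True: escalate, don't execute
```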
A portfolio optimiser can deliver higher returns by exploiting correlations that become fragile under stress — or by concentrating in ethically questionable exposure.
Dynamic contract move: enforce shifting exposure caps, add human approval gates when volatility spikes, record decision provenance for audit.
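A hedged sketch of the exposure-cap and volatility-gate part; the caps, asset names, and volatility threshold are placeholders, not investment guidance:

```python
def check_portfolio_proposal(
    proposal: dict[str, float],
    caps: dict[str, float],
    realised_vol: float,
    vol_gate: float = 0.25,
) -> tuple[str, list[str]]:
    """Return 'execute', 'escalate' or 'reject' for a proposed allocation, plus the reasons."""
    reasons = []
    for asset, weight in proposal.items():
        cap = caps.get(asset, 0.0)                   # unlisted assets default to a zero cap
        if weight > cap:
            reasons.append(f"{asset}: weight {weight:.0%} exceeds cap {cap:.0%}")
    if reasons:
        return "reject", reasons                     # hard breach: never executed automatically
    if realised_vol > vol_gate:
        return "escalate", [f"volatility {realised_vol:.0%} above gate {vol_gate:.0%}, human approval required"]
    return "execute", []


caps = {"EQUITY_A": 0.20, "EQUITY_B": 0.20, "BOND_X": 0.50}
status, why = check_portfolio_proposal({"EQUITY_A": 0.35, "BOND_X": 0.40}, caps, realised_vol=0.18)
print(status, why)   # reject, with the breached cap named for the audit trail
```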
AI can recommend the most statistically effective treatment, but “best” depends on what the patient values and on what the treating clinician can see that the model cannot.
Dynamic contract move: require preference capture, enforce explainability, and make clinician override first-class, not an afterthought.
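One way to make the override first-class, sketched with hypothetical names: the recommendation simply cannot be finalised until preferences are captured and a clinician has explicitly accepted or overridden it.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TreatmentRecommendation:
    model_choice: str
    rationale: str                          # explanation shown to the clinician
    patient_preferences: Optional[dict] = None
    clinician_decision: Optional[str] = None

    def finalise(self) -> str:
        # Unusable until preferences are captured and a clinician has decided.
        if not self.patient_preferences:
            raise ValueError("Capture patient preferences before finalising.")
        if self.clinician_decision is None:
            raise ValueError("A clinician must accept or override the recommendation.")
        return self.clinician_decision


rec = TreatmentRecommendation(model_choice="protocol_A", rationale="highest modelled response rate")
rec.patient_preferences = {"avoid_hospital_stays": True}
rec.clinician_decision = "protocol_B"       # override is a normal path, not an exception
print(rec.finalise())
```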
Here’s the pragmatic blueprint.
Define the contract in machine-readable form (YAML/JSON), e.g.:
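One illustrative shape (the fields below are a sketch, not a standard schema):

```yaml
# decision_contract.yaml: versioned, reviewed, and deployed like code
version: 7
owner: ops-decisions-team           # who gets paged
objectives:
  cost: 0.5
  reliability: 0.5
constraints:
  max_reroute_cost_delta: 0.05      # hard limit the optimiser cannot cross
  manual_approval_above: 10000      # actions above this value go to a human
overrides:
  kill_switch: false                # flip to true to halt automated execution
observability:
  log_decisions: true
  alert_on: [constraint_breach, contract_change]
```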
Treat it like code: version it, review changes to it, test it, and roll it out deliberately.
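That means a bad contract change can fail CI just like a bad code change. A minimal pytest-style sketch against the illustrative schema above, assuming the contract lives in decision_contract.yaml and PyYAML is available:

```python
import yaml   # requires PyYAML


def load_contract(path: str = "decision_contract.yaml") -> dict:
    with open(path) as f:
        return yaml.safe_load(f)


def test_objective_weights_sum_to_one():
    contract = load_contract()
    assert abs(sum(contract["objectives"].values()) - 1.0) < 1e-9


def test_manual_approval_threshold_present():
    # A contract with no human gate should never reach production.
    contract = load_contract()
    assert contract["constraints"]["manual_approval_above"] > 0
```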
Your model shouldn’t directly execute actions. It should propose actions that pass through a policy layer.
Policy layer responsibilities: check each proposal against the current constraints, route high-impact actions to a human for approval, and record what was decided under which contract version.
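A minimal sketch of such a layer, reusing the illustrative contract fields from the YAML above (none of this is a standard API):

```python
from dataclasses import dataclass, field
from typing import Literal

Verdict = Literal["execute", "escalate", "reject"]


@dataclass
class PolicyLayer:
    contract: dict
    audit_log: list = field(default_factory=list)

    def evaluate(self, proposal: dict) -> Verdict:
        """The model proposes; this layer decides what is allowed to happen."""
        constraints = self.contract["constraints"]
        verdict: Verdict = "execute"
        if proposal["cost_delta"] > constraints["max_reroute_cost_delta"]:
            verdict = "reject"        # hard constraint: never executed automatically
        elif proposal["value"] > constraints["manual_approval_above"]:
            verdict = "escalate"      # above the threshold: a human decides
        # Every proposal is logged with the contract version that judged it.
        self.audit_log.append({
            "contract_version": self.contract["version"],
            "proposal": proposal,
            "verdict": verdict,
        })
        return verdict


contract = {"version": 7,
            "constraints": {"max_reroute_cost_delta": 0.05, "manual_approval_above": 10_000}}
layer = PolicyLayer(contract)
print(layer.evaluate({"action": "reroute", "cost_delta": 0.02, "value": 25_000}))   # escalate
```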
Dashboards are passive. You need alerts linked to contract changes:
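For example, an alert that carries the contract version and fires both when a constraint is breached and when the contract itself changes, so the rules and the behaviour get reviewed together. A toy sketch:

```python
def raise_alert(event: str, contract_version: int, detail: str) -> dict:
    """An alert tied to the contract, not just to a metric."""
    alert = {
        "event": event,
        "contract_version": contract_version,
        "detail": detail,
        "requires": "acknowledgement by the contract owner",
    }
    print(f"[ALERT] {event} (contract v{contract_version}): {detail}")
    return alert


# Fired on breaches and on contract changes, so reviewers see both moves together.
raise_alert("constraint_breach", 7, "reroute cost delta 0.08 exceeded limit 0.05")
raise_alert("contract_change", 8, "objective weights shifted toward reliability")
```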
At minimum:
If you answer “no” to any of these, you’re still on static rules.
AI will keep getting better at optimisation. That’s not the scary part.
The scary part is that our objectives will remain incomplete, and our environments will keep changing.
So the only sane way forward is to treat AI decision-making as a governed system: adaptive objectives, real observability, working overrides, and clear ownership.
Because the future isn’t “AI makes decisions.” It’s “humans and AI co-manage a decision system — continuously.”
That’s how you get “perfect decisions” without perfect disasters.


