Why AI Still Needs a Human in the Loop
- jholdenross
- Jan 13
- 3 min read
Updated: Jan 21
Trust, Non-Determinism, and the Case for Expert-Led Contract Risk Insight
Artificial intelligence is now firmly embedded in business decision-making. From document review and data extraction to forecasting and pattern recognition, AI offers speed and scale that were unthinkable even a few years ago.
Yet despite its rapid adoption, a trust deficit remains, particularly where AI is used to inform decisions with financial, legal or reputational consequences.
For SMEs signing contracts that could materially affect their business, that trust gap is not abstract. It is practical, immediate, and justified.
The Problem Is Not Just “Black Box” AI
Much has been written about the so-called black box nature of AI systems — the difficulty of understanding how a model arrives at a particular output.
That concern is real, but it is only part of the picture. A less well understood but equally important issue is non-determinism.
Non-Determinism: The Hidden Source of Discomfort
Modern AI language models are probabilistic, not deterministic.
In plain English, that means the same question, asked in slightly different ways or even twice in succession, can produce different answers.
Those answers may all be plausible, well-reasoned and articulate — but they may differ in emphasis, structure, or conclusions.
This is not a flaw. It is how these models work. But it creates a problem when AI output is used to support decisions that require consistency, accountability and repeatability.
For example:
- Which version should a director rely on?
- Why did the risk ranking change?
- Which answer would stand up to scrutiny after the fact?
In low-stakes contexts, this variability is acceptable, even useful. In contract risk assessment, it is not.
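To see where this variability comes from, here is a minimal, self-contained Python sketch. It is a toy stand-in for a real language model, using an invented next-token distribution rather than any production system: sampling at a non-zero temperature means the same prompt can yield a different completion on each run.

```python
import random

# Toy next-token distribution a model might assign after a prompt such as
# "The indemnity clause is..." (illustrative numbers, not from any real model).
next_token_probs = {
    "broad": 0.40,
    "standard": 0.35,
    "onerous": 0.15,
    "negotiable": 0.10,
}

def sample_completion(probs, temperature=1.0):
    """Sample one token; any temperature > 0 makes the output probabilistic."""
    # Temperature rescales the distribution: <1 sharpens it, >1 flattens it.
    weights = {token: p ** (1.0 / temperature) for token, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return token
    return token  # guard against floating-point rounding at the boundary

# Ask the same "question" five times: the answer can change run to run.
for i in range(5):
    print(f"run {i + 1}: the clause looks {sample_completion(next_token_probs)}")
```

At (or near) zero temperature the sampler collapses to always picking the most likely token and becomes repeatable; deployed assistants typically sample at non-zero temperature, which is where the run-to-run variation described above comes from.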
Why This Matters Especially for SMEs
Large organisations can absorb uncertainty. SMEs usually cannot. A single contract dispute, liability event or termination can:
- disrupt cashflow,
- absorb management time,
- damage reputation,
- or threaten business continuity.
SME decision-makers therefore need:
- clarity, not variation;
- prioritisation, not volume;
- judgement, not just analysis.
They also need confidence that the advice they rely on is:
- consistent,
- explainable,
- and defensible if challenged later.
Purely automated AI output currently struggles to provide that reassurance.
The Emerging Consensus: Human-in-the-Loop
For this reason, regulators, policymakers and professional bodies increasingly emphasise the need for human-in-the-loop systems — particularly in high-impact decision domains.
The UK government’s AI principles, the EU’s AI Act, and guidance from bodies such as the OECD and the Alan Turing Institute all stress:
- proportionality,
- oversight,
- and human accountability where outcomes matter.
In other words, human review is not a weakness of AI-assisted systems. It is a governance requirement.
How Keysteer Closes the Gap
Keysteer services are designed with this reality in mind.
AI is used to assist with:
- identifying relevant contractual risk areas and their impact,
- structuring analysis,
- and surfacing potential issues efficiently.
But AI is not left to make the final call. Every Keysteer Risk Report is:
- generated using a consistent underlying risk framework, and
- personally reviewed and sense-checked by an experienced practitioner before being issued.
This approach addresses non-determinism directly:
- variability is moderated,
- outputs are calibrated,
- and conclusions are anchored to professional judgement.
The result is a report that is:
- coherent,
- repeatable,
- explainable,
- and suitable for real-world decision-making.
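As a purely illustrative sketch (a generic human-in-the-loop gating pattern in Python, with invented names; it is not Keysteer's actual implementation), the key design point is that the review step is structural: the AI may only propose findings inside a fixed framework, and nothing is issued without a named reviewer's sign-off.

```python
from dataclasses import dataclass, field

# Hypothetical risk framework categories; illustrative only.
RISK_FRAMEWORK = ("liability", "termination", "payment", "IP", "data protection")

@dataclass
class Finding:
    category: str          # must come from the fixed framework
    summary: str           # AI-drafted description of the issue
    severity: str          # e.g. "high" / "medium" / "low"
    reviewed_by: str = ""  # empty until a human signs off

@dataclass
class RiskReport:
    findings: list = field(default_factory=list)
    signed_off: bool = False

def add_ai_finding(report, category, summary, severity):
    """The AI may only propose findings inside the agreed framework."""
    if category not in RISK_FRAMEWORK:
        raise ValueError(f"'{category}' is outside the risk framework")
    report.findings.append(Finding(category, summary, severity))

def human_sign_off(report, reviewer):
    """A named practitioner reviews every finding before issue."""
    for finding in report.findings:
        finding.reviewed_by = reviewer
    report.signed_off = True

def issue(report):
    """The report cannot leave the system without human review."""
    if not report.signed_off or any(not f.reviewed_by for f in report.findings):
        raise RuntimeError("report blocked: human review incomplete")
    return report

report = RiskReport()
add_ai_finding(report, "liability", "Uncapped indemnity in clause 9", "high")
human_sign_off(report, "A. Practitioner")  # hypothetical reviewer
print(issue(report).findings[0])
```

The gate in issue() is the point of the pattern: review is enforced by the structure of the workflow, not left to discretion.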
Technology Supporting Judgement — Not Replacing It
This approach does not reject AI. It uses AI deliberately and proportionately, capturing its benefits in saved time and cost.
The aim is not to automate decisions, but to support better decisions, especially for SMEs operating without in-house legal or risk teams.
By keeping a human in the loop, Keysteer combines:
- the efficiency of modern analytical tools, with
- the accountability and consistency that trust requires.
Conclusion
AI is an extraordinary tool, but it is not currently a substitute for judgement, particularly where contracts, liability and business survival are concerned.
The trust deficit in AI is not irrational. It reflects real structural features of how modern models work, including non-determinism and variability of output.
The solution is not to abandon AI, but to embed it within a disciplined, human-led framework.
That is the philosophy behind Keysteer, and why trust sits at the centre of how the service is designed and delivered.
Sources and further reading
The Act Texts | EU Artificial Intelligence Act (risk-based approach)
Articles on AI non-determinism and probabilistic language models (e.g. Nature; Artificial intelligence | MIT Technology Review)