In practice
Our work is intentionally lightweight, board-level and context-specific. It does not take the form of standard consulting programmes or predefined implementation tracks. It usually starts at the moment organisations realise that AI has begun to run alongside decision-making, and that there has never been a clear discussion of what can remain delegated within the organisation and where the board itself needs to stay involved.
When governance questions arise
AI moves forward, governance lags behind
AI is rarely introduced as an explicit board decision. It usually starts as a tool, an experiment or an efficiency gain. Over time, its role shifts: recommendations become standard practice, exceptions grow rare, and automation increases, without this ever being recognised as a separate decision moment.

At that point, boards often realise that, while processes and controls may exist, there has never been a conscious discussion about where board-level involvement remains necessary.
Rules are in place, but the judgement is missing
Many organisations have policies, procedures and registers governing the use of AI. In some cases, these are even formally assessed as sound. Yet one question often remains unanswered: who, at board level, decided that this setup was sufficient, and on what grounds?
Our work focuses on enabling that judgement at the right moment: not under pressure, but before external scrutiny, incidents or escalation force the issue.
Decisions cut across the organisation
AI rarely fits neatly within a single function or department. IT, Legal, Compliance, Data, HR and business teams are often all involved. Without clear board-level choices, responsibility can become fragmented, making it unclear who should step in when risks materialise or outcomes are challenged.
What this work usually looks like
Engagements typically take one of the following forms:
- A focused board or executive conversation: a structured discussion about where AI may run alongside decision-making and where board members need to stay hands-on.
- Clarification within existing governance: limited support in recording board-level choices around AI within existing governance or oversight structures, without adding new layers.
- Reflection before things get tense: board-level reflection on intervention, escalation and responsibility before an incident or external question forces the discussion.
Sometimes a single conversation is sufficient. In other cases, limited follow-up ensures that board-level choices remain recognisable and explainable over time.
From practice to board-level questions
AI governance discussions rarely start with technology. They begin when board members sense that AI has begun to influence outcomes, while the question of who decides and who carries responsibility has never been consciously discussed. Across sectors, similar questions emerge, often triggered by external attention, internal escalation or a growing unease about ownership.
Examples of questions that typically arise
- When AI has financial impact: Where was the decision made that this use was acceptable? And would the board make the same choice today?
- When AI affects people: Who feels responsible for the outcomes, and who can step in when they are challenged?
- When multiple functions are involved: Where does decision authority truly sit when several disciplines co-decide?
- When external questions arise: Who explains why these choices were made, and can the board show this was not accidental?
These are not technical questions. They are board-level questions about involvement, oversight and responsibility. Our role is not to answer these questions on behalf of boards, but to ensure they are recognised, discussed and owned at board level, before circumstances force the issue.
These conversations are often supported by the Governance Canvas: not as a checklist, but as a conversation tool to clarify where board members stay at the controls.
What this work deliberately is not
- No AI implementation
- No compliance outsourcing
- No tool-by-tool assessments
- No standard frameworks or maturity models
- No certifications or training programmes
Our role is not to manage systems, but to help boards take responsibility for how AI runs alongside decisions, and for how that is explained.

AI governance becomes visible when things get tense. Our work helps boards be prepared before that moment arrives.