AI governance — board-level advisory

We support boards and senior leaders
at the moment AI begins to run alongside decision-making.

Not by introducing new frameworks, tools or controls,
but by helping clarify
what can remain delegated within the organisation
and where the board itself needs to stay involved.


What this advisory work is about

For us, AI governance centres on one core board-level question:


What can we delegate now that AI runs alongside decisions —
and where do we, as a board, need to stay at the controls?

This question sits above technology, tooling and implementation.
It belongs with the board and senior leadership.

Our role is to help boards make that judgement consciously —
before AI use becomes the subject of external scrutiny,
incidents or challenge.


What we mean by “governance”

If by governance you mean
a checklist, maturity score or technical scan —
that is not what we do.

If you mean being able to explain
why board-level choices around AI were reasonable at the time,
then that is exactly our focus.

We work within applicable laws and recognised standards,
but the core question is always:
can the board itself explain
why it chose to remain involved where it did?


When organisations reach out

Boards and senior leaders typically contact us when:

  • AI is already in use, but its implications for the board were never discussed
  • responsibility for AI-related outcomes feels fragmented
  • there is strong focus on compliance, but less on board-level judgement
  • leaders prefer reflection upfront over explanation afterwards


How we work with boards

Our work almost always starts with a focused conversation.

Not an assessment, audit or scan.
But a structured board-level reflection on questions such as:

  • Where is AI already running alongside decision-making?
  • What are we deliberately leaving within the organisation?
  • Where should we, as a board, be more hands-on?
  • Who monitors outcomes once systems are live?
  • Who can step in if outcomes are challenged?
  • Who must be able to explain these choices?

The outcome is clarity — not a report.
Where relevant, board-level choices are captured
so they remain explainable over time.


Typical forms of collaboration

All engagements are governance-driven;
the form follows the question, not the other way around.

  • Board or executive conversation
    A focused discussion to clarify
    where AI runs alongside decisions
    and where board involvement needs to remain.
  • Clarification within existing governance
    Limited support to help existing forums, roles or decision-making
    better reflect the practical reality of AI use.
  • Targeted follow-up
    Selective follow-up where board-level choices
    require further clarification or documentation.


See how this shows up in practice →


From governance to execution

Governance does not replace policies, procedures or tooling.
It comes before them.

We help boards determine
what needs to be decided at board level
before Legal, Compliance, IT or Risk
translate those choices into execution.

This ensures execution follows conscious responsibility —
not the other way around.


What we deliberately do not do

  • No AI implementation
  • No technical risk assessments
  • No compliance outsourcing
  • No legal advice
  • No standard frameworks or checklists

AI governance only works
when board responsibility
is not outsourced to tools or templates.


What organisations gain

  • Clarity on where the board itself stays involved
  • Oversight aligned with actual decision impact
  • Board-level choices that remain explainable under scrutiny
  • Confidence that AI governance sits at the right level


How collaborations usually start

Most engagements begin with an exploratory conversation
with a CEO, General Counsel or board member.

No pitch.
No predefined solution.

But a shared reflection on the question:


Where are we currently letting AI run alongside decisions —
and where should we, as a board, be more hands-on?


Read how collaborations are structured →