This explorer follows the SRF's idea of a governance manual for decomposition: it shows who is accountable for each slice of the stack, how that accountability maps onto the MLOps lifecycle, and what each layer is for as AI adoption scales. That structure helps teams expand AI faster with less ambiguity: clearer procurement and vendor boundaries, stage-gate evidence aligned to named owners, and a path that holds up as systems grow more autonomous. It complements NIST AI RMF, ISO/IEC 42001, and the EU AI Act rather than replacing them.
Three views of one idea: decompose AI accountability by enterprise layer so obligations land on a named owner—something generic risk frameworks don’t spell out for AI-native, multi-vendor stacks.
AI responsibility components mapped to the five layers of the CoSAI AI Shared Responsibility Framework (V0.7). Cells follow the RACI matrices in §A.1.2–§A.1.5 of the whitepaper: Primary = accountable owner; Shared = responsible or supporting; Inherited = not directly involved, or the control is inherited from an upstream layer.
Scroll horizontally to see all nine responsibility components. The layer-label column is pinned.
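To pin down the cell semantics, here is a minimal TypeScript sketch of how a matrix row could be typed and validated. The layer names come from the framework; the component name and example assignments are illustrative assumptions, not an official CoSAI schema. The one-Primary-per-component check reflects the "one accountable owner" idea described above.

```ts
// Sketch only: layer names are from the framework; the component name and
// cell values below are hypothetical examples, not an official CoSAI schema.

type Layer =
  | "L1 Business & Usage"
  | "L2 Information"
  | "L3 Application"
  | "L4 Platform"
  | "L5 Model Provider";

// Cell values per the RACI matrices in §A.1.2–§A.1.5:
// Primary = accountable owner; Shared = responsible/supports;
// Inherited = not directly involved, or inherited from upstream.
type Assignment = "Primary" | "Shared" | "Inherited";

// One row per responsibility component: every layer gets exactly one cell.
interface ComponentRow {
  component: string;
  cells: Record<Layer, Assignment>;
}

// Hypothetical example row for illustration.
const exampleRow: ComponentRow = {
  component: "Data provenance (illustrative)",
  cells: {
    "L1 Business & Usage": "Shared",
    "L2 Information": "Primary",
    "L3 Application": "Inherited",
    "L4 Platform": "Inherited",
    "L5 Model Provider": "Shared",
  },
};

// The invariant the explorer leans on: exactly one Primary owner per component.
function primaryOwner(row: ComponentRow): Layer {
  const primaries = (Object.keys(row.cells) as Layer[]).filter(
    (layer) => row.cells[layer] === "Primary",
  );
  if (primaries.length !== 1) {
    throw new Error(`${row.component}: expected exactly one Primary owner`);
  }
  return primaries[0];
}

console.log(primaryOwner(exampleRow)); // "L2 Information"
```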
CRISP-ML(Q) / MLOps lifecycle, with each phase tagged to the CoSAI AI SRF layer that holds primary accountability for the controls in that phase. Useful for mapping evidence (per §A.7 of the whitepaper) to lifecycle stage gates. Click a phase to expand.
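As a sketch of how evidence could be filed against these stage gates, the snippet below tags each lifecycle phase with a primary-accountable layer. The phase names follow CRISP-ML(Q); the layer assignments and evidence artifacts are illustrative assumptions, not the whitepaper's actual mapping.

```ts
// Sketch only: phase names follow CRISP-ML(Q); the layer assignments and
// evidence artifacts are illustrative guesses, not the whitepaper's mapping.

type SrfLayer =
  | "Business & Usage"
  | "Information"
  | "Application"
  | "Platform"
  | "Model Provider";

interface LifecyclePhase {
  phase: string;
  primaryLayer: SrfLayer; // who is accountable for this phase's controls
  evidence: string[];     // artifacts filed at the stage gate (hypothetical)
}

const lifecycle: LifecyclePhase[] = [
  { phase: "Business & Data Understanding", primaryLayer: "Business & Usage",
    evidence: ["approved use case", "risk acceptance record"] },
  { phase: "Data Engineering", primaryLayer: "Information",
    evidence: ["provenance log", "rights clearance"] },
  { phase: "Model Engineering", primaryLayer: "Model Provider",
    evidence: ["training-data lineage", "model card draft"] },
  { phase: "Model Evaluation", primaryLayer: "Application",
    evidence: ["guardrail test results", "human-override checks"] },
  { phase: "Model Deployment", primaryLayer: "Platform",
    evidence: ["tenant isolation review", "quota configuration"] },
  { phase: "Monitoring & Maintenance", primaryLayer: "Platform",
    evidence: ["drift alerts", "incident runbook"] },
];

// Stage gate: evidence is only accepted when filed against the phase's primary layer.
function gateOwner(phaseName: string): SrfLayer | undefined {
  return lifecycle.find((p) => p.phase === phaseName)?.primaryLayer;
}

console.log(gateOwner("Data Engineering")); // "Information"
```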
The CoSAI AI Shared Responsibility Framework (V0.7) is an accountability model: it names who owns each part of the AI stack, so that obligations from NIST AI RMF, ISO/IEC 42001, the EU AI Act, and your internal control sets can each land on one primary party per concern. It works as a governance manual for decomposition: the enterprise layers express invariant dependencies, while operating models shift who supplies each layer. The outline below summarizes what each layer clarifies, and a short routing sketch follows it; the cards give illustrative AI concerns that become assignable and evidential once you adopt that structure, supporting faster triage, cleaner scaling, and repeatable expansion to new models and use cases.
- L1 — Business & Usage. Strategic accountability: permitted AI use, risk acceptance, regulatory posture, and lifecycle governance decisions—so requirements cascade from business intent to the layers below, matching the framework’s “governance from the Business layer.”
- L2 — Information. Data-as-model-fuel: provenance, rights, training data, answer-time source stores (including RAG knowledge bases), embeddings, and retrieval boundaries—extending information governance to what models actually ingest and emit.
- L3 — Application. Integration accountability: how capability meets users and APIs—surfaces, tools, agents, prompts, validation, and human override—where application developers discharge the framework's controls on prompt injection and guardrails.
- L4 — Platform. Shared runtime accountability: serving, tenancy, isolation, quotas, monitoring, and operational hardening at scale—including containing blast radius across tenants as in the framework’s layered incident example.
- L5 — Model provider. Artifact & supply-chain accountability: training provenance, the shipped model, cards and transparency, and change management for third-party and fine-tuned models—so model builders and vendors have a clear lane alongside what NIST and ISO frameworks ask of them.
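The routing sketch promised above: a minimal example, assuming hypothetical concern tags and owner assignments, of how an external obligation could be triaged to the single layer that holds primary accountability. None of the clause tags or mappings below come from CoSAI, NIST, ISO, or the EU AI Act; they are placeholders for whatever your own control catalog defines.

```ts
// Sketch only: concern tags, clause tags, and owner assignments are all
// hypothetical placeholders, not mappings made by CoSAI, NIST, ISO, or the EU.

type OwnerLayer =
  | "L1 Business & Usage"
  | "L2 Information"
  | "L3 Application"
  | "L4 Platform"
  | "L5 Model Provider";

// Concern -> primary owner; in practice this table would be derived from
// the matrix view above rather than written by hand.
const primaryByConcern: { [concern: string]: OwnerLayer } = {
  "permitted-use policy": "L1 Business & Usage",
  "training-data rights": "L2 Information",
  "prompt-injection guardrails": "L3 Application",
  "tenant isolation": "L4 Platform",
  "model transparency artifacts": "L5 Model Provider",
};

// An obligation from an external framework, tagged with the concern it touches.
interface Obligation {
  source: "NIST AI RMF" | "ISO/IEC 42001" | "EU AI Act" | "internal";
  clause: string; // free-text reference; no real clause numbers assumed
  concern: string;
}

// Triage: every obligation lands on at most one named owner.
function routeObligation(o: Obligation): OwnerLayer | undefined {
  return primaryByConcern[o.concern];
}

const example: Obligation = {
  source: "EU AI Act",
  clause: "transparency duties (illustrative tag)",
  concern: "model transparency artifacts",
};

console.log(routeObligation(example)); // "L5 Model Provider"
```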