
Two Agents, One Platform: The M365 Compliance Map Nobody Drew

How an ISO 27001 audit agent and an M365 operations agent share infrastructure while serving different masters — and the M365 telemetry mapping that drove the split.

This is the fifth article in a series on rethinking ISO 27001 compliance from first principles. The previous article described what continuous, structured evidence looks like. This one maps that architecture to the platform most organisations already pay for — and explains why it took two purpose-built agents to do it properly.

I reviewed all 93 Annex A controls in ISO 27001:2022 and asked one question: “Can Microsoft 365 provide evidence for this?” Not “is there a compliance product that claims to cover it?” Evidence. Proof that the control is operating, drawn from the platform’s own telemetry.

The answer surprised me. Not because the coverage was high — I expected that, having spent 35 years as an architect in the Microsoft stack. What surprised me was that nobody had done the mapping. Not Microsoft. Not the compliance consultants. Not the auditing firms. Everyone talks about M365 as a “security platform,” but nobody had drawn the line from individual Annex A controls to specific API endpoints and asked: “What can we actually measure?”

So I did the exercise. And then I built two systems to execute it.


The four domains

ISO 27001:2022 organises its 93 Annex A controls into four groups:

  • Organisational (A.5.x): 37 controls. Governance, roles, and asset management.
  • People (A.6.x): 8 controls. Screening and terms of employment.
  • Physical (A.7.x): 14 controls. Perimeters and equipment maintenance.
  • Technological (A.8.x): 34 controls. Endpoints, identity, and cryptography.

The instinct is to assume that M365 handles the technological controls and everything else is manual. That instinct is wrong.


The breakdown: automation vs governance

The mapping produced three distinct categories. Understanding the difference is the key to reducing your “compliance tax.”

  1. Automated (40%): M365 provides evidence via API queries alone. Evaluation is deterministic — the data is structured, and the measurement is repeatable. A.8.5 (Secure Authentication), for instance, is a direct pull from Entra ID.
  2. Hybrid (38%): This is where the real work lives. The platform provides the digital telemetry (e.g., DLP alerts or sensitivity label logs), but a human must supplement it with the “governance wrap.” For A.5.14 (Information Transfer), M365 proves the technical enforcement, while you provide the policy documentation.
  3. Manual (22%): Controls outside the platform’s reach. No API will tell you if your server room has fire suppression (A.7.x). Attempting to “creatively interpret” these into automation is a fast track to an audit failure.

The industry often presents compliance as binary — automated or not. This misses the point. Automation handles the tedious evidence collection so the architect can focus on the 38% where human judgment actually moves the needle.
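The "deterministic evaluation" in the automated category can be made concrete. Here is a minimal Python sketch of the idea, assuming a rule carries a threshold and a weight; the names (`Rule`, `evaluate`) are illustrative, not the production system's API, and the production scripts are PowerShell:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    control: str          # e.g. "A.8.5"
    metric: str           # what the Graph query measured
    threshold: float      # minimum passing value
    weight: float         # contribution to the control score

def evaluate(rule: Rule, measured: float) -> dict:
    """Deterministic: the same inputs always yield the same verdict."""
    passed = measured >= rule.threshold
    return {
        "control": rule.control,
        "metric": rule.metric,
        "measured": measured,
        "passed": passed,
        "weighted_score": rule.weight if passed else 0.0,
    }

# A.8.5 (Secure Authentication): require at least 95% of users covered by MFA.
mfa_rule = Rule(control="A.8.5", metric="mfa_coverage", threshold=0.95, weight=1.0)
result = evaluate(mfa_rule, measured=0.97)
print(result["passed"])  # True
```

That repeatability is what separates the automated 40% from the hybrid 38%: no human reads the output before a verdict exists.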

In practice, the hybrid category turned out to be more interesting than I expected. Several controls I initially classified as manual — Clause 10.1 (Continual Improvement), A.5.5 (Contact with Authorities), A.5.29 (Information Security During Disruption) — became hybrid once I realised the platform provided data I hadn’t considered. The management system clauses (Clauses 4 through 10) yielded 23 additional collection scripts once I started treating them as measurable subclauses rather than pure governance documents.

The final count surprised me with its trajectory. The initial architecture was pure PowerShell — 107 scripts covering 84 Annex A controls and 23 management system subclauses. But as the system matured, a pattern emerged: the management system clauses (Clauses 4 through 10) required a different kind of evidence — structured reports assembling data from multiple sources into a coherent narrative. PowerShell scripts that collected raw data weren’t enough; the clauses needed builders that synthesised evidence into auditor-ready reports.

The current architecture is a hybrid pipeline: 93 PowerShell collection scripts running alongside 30 C# clause report builders — 7 top-level builders for Clauses 4 through 10 and 23 subclause builders — all sharing a common rule evaluation interface (IRuleEvaluator). The PowerShell scripts handle the breadth, querying Graph API endpoints across all 93 controls; the C# builders handle the depth, assembling clause-level evidence into structured reports with scorecards, cross-references, and trend data. Both produce structured, weighted, sealed evidence on every run.
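The value of the shared interface is that the pipeline never cares which producer a verdict came from. The real contract is C# (IRuleEvaluator); this Python sketch only illustrates the shape I assume it takes, with hypothetical names:

```python
from typing import Protocol

class RuleEvaluator(Protocol):
    """Python stand-in for the C# IRuleEvaluator contract (shape assumed)."""
    def evaluate(self, evidence: dict) -> dict: ...

class CollectionScriptEvaluator:
    """Breadth: a verdict from raw Graph telemetry for a single control."""
    def evaluate(self, evidence: dict) -> dict:
        ratio = evidence["compliant_count"] / evidence["total_count"]
        return {"source": "collection_script", "passed": ratio >= 0.95}

class ClauseReportEvaluator:
    """Depth: a clause report passes only when every contributing control passed."""
    def evaluate(self, evidence: dict) -> dict:
        return {"source": "clause_builder", "passed": all(evidence["control_results"])}

def score(evaluator: RuleEvaluator, evidence: dict) -> bool:
    # The pipeline sees only the shared contract, so script output and
    # clause reports flow through the same evaluation path.
    return evaluator.evaluate(evidence)["passed"]
```

Adding a new evidence producer then means implementing one interface, not rewiring the pipeline.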


The capability map: 79 capabilities across 93 controls

When you stop looking at M365 as a bundle of products and start seeing it as a stack of capabilities, a structure emerges. Across the tenants I manage, I have catalogued 79 distinct capabilities that map to ISO 27001 controls.

ISO Control                M365 Capability                  Primary Evidence Source (Graph API)
A.5.12 (Classification)    Purview Information Protection   GET /informationProtection/sensitivityLabels
A.8.1 (Endpoints)          Microsoft Intune                 GET /deviceManagement/managedDevices
A.8.12 (DLP)               Purview DLP                      GET /security/alerts (DLP filtered)
A.8.2 (Privileged Access)  Entra ID PIM                     GET /privilegedAccess/aadRoles/resources

The risk here is the “denominator problem” — double-counting a single Conditional Access policy as evidence for four different controls without cross-referencing. A strategic architecture ensures that when evidence from A.5.12 appears in A.5.13, it is marked as informational, keeping the compliance score honest and defensible.
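The guard against the denominator problem can be sketched as a simple rule: a piece of evidence is scored against exactly one primary control, and everywhere else it attaches as informational. This Python fragment is illustrative; the identifiers are hypothetical:

```python
def attach_evidence(evidence_id: str, primary_control: str,
                    controls: list[str]) -> list[dict]:
    """Link one evidence artifact to many controls, scoring it only once."""
    return [
        {
            "control": control,
            "evidence_id": evidence_id,
            "role": "scored" if control == primary_control else "informational",
        }
        for control in controls
    ]

# One Conditional Access policy referenced by four controls,
# but counted toward only one control's denominator.
links = attach_evidence("ca-policy-042", "A.5.12",
                        ["A.5.12", "A.5.13", "A.8.2", "A.8.5"])
print(sum(1 for link in links if link["role"] == "scored"))  # 1
```

However the real system implements it, the invariant is the same: the sum of scored evidence across all controls equals the number of distinct artifacts, not the number of cross-references.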

The capability map serves a second purpose: deployment tracking. Each capability has a maturity status — not started, planned, in progress, deployed — derived from the evidence itself. If the evidence rules for a capability are all passing, the capability is deployed. If they’re partially passing, it’s in progress. This transforms the capability registry from a static inventory into a living roadmap that updates with every evidence collection run.
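The derivation of maturity status from rule results is mechanical. A sketch, under the assumption that "planned" means rules exist but none pass yet (the exact mapping in the production system may differ):

```python
def maturity(rule_results: list[bool]) -> str:
    """Derive a capability's deployment status from its evidence rule verdicts."""
    if not rule_results:
        return "not started"          # no rules defined for this capability yet
    passing = sum(rule_results) / len(rule_results)
    if passing == 1.0:
        return "deployed"             # all rules passing
    if passing > 0.0:
        return "in progress"          # partial coverage
    return "planned"                  # rules defined, none passing (assumed mapping)

print(maturity([True, True, True]))   # deployed
print(maturity([True, False]))        # in progress
print(maturity([]))                   # not started
```

Because the status is computed rather than asserted, nobody has to remember to update the roadmap: the next collection run updates it.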


CIS and ISO: complementary, not competing

There is a persistent confusion in the security space between CIS Benchmarks and ISO 27001. They measure different dimensions:

  • CIS Benchmarks measure configuration state. Is the setting enabled? (The “What”).
  • ISO 27001 measures management effectiveness. Is there a review process? Do you learn from incidents? (The “How” and “Why”).

A CIS-aligned configuration in Intune becomes ISO evidence when you add the management layer: who reviewed the baseline, what happens when an exception is needed, and how is that exception tracked? The platform provides both the setting and the audit trail; the architect’s job is to connect them.

This distinction is why I built two agents, not one.


Two agents: separation of concerns

The Audit Agent handles ISO 27001. It collects evidence for 93 controls, evaluates rules against thresholds, produces weighted compliance scores, manages risk traceability, and tracks corrective actions. Its domain is the management system — the “how” and “why” of security.

The Operations Agent handles CIS Benchmarks. It assesses M365 tenant configurations against the CIS Microsoft 365 Foundations Benchmark, producing pass/fail results per CIS control. Its domain is configuration state — the “what” of security.

They’re separate by design. This isn’t a limitation; it’s an architectural decision that reflects the standard itself. Annex A control A.5.3 requires segregation of duties. A single system that both assesses security posture and evaluates management effectiveness would create a circular dependency — the compliance system would be evaluating its own output.

But they share infrastructure. The same Key Vault credentials, the same Cosmos DB account, the same Azure subscription. And critically, the CIS assessment results flow from the Operations Agent to the Audit Agent via a callback — appearing on the Statement of Applicability, the compliance dashboard, and the evidence browser alongside the ISO evidence.

This gives the organisation something no single tool provides: a view where CIS tells you “your anti-malware settings are configured correctly” and ISO tells you “your anti-malware management process is working, with evidence of reviews, exception handling, and incident response.” The “what” and the “why” on the same screen, cross-referenced at the control level.

The callback architecture proved more versatile than I originally designed it to be. The CIS results were the first cross-agent data flow, but a second followed quickly: FinOps cost data. The Operations Agent collects Azure consumption data, reservation coverage, and savings plan recommendations — and pushes cost metrics to the Audit Agent via the same callback pattern. This means the compliance dashboard can display not just “are your controls working?” but “what is your security investment costing?” — connecting Clause 5.1 (management commitment, including resource allocation) to actual spend data.
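What made the callback reusable is that CIS results and FinOps metrics ride the same envelope, with only the payload varying. The field names below are assumptions for illustration, not the production schema:

```python
import json
from datetime import datetime, timezone

def make_callback(tenant_id: str, source: str, kind: str, payload: dict) -> str:
    """Wrap any cross-agent data flow in one common envelope (schema assumed)."""
    envelope = {
        "tenant_id": tenant_id,
        "source_agent": source,        # who produced the data
        "kind": kind,                  # "cis_results", "finops_metrics", ...
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,            # the only part that varies per data source
    }
    return json.dumps(envelope)

cis = make_callback("contoso", "operations-agent", "cis_results",
                    {"benchmark": "M365 Foundations", "passed": 102, "failed": 9})
finops = make_callback("contoso", "operations-agent", "finops_metrics",
                       {"monthly_spend": 1842.50, "reservation_coverage": 0.71})
```

The Audit Agent routes on `kind` and stores the payload; adding a new data source is a new `kind`, not a new integration.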

The third data source wasn’t cross-agent at all — it was external. CyberAware, a security awareness training partner, provides completion rates and phishing simulation results via their API. The Audit Agent consumes this directly, mapping training evidence to A.6.3 (Information Security Awareness, Education and Training). This mattered because it proved the architecture wasn’t locked to M365 telemetry alone. Any structured data source — partner API, external platform, third-party tool — can feed evidence into the same weighted, rule-based framework. The M365 stack provides the majority of evidence, but the architecture doesn’t require exclusivity.


The multi-tenant reality

If you’re an MSP managing compliance for multiple customers, the architecture must be multi-tenant from the start. Not as an afterthought — as a constraint that shapes every design decision.

Each tenant has its own evidence collection schedule (daily, weekly, or monthly). Each has its own exception groups, naming conventions, and compliance thresholds — because what constitutes an “exception” in a five-person law firm is different from a two-hundred-person engineering company. Each has its own risk register, its own Statement of Applicability, and its own corrective action history.

The platform I’ve built manages this through a three-tenant architecture: a management tenant (infrastructure, Key Vault, credentials), a primary customer tenant (the reference implementation), and additional customer tenants onboarded through a standardised process. Tenant-specific configuration — exception group names, detection patterns, policy naming, vendor references — is abstracted into per-tenant naming configuration rather than hard-coded into scripts.
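The per-tenant naming abstraction boils down to scripts looking up names instead of hard-coding them. A minimal sketch, with assumed fields and invented example values:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantNaming:
    """Per-tenant configuration consumed by otherwise generic scripts."""
    tenant_id: str
    exception_group: str       # the tenant's exception/exemption group name
    policy_prefix: str         # the tenant's policy naming convention
    schedule: str              # "daily" | "weekly" | "monthly"

def exception_group_for(configs: dict[str, TenantNaming], tenant_id: str) -> str:
    # One codebase, many tenants: the collection script stays generic.
    return configs[tenant_id].exception_group

configs = {
    "lawfirm": TenantNaming("lawfirm", "SG-CA-Exceptions", "LAW-", "weekly"),
    "engco":   TenantNaming("engco", "grp-ca-exempt", "ENG-", "daily"),
}
print(exception_group_for(configs, "lawfirm"))  # SG-CA-Exceptions
```

Onboarding a new tenant then means writing one configuration record, not forking ninety-three scripts.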

This is the MSP scaling model the industry hasn’t built: not “one dashboard that shows all tenants” but “one architecture that produces independent, verifiable, tenant-specific evidence at scale.”


The “Living SOA” strategy

If you run M365 E5, the evidence for roughly 75 of your 93 controls is already sitting in your tenant. It is latent compliance — data you’ve already paid for but haven’t queried.

The Statement of Applicability (SOA) should be the bridge. Instead of a static spreadsheet filed away once a year, the SOA should be a living configuration document. It should define: “For control A.8.1, we measure these seven rules, using these API endpoints, against these thresholds.”

This shifts the architectural goal from “passing an audit” to “continuous verification.” It reduces the billable-hour sinkhole of manual evidence collection and restores the SOA to what it was always meant to be: a decision document.

In the system I’ve built, the SOA is exactly this. It’s enriched with evidence status from the latest collection run, CIS Benchmark results from the Operations Agent, and Compliance Manager sync status from Microsoft Purview. It’s not a spreadsheet — it’s a view that assembles itself from the current state of the tenant.
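A living SOA entry, then, is configuration rather than a spreadsheet row: the control declares its rules, endpoints, and thresholds, and the pipeline reads them. The structure below is an illustrative sketch, not the production schema:

```python
soa_entry = {
    "control": "A.8.1",
    "applicable": True,
    "justification": "Endpoints managed via Intune",
    "rules": [
        {
            "name": "devices_compliant",
            "endpoint": "GET /deviceManagement/managedDevices",
            "metric": "compliant_ratio",
            "threshold": 0.95,
        },
        # ...further rules for this control
    ],
}

def soa_is_measurable(entry: dict) -> bool:
    # An applicable control with no rules is a gap in the living SOA:
    # a decision was made, but nothing verifies it.
    return entry["applicable"] and len(entry["rules"]) > 0

print(soa_is_measurable(soa_entry))  # True
```

A completeness check like `soa_is_measurable` is the difference between an SOA you file and an SOA that files itself.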


The Microsoft ghetto?

Is leaning entirely on M365 a limiting factor? Yes and no. The risk is concentration. If you rely on one platform for your identity, your endpoints, and your compliance evidence, you have a single point of failure.

However, for most businesses, the alternative isn’t “better security” — it’s “fragmented chaos.” Control A.5.22 of ISO 27001:2022 deals specifically with monitoring supplier services. By using the M365 stack for evidence, you aren’t just ticking a box; you are forcing yourself to master the architecture of your own environment. The platform isn’t the limit; your ability to query its telemetry is.

The two-agent architecture partially mitigates the concentration risk through a different mechanism: operational monitoring (Ops Agent) is separated from compliance assessment (Audit Agent). If one system fails, the other continues independently. The shared infrastructure is a pragmatic trade-off — separate Azure subscriptions would double the cost without meaningfully reducing the dependency on M365 itself.


The pragmatic conclusion

How much of your M365 E5 telemetry do you actually use for evidence? Not how many features are turned on — how many are actively measured, weighted, and sealed for an auditor?

The telemetry for most of your Annex A controls already exists. The question isn’t whether the data is there. It’s whether you have the architectural clarity to stop buying more tools and start querying the ones you already own.

And if you’re an MSP: the question is whether you’re offering your clients screenshots and spreadsheets, or whether you’re offering them a system that produces verifiable evidence as a byproduct of the management you’re already doing. The difference isn’t technical. It’s strategic.


JJ Milner is a Microsoft MVP and contributor to the CIS Benchmarks. He writes about rethinking security and compliance from first principles.

JJ Milner

Microsoft MVP and founder of Global Micro Solutions. 30+ years securing Microsoft environments across 1,200+ tenants. Writes about rethinking compliance from first principles.
