
The Questions Nobody Asks: Challenging Compliance Orthodoxy

Six auditor questions that don't ask 'do you have this?' but 'why did you choose this, and how do you know it's working?' The hard ones expose gaps no documentation covers.

This is part of a series on rethinking ISO 27001 compliance from first principles. In an earlier article, I introduced the three difficulty levels of auditor questions. This one goes deeper — into the questions that don’t ask “do you have this?” but “why did you choose this, and how do you know it’s working?”

I’ve been compiling auditor questions. Not the routine ones — the ones that challenge conventional thinking about how an ISMS should work.

Most organisations prepare for the first two levels, routine and probing. They can demonstrate that their policies exist and that their processes are documented. What they can’t do, and what almost nobody prepares for, is justify why they made the design decisions they made.

I’m going to walk through six of those challenging questions. Each one sounds reasonable. Each one exposes a gap that no amount of documentation can cover.


“How do you ensure consistency when different people perform risk assessments?”

This is a Clause 6.1.2 question. The standard requires that the risk assessment process “produces consistent, valid and comparable results.” Clause 9.1 reinforces this by mandating that monitoring and measurement methods must produce “comparable and reproducible results” to be considered valid. Most organisations interpret this as “we have a documented methodology.” The auditor interprets it as: if two different people assessed the same risk independently, would they arrive at similar scores?

The typical answer: “We use a risk matrix. Everyone follows the same criteria.”

The follow-up: “Show me. Take this risk — unauthorised access to cloud resources — and walk me through how Person A and Person B would arrive at the same score.”

This is where it falls apart. Most risk matrices include descriptions such as “Major: significant financial loss” and “Likely: expected to occur annually.” What counts as “significant”? To a five-person MSP, a £50,000 loss might be existential. To a multinational, it’s a rounding error. “Expected to occur annually” — based on what data?

The real answer requires calibration. Documented examples of what each score level means in the organisation’s specific context. Worked examples showing how the methodology was applied to actual risks. Evidence that different people have applied it and produced comparable results — or, if they haven’t, evidence that you’ve identified this gap and are addressing it.

The standard doesn’t require perfection here. It requires that you’ve thought about consistency and can demonstrate how you pursue it. The fact that most organisations have never tested their risk methodology with more than one assessor is itself a finding — not because it fails, but because it reveals the process hasn’t been validated.
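One way to test consistency without waiting for an auditor is a simple inter-assessor comparison: have two people score the same risks independently, then flag any risk where their scores diverge by more than one band. The sketch below is illustrative only — the risk names, assessors, and tolerance are hypothetical, not a prescribed methodology.

```python
# Illustrative inter-assessor calibration check (hypothetical data).
# Each risk maps assessor name -> (likelihood, impact) scores on a 1-5 scale.
risks = {
    "Unauthorised access to cloud resources": {"alice": (4, 3), "bob": (4, 4)},
    "Laptop theft": {"alice": (3, 2), "bob": (2, 2)},
    "Supplier outage": {"alice": (5, 3), "bob": (3, 3)},
}

def divergent(scores, tolerance=1):
    """Return risks where likelihood or impact differs by more than `tolerance`."""
    flagged = []
    for risk, by_assessor in scores.items():
        (l1, i1), (l2, i2) = by_assessor.values()
        if abs(l1 - l2) > tolerance or abs(i1 - i2) > tolerance:
            flagged.append(risk)
    return flagged

print(divergent(risks))  # risks needing a calibration discussion
```

Each flagged risk becomes an input to calibration: the assessors discuss why their scores differed, and the resolution is documented as a worked example for the methodology. The flagged list itself is evidence that consistency is being pursued.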


“Continual improvement and corrective action — how are they different in your ISMS?”

This catches almost everyone. Clause 10.1 (Continual Improvement) and Clause 10.2 (Nonconformity and Corrective Action) appear adjacent in the standard. Most organisations treat them as a single process: something breaks, you fix it, that’s improvement.

It’s not.

Clause 10.2 is reactive. Something went wrong — a failed audit finding, an incident, a control that dropped below threshold. You identify the nonconformity, determine the root cause, implement a corrective action, and verify it worked. This is a fix.

Clause 10.1 is proactive. Nothing is necessarily broken. You identify an opportunity — a better way to configure a policy, a new capability in the platform you haven’t leveraged, a process that works but could work more efficiently. This is an improvement. While Clause 10.1 addresses the ongoing suitability and effectiveness of the ISMS, the primary requirement for identifying these “opportunities” begins in the initial planning phase under Clause 6.1.1.

The difference matters because they have different triggers, different tracking, and different evidence.

Corrective action is triggered by failure: an audit finding, a non-compliant evidence result, or an incident report. The evidence trail runs from the nonconformity through root cause analysis to the corrective action and its verification.

Continual improvement is triggered by opportunities: a new Defender feature release, a Secure Score recommendation you haven’t evaluated, or a management review discussion about process efficiency. The evidence trail runs from the opportunity identification through evaluation to implementation and effectiveness review.

If your ISMS has a corrective actions log but no improvement register, you can fix problems, but you can’t demonstrate improvement. The auditor is asking whether your ISMS is a system that only reacts to failures or one that actively seeks improvement. The standard requires both.
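Because the two processes have different triggers and different evidence trails, it helps to model them as distinct record types rather than one shared issues log. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class CorrectiveAction:          # Clause 10.2 — reactive
    nonconformity: str           # what failed: finding, incident, breached threshold
    root_cause: str
    action: str
    verified_effective: bool     # the trail ends with verification, not implementation

@dataclass
class Improvement:               # Clause 10.1 — proactive
    opportunity: str             # nothing is broken; something could be better
    source: str                  # e.g. feature release, management review
    evaluation: str
    implemented: bool

ca = CorrectiveAction("PIM misconfigured for three months",
                      "No configuration drift alerting",
                      "Enable drift alerts on privileged role settings",
                      verified_effective=True)
imp = Improvement("Evaluate new Defender capability",
                  "release notes review",
                  "Would reduce manual alert triage",
                  implemented=False)
```

The design choice the two types encode is the point: a corrective action is not closed until it is verified effective, while an improvement starts from an opportunity and a source, not a failure. Collapsing them into one log loses exactly the distinction the auditor is probing.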


“Your organisation relies heavily on Microsoft 365. How do you manage the concentration risk?”

This is an A.5.19/A.5.22 question about supplier management, and it’s devastating because the honest answer for most M365-dependent organisations is: “We don’t, really.”

The auditor isn’t suggesting you shouldn’t use M365. They’re asking whether you’ve assessed the risk of single-vendor dependency and determined it to be acceptable — with documented justification — or whether you simply never considered it. This concentration risk is now explicitly governed by the new 2022 control A.5.23 (Information security for use of cloud services), which mandates structured processes for managing the entire cloud service lifecycle, including acquisition, use, and secure exit strategies.

The typical answer: “Microsoft is a large, reliable vendor. We trust them.”

The follow-up: “Trust is not a control. What happens if Microsoft has a major outage? What’s your business continuity plan? Where is your supplier risk assessment for Microsoft as a critical service provider?”

The right answer isn’t “we’re migrating away from M365.” The right answer demonstrates that you’ve assessed the risk, quantified the dependency, identified the controls that mitigate it, and accepted the residual risk at the appropriate management level. That might include:

  • A documented supplier risk assessment covering Microsoft’s certifications (ISO 27001, SOC 2), SLA commitments, and shared responsibility model
  • Business continuity provisions for extended outages
  • Data portability assessment — could you retrieve your data if you needed to leave?
  • Regular review of Microsoft’s Service Trust Portal for audit reports and compliance certifications

The question tests whether you’ve thought about a risk that’s so fundamental to your operations that most organisations treat it as invisible. The vendor is so embedded that questioning the dependency feels absurd. But the standard requires you to manage supplier relationships — all of them, including the one that runs your entire platform.


“How did you decide on your information security policy structure, and what alternatives did you consider?”

This is an A.5.1 question, and it’s challenging because most organisations have not made a decision. They copied a template. Or a consultant wrote their policies. Or they adopted whatever structure their compliance platform provided.

The auditor is testing whether the policy framework was a deliberate design decision or an unintended consequence of implementation.

There are legitimate structural options. A monolithic policy — one large document covering everything. A domain-based approach — policies grouped by topic (access control, cryptography, operations). A control-aligned approach — one policy per Annex A control. Each has trade-offs in maintainability, auditability, and ownership clarity.

The typical answer: “We have a comprehensive set of policies covering all controls.”

The follow-up: “How do you ensure consistency across them? How do you detect gaps between your Statement of Applicability and your policy set? If a new control is added, how does a corresponding policy get created?”

If the answer to that last question is “the consultant creates it,” you’ve just revealed that your policy framework depends on an external party. If the answer is “we have a template and a process,” the auditor will request the template and the most recently created policy to verify that the process runs.

The point isn’t that one structure is better than another. The point is that you should be able to articulate why you chose yours, what trade-offs you considered, and how you maintain it. “We inherited it from the consultant” is honest, but it’s not evidence of a managed process.
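Detecting gaps between the Statement of Applicability and the policy set is, mechanically, a set difference. The sketch below uses hypothetical control IDs and policy names to show the shape of the check:

```python
# Hypothetical gap check: any control marked applicable in the SoA
# with no covering policy should surface as a finding internally,
# not be discovered by the auditor.
soa_applicable = {"A.5.1", "A.5.19", "A.5.23", "A.8.1"}

policy_coverage = {
    "Information Security Policy": {"A.5.1"},
    "Supplier Management Policy": {"A.5.19"},
}

covered = set().union(*policy_coverage.values())
gaps = sorted(soa_applicable - covered)
print(gaps)  # applicable controls with no covering policy
```

Running a check like this on a schedule turns “how do you detect gaps?” from a question about intentions into a question with an evidence trail.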


“If all your internal audits show full compliance, how do I know the audits are rigorous?”

I mentioned this question briefly in an earlier article, but it deserves deeper treatment because it encapsulates a fundamental paradox of compliance: perfect results are suspicious.

Clause 9.2 requires internal audits. Most organisations conduct them annually, covering all controls over a cycle. The instinct is to want clean results — no findings means no problems, which means the ISMS is working.

The auditor sees it differently. An internal audit programme that consistently reports zero findings is either not looking hard enough or not being honest about what it finds. Real systems have friction. Real processes have gaps. Real environments have edge cases, exceptions, and things that almost worked but didn’t quite.

The right answer to this question is counterintuitive: you want your internal audits to find things. Findings that are identified, tracked through corrective action, and resolved are evidence of a functioning ISMS. They demonstrate that the audit process has teeth, that the organisation is willing to admit imperfection, and that the corrective action process actually works.

The follow-up is worse: “How do you ensure objectivity when you have a small team?”

In a small organisation — and most M365-dependent businesses are not large enterprises — the people who built the ISMS are the same people who audit it. The standard recognises this reality; it doesn’t require external auditors. It does require, in Clause 9.2.2, that auditors be selected and audits conducted in a way that ensures the objectivity and impartiality of the audit process. In practice, that means people do not audit their own work.

The practical answer: if the CTO wrote the disaster recovery plan, someone else audits it. If the security lead configured Conditional Access, a different person reviews the evidence. You document the audit assignment matrix, you ensure no conflicts, and — critically — you keep records that demonstrate this discipline was maintained across the full audit cycle.
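The assignment matrix lends itself to an automated conflict check: nobody may audit a control they own. A minimal sketch, with hypothetical role and control names:

```python
# Hypothetical self-audit conflict check for a small-team audit programme.
# Control -> role that owns (built/operates) it.
ownership = {
    "Disaster recovery plan": "cto",
    "Conditional Access configuration": "security_lead",
    "Joiner/leaver process": "ops_manager",
}

# Control -> role assigned to audit it this cycle.
assignments = {
    "Disaster recovery plan": "security_lead",
    "Conditional Access configuration": "ops_manager",
    "Joiner/leaver process": "cto",
}

# A conflict is any control audited by its own owner.
conflicts = [c for c, auditor in assignments.items() if ownership[c] == auditor]
assert not conflicts, f"Self-audit conflicts: {conflicts}"
print("Audit assignments are conflict-free")
```

Keeping the ownership map and the assignment map as separate records, and checking one against the other each cycle, is itself the evidence that the discipline was maintained.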


“How do you justify using proxy measurements rather than direct measurements in your evidence collection?”

This question cuts to the heart of everything I’ve been writing about in this series. It’s an A.8.1 question at its core, but it applies to any control where the measurement is indirect.

I discussed proxy measurements in an earlier article: BitLocker status as a proxy for encryption and Secure Score as a proxy for security posture. The auditor is now asking: Do you know you’re using proxies, and can you defend that choice?

The typical answer is silence, because most organisations don’t realise they’re using proxies. They think the Intune compliance report is the compliance state, not a proxy for it.

The right answer requires three things:

First, acknowledge the proxy. “We use BitLocker status as reported by Intune to evidence encryption coverage. This is a proxy measurement — it measures the presence of a specific encryption technology, not the encryption state of the data.”

Second, justify it. “Direct measurement of data encryption state is not available via the M365 API. BitLocker is the platform’s encryption mechanism for Windows devices, making its status the most reliable available indicator.”

Third, document the limitation. “This proxy does not account for devices encrypted by other mechanisms — such as Azure VMs using server-side encryption — which is why we exclude non-endpoint devices from this measurement and report them under a separate control.”
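The scoping decision in that third step can be made explicit in the measurement itself. The sketch below filters a hypothetical extract of a managed-device report; the field names echo properties on the Microsoft Graph `managedDevice` resource (`operatingSystem`, `isEncrypted`), but treat them as assumptions rather than a tested integration:

```python
# Sketch of the proxy measurement with its documented exclusion applied.
# Hypothetical extract of an Intune managed-device report.
devices = [
    {"name": "LAPTOP-01", "operatingSystem": "Windows", "isEncrypted": True},
    {"name": "LAPTOP-02", "operatingSystem": "Windows", "isEncrypted": False},
    {"name": "VM-PROD-01", "operatingSystem": "Windows Server", "isEncrypted": False},
]

# Scope the proxy to endpoints only; servers are evidenced under a
# separate control (e.g. Azure server-side encryption), per the
# documented limitation of this measurement.
in_scope = [d for d in devices if d["operatingSystem"] == "Windows"]
coverage = sum(d["isEncrypted"] for d in in_scope) / len(in_scope)
print(f"BitLocker proxy coverage (endpoints only): {coverage:.0%}")
```

The exclusion is written into the code, not left as tribal knowledge, so the measurement and its limitation travel together into the evidence record.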

This is where the compliance preparation industry has a blind spot. The tools it sells produce measurements. Nobody asks what those measurements actually mean, whether they’re direct or proxy, and what the implications are when the proxy diverges from reality. The auditor who asks this question is testing whether you understand your own evidence — not just whether you can produce it.


The questions I’ll leave you with

Here are three. Don’t look up the answers. Just ask yourself whether you could answer them — with evidence — right now.

One. Your organisation acquires a smaller company next month. How do you expand your ISMS scope to cover their information assets, their personnel, and their IT infrastructure — and what risk assessment do you perform on systems you didn’t build?

Two. Your highest-scoring risk is rated “Likely, Major” with a residual treatment of “Treat.” The treatment relies on four M365 capabilities. One of them — Privileged Identity Management — was misconfigured for three months before anyone noticed. How did your monitoring fail, and what corrective action prevents it from happening again?

Three. An auditor asks your newest team member: “What would you do if you received a suspicious email?” They answer correctly. Then the auditor asks: “How do you know that answer reflects training rather than common sense?” What evidence do you have that your awareness programme caused the behaviour, rather than merely preceded it?

These aren’t gotcha questions. They’re the questions the standard is designed to answer. If your ISMS can answer them, the audit is a formality. If it can’t, the question is whether you have a management system or just a collection of documents.

I’ve been compiling questions like these — testing them in real environments, calibrating the difficulty, and mapping them to specific clauses and controls. The compilation now stands at 788 questions spanning every clause and every Annex A control, each classified by difficulty level and tagged with the specific evidence required to answer it.

The exercise revealed something unexpected: the questions that are hardest to answer are often the simplest to state. “How do you know your risk methodology produces consistent results?” is ten words. Answering it with evidence requires a calibration framework, worked examples, and inter-assessor testing. The gap between the question and the evidence isn’t knowledge — it’s architecture. You either have a system that produces the evidence, or you don’t.

This is why I believe the questions themselves are the most valuable artefact in a compliance programme. Not the policies, not the controls, not the risk register — the questions. Because if you can answer the challenging ones, the routine ones take care of themselves.

A postscript: several of these questions became design specifications. The question about continual improvement versus corrective action led to separate tracking systems — an improvement register with proactive triggers and a corrective actions pipeline with reactive ones, each with distinct evidence trails. The question about internal audit rigour led to a structured audit programme with scheduling, per-control observations, and a findings lifecycle that tracks from identification through corrective action to closure. The question about audit objectivity led to a roles register — 33 named roles across five organisational tiers — that documents who audits what and ensures no one marks their own homework.

The questions didn’t just test the ISMS. They shaped it.


JJ Milner is a Microsoft MVP and the founder of Global Micro Solutions, a managed services provider operating across 1,200+ Microsoft 365 tenants. He writes about rethinking compliance from first principles.


