Cybersecurity · 12 min read

Security Architecture Review: The 20-Point Audit We Run on Every Engagement

A comprehensive 20-point security architecture review checklist covering identity, network, encryption, logging, incident response, backup, and more — with scoring methodology.


Every engagement we undertake at CC Conceptualise begins with a structured security architecture review. Over years of conducting these assessments across regulated industries, we have refined a 20-point checklist that reliably identifies the security gaps that matter. Not theoretical risks — the actual weaknesses that attackers exploit and auditors flag.

This article shares our methodology, the 20 controls we assess, and the scoring framework we use to prioritise remediation.

Scoring Methodology

Each control is scored on a 0-5 scale:

Score  Maturity Level  Description
0      Non-existent    No control implemented
1      Ad-hoc          Some activity exists but is inconsistent and undocumented
2      Developing      Control is partially implemented with significant gaps
3      Defined         Control is implemented and documented but not consistently monitored
4      Managed         Control is implemented, monitored, and regularly tested
5      Optimising      Control is fully automated, continuously monitored, and regularly improved

Aggregate scoring (the sum of all 20 control scores, so a maximum of 100):

  • 80-100: Strong posture — focus on optimisation
  • 60-79: Moderate posture — address critical gaps within 90 days
  • 40-59: Weak posture — significant remediation required
  • Below 40: Critical — immediate action required across multiple domains
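The aggregate arithmetic above can be sketched in a few lines. This is an illustrative helper, not part of our tooling; the band labels are taken from the thresholds in this article.

```python
def aggregate_score(control_scores: list[int]) -> int:
    """Sum 20 per-control scores (0-5 each) into a 0-100 posture score."""
    if len(control_scores) != 20:
        raise ValueError("expected exactly 20 control scores")
    if any(not 0 <= s <= 5 for s in control_scores):
        raise ValueError("each control score must be between 0 and 5")
    return sum(control_scores)

def posture_band(total: int) -> str:
    """Map an aggregate score to the posture bands used in this article."""
    if total >= 80:
        return "Strong posture - focus on optimisation"
    if total >= 60:
        return "Moderate posture - address critical gaps within 90 days"
    if total >= 40:
        return "Weak posture - significant remediation required"
    return "Critical - immediate action required across multiple domains"
```

For example, an estate scoring 4 on ten controls and 3 on the other ten lands at 70 — moderate posture with a 90-day clock on its critical gaps.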

The 20-Point Checklist


1. Identity and Access Management

What we assess:

  • Is phishing-resistant MFA enforced for all users?
  • Are Conditional Access policies comprehensive and without dangerous exclusions?
  • Is PIM configured for all privileged roles?
  • Are there excessive Global Administrators?
  • Are break-glass accounts properly configured and monitored?

Red flags:

  • More than 2 permanent Global Administrators
  • Conditional Access policies with broad exclusion groups
  • No MFA enforcement for guest users
  • Legacy authentication protocols still permitted

Target: Score 4+
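The "more than 2 permanent Global Administrators" red flag is mechanical to check once you have role-assignment data. The sketch below works on plain records standing in for an Entra ID / PIM export; the field names (`principal`, `role`, `assignment_type`) are assumptions for illustration, not a real API shape.

```python
def permanent_global_admins(assignments: list[dict]) -> list[str]:
    """Return principals holding Global Administrator as a permanent
    (non-PIM-eligible) assignment."""
    return [
        a["principal"]
        for a in assignments
        if a["role"] == "Global Administrator"
        and a["assignment_type"] == "permanent"
    ]

def flag_excessive_admins(assignments: list[dict], limit: int = 2) -> bool:
    """True when the permanent Global Administrator count exceeds the limit."""
    return len(permanent_global_admins(assignments)) > limit
```

PIM-eligible assignments deliberately do not count: the point of PIM is that the standing privilege is zero until activation.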

2. Network Segmentation

What we assess:

  • Is network traffic segmented between tiers (web, application, data)?
  • Are NSGs or Azure Firewall controlling east-west traffic?
  • Is a hub-spoke or Virtual WAN topology enforced?
  • Are management ports (RDP, SSH) accessible from the internet?
  • Is DNS filtering implemented?

Red flags:

  • Flat network with all resources in a single subnet
  • NSGs allowing * inbound from *
  • RDP/SSH ports exposed to the internet (even with "just my IP")
  • No network-level segmentation between production and non-production

Target: Score 3+
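The "* inbound from *" red flag can be screened automatically. A minimal sketch, assuming NSG rules have been flattened into dicts (for example from an ARM/Bicep export or CLI output) with illustrative key names:

```python
def wildcard_inbound_rules(rules: list[dict]) -> list[str]:
    """Return names of inbound Allow rules open to any source on any port."""
    return [
        r["name"]
        for r in rules
        if r["direction"] == "Inbound"
        and r["access"] == "Allow"
        # Treat the common "anywhere" spellings as equivalent
        and r["source_address_prefix"] in ("*", "0.0.0.0/0", "Internet")
        and r["destination_port_range"] == "*"
    ]
```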

3. Data Encryption

What we assess:

  • Is encryption at rest enforced for all storage services?
  • Are customer-managed keys (CMK) used for sensitive data?
  • Is TLS 1.2+ enforced for all data in transit?
  • Are database connections encrypted?
  • Is Key Vault used for key management (not hardcoded keys)?

Red flags:

  • Storage accounts with infrastructure encryption disabled
  • Databases accessible over unencrypted connections
  • TLS 1.0 or 1.1 still permitted on any endpoint
  • Encryption keys stored in application configuration

Target: Score 4+
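The TLS floor is easy to audit from inventory data, since services such as Azure Storage expose a minimum-TLS setting (values like "TLS1_0", "TLS1_1", "TLS1_2"). The endpoint inventory below is a stand-in dict, not a real export format:

```python
# Order determines which versions fall below the floor.
TLS_ORDER = ["TLS1_0", "TLS1_1", "TLS1_2", "TLS1_3"]

def weak_tls_endpoints(endpoints: dict[str, str], floor: str = "TLS1_2") -> list[str]:
    """Return endpoints whose configured minimum TLS version is below the floor."""
    floor_idx = TLS_ORDER.index(floor)
    return [name for name, v in endpoints.items() if TLS_ORDER.index(v) < floor_idx]
```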

4. Key and Secret Management

What we assess:

  • Is Azure Key Vault used for all secrets, keys, and certificates?
  • Is access to Key Vault restricted by RBAC (not legacy access policies)?
  • Are secrets rotated on a defined schedule?
  • Is Key Vault soft-delete and purge protection enabled?
  • Are Key Vault access logs monitored?

Red flags:

  • Secrets stored in environment variables, app settings, or source code
  • Key Vault using legacy access policies instead of RBAC
  • No secret rotation policy
  • Purge protection disabled (allows permanent deletion)

Target: Score 4+
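A rotation policy only bites if something flags secrets that have outlived it. A sketch, modelling secret metadata as (name, created) tuples standing in for Key Vault secret properties, with an assumed 90-day window:

```python
from datetime import datetime, timedelta, timezone

def overdue_secrets(secrets, max_age_days=90, now=None):
    """Return names of secrets created more than max_age_days ago."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [name for name, created in secrets if created < cutoff]
```

In a real estate the creation dates would come from the vault itself, and "created" would ideally be "last rotated".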

5. Logging and Monitoring

What we assess:

  • Are diagnostic settings configured for all critical resources?
  • Is a central Log Analytics workspace receiving all security-relevant logs?
  • Are Activity Logs forwarded for all subscriptions?
  • Is log retention configured for compliance requirements (minimum 365 days)?
  • Are logs protected against tampering (immutable storage for archive)?

Red flags:

  • No centralised logging strategy
  • Critical resources without diagnostic settings
  • Log retention under 90 days
  • No protection against log deletion by compromised admin accounts

Target: Score 4+

6. Incident Response

What we assess:

  • Is there a documented incident response plan?
  • Has the plan been tested in the last 12 months?
  • Are automated detection and response playbooks configured?
  • Is there an external IR retainer in place?
  • Are DORA/NIS2 reporting timelines achievable with current tooling?

Red flags:

  • No documented IR plan
  • Plan exists but has never been tested
  • No SIEM deployed (no Sentinel or equivalent)
  • No external IR capability retained

Target: Score 3+

7. Backup and Disaster Recovery

What we assess:

  • Are backups configured for all critical systems?
  • Are backup vaults immutable?
  • Are restore tests conducted regularly (monthly for critical systems)?
  • Is cross-region replication configured for DR?
  • Is the RTO/RPO documented and validated through testing?

Red flags:

  • No immutable vault configuration
  • Backups never tested (no restore evidence)
  • Single-region deployment with no DR plan
  • Backup data in the same subscription as production (vulnerable to ransomware)

Target: Score 4+

8. Secrets Management in Pipelines

What we assess:

  • Are service connections using workload identity federation (not secrets)?
  • Are pipeline variables encrypted and scoped appropriately?
  • Are there any secrets committed to source control?
  • Is secret scanning enabled on all repositories?
  • Are service principal credentials rotated regularly?

Red flags:

  • Long-lived secrets in pipeline variables
  • Service principals with password credentials (not certificates or federated)
  • No secret scanning on repositories
  • Service principals with excessive permissions (Owner, Contributor at subscription level)

Target: Score 3+
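To make the secret-scanning point concrete, here is a toy pass over repository text. Real scanners (GitHub secret scanning, gitleaks, and similar) carry large provider-specific rule sets; these three patterns are illustrative only:

```python
import re

SECRET_PATTERNS = {
    "password assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]{8,}['\"]"),
    "connection string key": re.compile(r"(?i)accountkey=[A-Za-z0-9+/=]{20,}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in the given text."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(text)]
```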

9. API Security

What we assess:

  • Is Azure API Management or equivalent gateway in place?
  • Are APIs authenticated and authorised (not open)?
  • Is rate limiting configured?
  • Is input validation implemented?
  • Are APIs versioned with a deprecation strategy?

Red flags:

  • APIs directly exposed without a gateway
  • No authentication on internal APIs ("it's behind the firewall")
  • No rate limiting (vulnerable to abuse)
  • API keys as the sole authentication mechanism

Target: Score 3+

10. Container Security

What we assess:

  • Are container images scanned for vulnerabilities before deployment?
  • Is a private container registry used (not pulling from public Docker Hub)?
  • Are containers running as non-root?
  • Is Kubernetes RBAC properly configured?
  • Are network policies enforced in the cluster?

Red flags:

  • No image scanning in CI/CD pipeline
  • Images pulled directly from public registries in production
  • Containers running as root
  • Kubernetes cluster with no network policies (flat cluster network)
  • Default service account used by pods

Target: Score 3+
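The "running as root" red flag is often visible in the Dockerfile itself: with no USER instruction, the image runs as root. A rough static check (it ignores multi-stage subtleties and base-image USER settings, so treat it as a first pass):

```python
def runs_as_root(dockerfile_text: str) -> bool:
    """True if the image defined by this Dockerfile would run as root."""
    user = "root"  # Docker's default when no USER instruction is present
    for line in dockerfile_text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("USER "):
            # The last USER instruction wins
            user = stripped.split(None, 1)[1].strip()
    return user in ("root", "0")
```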

11. Supply Chain Security

What we assess:

  • Are SBOMs generated for all artifacts?
  • Is dependency scanning configured in all pipelines?
  • Are artifacts signed?
  • Is a private package feed used with upstream source management?
  • Are dependency updates tracked and applied within SLA?

Red flags:

  • No dependency scanning
  • Direct consumption of public NuGet/npm feeds without curation
  • No SBOM generation
  • Vulnerable dependencies in production with no remediation timeline

Target: Score 3+

12. Compliance and Governance

What we assess:

  • Is Azure Policy used for guardrails across the estate?
  • Are regulatory compliance requirements mapped to technical controls?
  • Is compliance continuously monitored (not just point-in-time)?
  • Are management groups and subscriptions structured with proper hierarchy?
  • Is tagging enforced for cost and ownership accountability?

Red flags:

  • No Azure Policy assigned
  • Compliance assessed only during annual audits
  • Flat subscription structure with no management group hierarchy
  • No tagging strategy

Target: Score 3+

13. Data Classification

What we assess:

  • Is a data classification scheme defined and applied?
  • Are sensitivity labels configured in Microsoft Purview?
  • Is DLP configured for sensitive data types?
  • Are data handling rules enforced technically (not just by policy)?
  • Is data discovery/scanning conducted for unclassified data stores?

Red flags:

  • No data classification scheme
  • Classification exists on paper but no technical enforcement
  • Sensitive data in unprotected storage accounts
  • No DLP rules for financial, health, or personal data

Target: Score 3+

14. Endpoint Protection

What we assess:

  • Is EDR deployed on all endpoints (workstations and servers)?
  • Is EDR integrated with the SIEM for correlation?
  • Is automated investigation and response enabled?
  • Are device compliance policies enforced through Conditional Access?
  • Is application control configured for high-risk environments?

Red flags:

  • EDR coverage below 95% of managed devices
  • EDR alerts not integrated with SIEM
  • No device compliance policies
  • Automated remediation disabled

Target: Score 4+

15. Email Security

What we assess:

  • Is Defender for Office 365 (or equivalent) deployed?
  • Is DMARC configured at p=reject for the primary domain?
  • Are anti-phishing policies configured with impersonation protection?
  • Is user reporting of suspicious emails enabled and monitored?
  • Are attack simulation exercises conducted regularly?

Red flags:

  • No advanced email filtering (relying on basic Exchange Online Protection only)
  • DMARC at p=none or not configured
  • No phishing simulation programme
  • No Safe Links/Safe Attachments

Target: Score 4+

16. WAF and DDoS Protection

What we assess:

  • Is Azure WAF or equivalent deployed in front of all public-facing applications?
  • Are WAF rules in prevention mode (not just detection)?
  • Is DDoS Protection Standard enabled on public-facing VNets?
  • Are WAF logs monitored and tuned regularly?
  • Is bot management configured?

Red flags:

  • Public-facing applications without WAF
  • WAF in detection-only mode for extended periods
  • No DDoS protection (relying only on Azure's basic infrastructure protection)
  • WAF with default rules and no application-specific tuning

Target: Score 3+

17. Vulnerability Management

What we assess:

  • Is continuous vulnerability scanning deployed (infrastructure and applications)?
  • Are vulnerabilities tracked with remediation SLAs?
  • Is there a defined process for critical/zero-day vulnerabilities?
  • Are vulnerability trends tracked and reported to management?
  • Is scanning coverage validated (are all assets being scanned)?

Red flags:

  • No vulnerability scanning programme
  • Scan results not tracked or actioned
  • No SLA for remediation
  • Scanning coverage below 90%

Target: Score 4+

18. Security Awareness Training

What we assess:

  • Is security awareness training mandatory for all employees?
  • Is training refreshed at least annually?
  • Are phishing simulations conducted regularly (monthly)?
  • Are high-risk users identified and given additional training?
  • Is training effectiveness measured (not just completion)?

Red flags:

  • No mandatory security training
  • Training is a one-time onboarding event with no refresh
  • No phishing simulations
  • No measurement of training impact

Target: Score 3+

19. Third-Party Risk Management

What we assess:

  • Is there a register of all third-party ICT providers?
  • Are third parties assessed for security before onboarding?
  • Are contractual security requirements included in agreements?
  • Are critical third parties monitored continuously?
  • Are exit strategies documented for critical providers?

Red flags:

  • No third-party inventory
  • No security assessment before engaging new providers
  • No contractual security requirements
  • Concentration risk on a single provider with no exit plan

Target: Score 3+

20. Security Architecture Documentation

What we assess:

  • Are architecture diagrams current, and do they include security controls?
  • Are data flow diagrams documented for critical applications?
  • Are trust boundaries clearly defined?
  • Is the shared responsibility model documented for each cloud service?
  • Are architecture decisions recorded (ADRs) with security rationale?

Red flags:

  • No architecture documentation
  • Diagrams exist but are outdated (more than 6 months old)
  • No data flow diagrams for applications handling sensitive data
  • No clear definition of trust boundaries

Target: Score 3+

Running the Assessment


Phase 1: Automated Discovery (Days 1-3)

Deploy automated scanning to gather factual data:

  • Defender for Cloud secure score and recommendations
  • Azure Policy compliance state
  • Azure Resource Graph queries for configuration analysis
  • Sentinel workbook for security monitoring coverage
  • Entra ID reports for identity security state

Phase 2: Manual Validation (Days 4-8)

Validate automated findings and assess controls that require human judgment:

  • Interview security, platform, and application teams
  • Review documentation (IR plan, policies, architecture diagrams)
  • Test specific controls (attempt to bypass Conditional Access, test backup restore)
  • Assess process maturity beyond tool deployment

Phase 3: Scoring and Reporting (Days 9-12)

  • Score each control 0-5 with evidence and rationale
  • Calculate aggregate score
  • Identify top 5 highest-impact remediation items
  • Provide remediation guidance with effort estimates
  • Present to stakeholders with executive summary and detailed findings

Using the Results

The review output should drive a prioritised remediation roadmap:

  1. Critical gaps (Score 0-1): Address within 30 days
  2. Significant gaps (Score 2): Address within 90 days
  3. Improvement opportunities (Score 3): Plan for next quarter
  4. Optimisation (Score 4): Continuous improvement backlog
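The score-to-window mapping above can be applied mechanically to produce the roadmap. Control names and scores in this sketch are illustrative:

```python
def remediation_window(score: int) -> str:
    """Map a 0-5 control score to the remediation priority used above."""
    if score <= 1:
        return "critical: address within 30 days"
    if score == 2:
        return "significant: address within 90 days"
    if score == 3:
        return "improvement: plan for next quarter"
    return "optimisation: continuous improvement backlog"

def build_roadmap(scores: dict[str, int]) -> dict[str, list[str]]:
    """Group controls by remediation window, worst-scoring first."""
    roadmap: dict[str, list[str]] = {}
    for control, score in sorted(scores.items(), key=lambda kv: kv[1]):
        roadmap.setdefault(remediation_window(score), []).append(control)
    return roadmap
```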

Re-assess the previously identified gaps quarterly to verify remediation effectiveness and track maturity improvement over time.

Conclusion

A security architecture review is not an audit to survive — it is a diagnostic to improve. The 20 controls in this checklist represent the baseline that every enterprise Azure environment should achieve. They align with DORA, NIS2, ISO 27001, and the requirements of major cyber insurers.

If you want us to run this assessment on your environment — or if you want to build internal capability to run it yourselves — contact us at mbrahim@conceptualise.de. We deliver honest findings with actionable remediation guidance, not a 200-page report that gathers dust.

Topics

security architecture review · cloud security audit · security controls checklist · Azure security assessment · enterprise security posture

Frequently Asked Questions

How long does a security architecture review take?

A thorough 20-point review typically takes 2-3 weeks for a medium-complexity Azure environment. This includes discovery, technical assessment, stakeholder interviews, and report generation. The initial assessment can be accelerated with automation, but validating findings and understanding context requires human expertise.

