Hi everyone,
Disclosure: I own the project linked below. I’m sharing it because I’m working on the technical side of NIS2 evidence collection, not to pitch services or solicit DMs.
Project context:
https://www.softwareapp-hb.de/projekte.html
The security engineering problem I’m looking at is this:
NIS2 Article 21 requires organizations to address areas like risk management, incident handling, business continuity, supply-chain security, vulnerability handling, access control, asset management, MFA, secure communications, and cyber hygiene. In practice, a lot of “evidence” for these areas still ends up as screenshots, policy PDFs, manual exports, spreadsheets, or consultant-maintained checklists.
That may satisfy some audit workflows, but from a security-operations perspective it has obvious weaknesses: evidence goes stale, checks are difficult to reproduce, and there is often a gap between what the policy says and what the infrastructure actually looks like.
I’m building an open-source, self-hostable platform that tries to map NIS2 requirements to concrete technical checks and produce traceable evidence from actual system state. It is explicitly not meant to replace GRC platforms, legal review, auditors, or an ISMS. The goal is narrower: making certain parts of the evidence layer more repeatable, technical, and defensible.
Examples of evidence areas where this might be useful:
- asset inventory and system classification
- patch/vulnerability state
- account and privilege configuration
- MFA and authentication posture
- backup existence and test evidence
- logging and monitoring configuration
- firewall and network exposure checks
- incident-response process evidence
- technical control mappings to NIS2 Article 21
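To make "traceable evidence from actual system state" concrete, here is a minimal sketch of what one such check could look like. Everything here is illustrative: the check IDs, the field names, the Article 21 mapping, and the MFA example are assumptions about a possible schema, not the project's actual design.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_evidence(check_id: str, control: str, observed: dict, passed: bool) -> dict:
    """Wrap an observation in a traceable evidence record (illustrative schema)."""
    payload = {
        "check_id": check_id,
        "control": control,                      # hypothetical Art. 21 mapping
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "observed": observed,                    # raw state, kept for auditors
        "passed": passed,
    }
    # Hash the record so later tampering is detectable.
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {**payload, "sha256": digest}

def check_mfa_posture(accounts: list[dict]) -> dict:
    """Example check: every privileged account should have MFA enabled."""
    violations = [a["name"] for a in accounts
                  if a.get("privileged") and not a.get("mfa_enabled")]
    return make_evidence(
        check_id="mfa-privileged-accounts",      # illustrative ID
        control="NIS2 Art. 21(2)(j)",            # mapping is an assumption
        observed={"violations": violations, "total": len(accounts)},
        passed=not violations,
    )

accounts = [
    {"name": "root", "privileged": True, "mfa_enabled": True},
    {"name": "ops-admin", "privileged": True, "mfa_enabled": False},
]
print(json.dumps(check_mfa_posture(accounts), indent=2))
```

The point of the shape, rather than the specific fields, is that the record captures when the state was observed, which control it maps to, and the raw data behind the pass/fail verdict, so an auditor can reproduce the check instead of trusting a screenshot.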
The hard question is where automation helps and where it becomes misleading.
For example, a system can verify that logging is enabled, but not necessarily that logs are reviewed effectively. A tool can collect patch state, but not decide whether risk acceptance was appropriate. It can validate backup configuration, but not prove that recovery objectives are realistic unless restore tests are captured properly.
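One way to keep that boundary honest in the tooling itself is to have each check declare what it actually measured and what still needs human assessment, so automated evidence is never presented as covering the whole control. A minimal sketch, with entirely hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    machine_verified: dict          # what the tool actually measured
    requires_human: list[str]       # what remains a human/auditor judgment

def check_logging(config: dict) -> CheckResult:
    """Verify measurable logging state; flag the parts automation can't judge."""
    return CheckResult(
        machine_verified={
            "logging_enabled": config.get("logging_enabled", False),
            "forwarding_target": config.get("syslog_target"),
        },
        requires_human=[
            "Are the forwarded logs actually reviewed, and by whom?",
            "Do retention periods match the documented policy?",
        ],
    )

result = check_logging({"logging_enabled": True, "syslog_target": "siem.internal"})
print(result.machine_verified)   # measurable state
print(result.requires_human)     # explicit gap, not implied coverage
```

Whether splitting results this way would actually prevent false confidence in practice is exactly the kind of feedback I'm hoping for.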
For people working in security engineering, SOC, vulnerability management, infrastructure, audit support, or compliance operations:
Where do you think technical automation genuinely improves NIS2 evidence quality?
And where do you think compliance automation creates false confidence?
I’m especially interested in the boundary between measurable technical state and areas that still require human assessment, process maturity, or auditor judgment.