In Post 1 I performed a NIST SP 800-30 risk assessment for a fictional single-site Canadian clinic using OpenMRS. In Post 2 I turned those risks into governance deliverables – policies, a controls library, and a 90-day roadmap. This post closes the full cycle by covering monitoring, residual risk review, basic reporting, and light compliance activities.
Assumption Recap
One on-premise server with co-located PostgreSQL database in a locked room
5 workstations on a local network with basic firewall; single clinic site; no mobile devices
Primary regulatory driver: PHIPA/PIPEDA (patient data confidentiality, integrity, availability)
Visual Risk Register
Below is a simple visual summary of the current state after treatment:
| Risk | Original Level | Residual Level | Status |
| --- | --- | --- | --- |
| SQL injection attacks could exploit insufficient input validation, leading to unauthorized extraction or modification of patient records | Critical | High | In Progress |
| Misconfigured access controls could be exploited by unauthorized users, leading to improper viewing or tampering with sensitive patient data | High | Medium | On Track |
| Cross-site request forgery attacks could exploit missing anti-CSRF tokens, leading to unauthorized changes to patient information | Medium | Low | Planned |
Monitoring and Residual Risk Review
Ongoing monitoring ensures controls remain effective and risk does not drift. For this clinic I would implement the following practical activities:
Quarterly Access Reviews: The clinic manager exports the list of OpenMRS user accounts and permissions from the database, cross-checks them against the latest staff roster and role descriptions, removes or downgrades any unnecessary accounts, and documents the review (including date, reviewer, and changes made) for audit evidence.
Log Analysis for Suspicious Activity: Review PostgreSQL audit logs and web server logs weekly using simple commands (e.g., grep for failed login attempts or unusual data export volumes). Flag anomalies such as repeated failed logins from the same workstation or large data queries outside business hours, then investigate and document findings.
Annual Policy Reviews: Review all policies once per year (or after any major system change) to confirm they still reflect current operations, staff roles and PHIPA requirements. Any identified gaps trigger formal updates, staff retraining, and version control tracking.
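The weekly log review above can be partially scripted. A minimal sketch of flagging repeated failed logins, assuming a generic web-server access log format (the log line layout and the threshold of five failures are assumptions, not OpenMRS specifics):

```python
import re
from collections import Counter

# Assumed log line format: '<ip> - [timestamp] "POST /openmrs/login" 401'
# A 401 status at the end of the line is treated as a failed login attempt.
FAILED_LOGIN = re.compile(r'^(\d+\.\d+\.\d+\.\d+) .* 401\b')

def flag_repeated_failures(log_lines, threshold=5):
    """Return source IPs whose failed-login count meets the threshold."""
    failures = Counter()
    for line in log_lines:
        m = FAILED_LOGIN.match(line)
        if m:
            failures[m.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n >= threshold}
```

Flagged IPs would then be traced back to a workstation and investigated per the monitoring procedure above.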
Residual risk is now lower across all three risks. This level aligns with a small clinic’s risk appetite – accepting limited residual risk due to resource constraints while keeping patient data reasonably protected.
Light Compliance Checklist
To verify PHIPA compliance in practice, I would use a simple evidence-based checklist:
Access Logs Reviewed Monthly: Pull login and data access logs from OpenMRS and server, then compare them against expected user activity (normal working hours, authorized roles). Flag and investigate any unusual access, documenting the review date, findings, and actions taken.
Encryption Settings Verified: Confirm TLS 1.3 is active on the web interface (e.g., with OpenSSL's s_client from a workstation, since the internal-only deployment is not reachable by online scanners such as SSL Labs), and verify at-rest protection of the PostgreSQL data. Note that PostgreSQL has no built-in transparent data encryption, so at-rest encryption is typically provided at the disk or filesystem level (e.g., LUKS) and should be verified there.
Incident Response Tested Annually: Conduct a tabletop exercise with clinic staff using a simulated breach scenario (e.g., suspected SQL injection). Walk through notification, containment, PHIPA reporting, and recovery steps, then document lessons learned and update the Incident Response Policy accordingly.
These checks provide basic evidence that controls are operating as intended.
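For the TLS item on the checklist, the negotiated protocol version can also be verified from a workstation with Python's standard library. A sketch under stated assumptions (the host name is a placeholder; the live connection is shown commented out):

```python
import ssl
import socket

MIN_ACCEPTED = "TLSv1.3"  # per the Data Protection & Encryption Policy

def meets_tls_policy(negotiated_version: str) -> bool:
    """True only if the negotiated protocol satisfies the TLS 1.3 mandate."""
    return negotiated_version == MIN_ACCEPTED

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to the OpenMRS web interface and report the TLS version."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# Example, run inside the clinic network (host name is hypothetical):
# print(meets_tls_policy(negotiated_tls_version("openmrs.clinic.local")))
```

The pass/fail result, with date and reviewer, becomes the documented evidence for this checklist item.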
Conclusion and Lessons Learned
Completing the full cycle from risk assessment to governance, treatment, and monitoring showed how a small clinic can build practical GRC from the ground up. The biggest takeaway is the importance of clear risk statements, assigned owners, and ongoing verification – not just writing policies, but making sure they actually work.
Full versions of the Access Control Policy and Incident Response Policy will be available for download in the portfolio resources section once completed.
In my previous post, I completed a NIST SP 800-30 risk assessment for a fictional single-site Canadian clinic running the OpenMRS electronic medical record system. I identified three key risks:
SQL Injection attacks that could allow unauthorized extraction or modification of patient records,
Misconfigured access controls that could lead to unauthorized viewing or tampering with sensitive data,
Cross-site request forgery attacks that could enable unauthorized changes to patient information.
This post turns these risk findings into practical governance deliverables – targeted security policies, a simple controls library, a 90-day implementation roadmap, and updated residual risk estimates. The goal is to show how risk assessment flows directly into actionable governance and treatment in a small healthcare environment.
Assumptions
One on-premise server with co-located PostgreSQL database in a locked room
5 workstations on a local network with basic firewall; single clinic site; no mobile devices
Primary regulatory driver: PHIPA/PIPEDA, with emphasis on protecting patient data confidentiality, integrity, and availability for uninterrupted care
My approach – Turning Risk Findings into Governance
Building on the NIST SP 800-30 risk assessment from Post 1, I follow a standard six-step GRC cycle to convert the identified risks into practical governance outputs. The cycle includes defining business/regulatory requirements, understanding system architecture (covered in the Assumptions section), risk identification (completed in Post 1), selecting and implementing controls and policies, documentation, and ongoing monitoring/residual risk review. In this post I focus on the governance and risk treatment steps – creating targeted policies, a controls library, an implementation roadmap, and residual risk estimates.
Policy Development
Policies serve as both governance documents and controls. I drafted four concise policies tailored to the assessed risks and the clinic’s small-scale operations.
Acceptable Use Policy (excerpt)
Define permissible use of OpenMRS workstations and data. Key rule: staff may only access patient records required for their role; personal use of clinic systems is prohibited.
Access Control Policy (addresses misconfigured access control risk)
Enforce role-based access control (RBAC) with least privilege.
Require quarterly access reviews by the clinic manager.
Mandate unique accounts; shared logins are not permitted.
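The least-privilege rule in the Access Control Policy can be made concrete with a small role-to-privilege map. A minimal sketch; the role and privilege names below are hypothetical, not OpenMRS's actual privilege identifiers:

```python
# Hypothetical role -> privilege mapping illustrating least privilege.
ROLE_PRIVILEGES = {
    "clinician": {"view_patient", "edit_patient"},
    "front_desk": {"view_demographics", "schedule_appointment"},
    "clinic_manager": {"view_patient", "manage_users", "run_reports"},
}

def is_authorized(role: str, privilege: str) -> bool:
    """Deny by default: unknown roles and unlisted privileges are refused."""
    return privilege in ROLE_PRIVILEGES.get(role, set())
```

The deny-by-default shape is the point: a quarterly access review then only has to confirm that each account's role still matches the staff roster.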
Incident Response Policy (covers all three risks)
Require notification to clinic leadership and PHIPA reporting within 24 hours of a suspected breach
Define roles for investigation, containment, and post-incident review
Data Protection & Encryption Policy
Mandate TLS 1.3 for all OpenMRS communications and at-rest encryption for the PostgreSQL database
Controls Implementation Roadmap
A 90-day prioritized plan with assigned owners, estimated effort, and expected residual-risk reduction.
Controls Library Snippets
| Risk | Control Objective | Owner | Timeline | Expected Residual Risk |
| --- | --- | --- | --- | --- |
| SQL injection attacks could exploit insufficient input validation, leading to unauthorized extraction or modification of patient records | Parameterized queries + input validation | IT Lead | Days 1–30 | Critical → Low |
| Misconfigured access controls could be exploited by unauthorized users, leading to improper viewing or tampering with sensitive patient data | RBAC + quarterly reviews | Clinic Manager | Days 31–60 | High → Low |
| Cross-site request forgery attacks could exploit missing anti-CSRF tokens, leading to unauthorized changes to patient information | Anti-CSRF tokens + session management | Developer (contract) | Days 61–90 | Medium → Low |
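The first control objective, parameterized queries, can be illustrated with Python's built-in sqlite3 driver (PostgreSQL drivers such as psycopg2 use the same pattern with %s placeholders). The one-table schema is a stand-in for illustration, not the real OpenMRS schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO patient (name) VALUES ('Alice')")

def find_patient(name: str):
    # UNSAFE (what this control removes):
    #   conn.execute(f"SELECT id FROM patient WHERE name = '{name}'")
    # SAFE: the driver sends the value separately from the SQL text,
    # so input like "x' OR '1'='1" is treated as data, not as code.
    return conn.execute(
        "SELECT id FROM patient WHERE name = ?", (name,)
    ).fetchall()
```

With the placeholder form, a classic injection payload simply matches no rows instead of rewriting the query.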
Business and Regulatory Alignment
These policies and controls directly support the clinic’s core obligations under PHIPA and PIPEDA by establishing clear rules for data protection, access management, and incident handling. For example, the Access Control Policy and associated RBAC controls help satisfy requirements to limit access to personal health information to only those who need it for care and administrative purposes. The Data Protection & Encryption Policy addresses secure transmission and storage obligations, while the Incident Response Policy ensures timely breach notification and documentation as required by PHIPA.
In a real engagement, this controls library would be used to demonstrate compliance readiness and to identify any remaining gaps during internal reviews or external audits.
Conclusion & Next Steps
Completing the full cycle of risk assessment to governance deliverables for the OpenMRS clinic reinforced how identified risks can be systematically translated into practical policies and controls. In Post 3, I will build a visual risk register, simulate basic monitoring and reporting, and review residual risk in the context of the clinic’s objectives.
Full versions of the Access Control policy and the Incident Response Policy will be available for download in the portfolio resources section once completed.
This risk register simulates a GRC assessment for a small clinic that uses OpenMRS, an open-source EMR system. OpenMRS manages patient demographics, medical history, lab results, and other clinical records, and is deployed on-premise here for enhanced control and data sovereignty, minimizing third-party risks. This setup supports a practical cybersecurity risk evaluation. The primary objectives are protecting patient privacy (PHIPA/PIPEDA compliance) and maintaining operational efficiency (uptime that protects revenue).
Risk Context
The 2025 Verizon DBIR notes that medical data, such as the records OpenMRS holds, is a top breach target, with system intrusion as the leading cause. PHIPA imposes fines of up to $1M for organizations, underscoring breach severity. This context motivates the simulated risk assessment.
My Approach to Risk Assessment
This is my general risk assessment approach, drawn from various frameworks, designed to adapt and evolve. I’ll enhance it with ISO and SOC comparisons in future posts [links TBD], reflecting my learning journey.
Assumptions
Assumptions reflect a typical small Canadian clinic in a suburban/rural setting (e.g., 15,000 population). Larger urban clinics (Toronto) may have more hybrid/cloud elements, which can be explored in future assessments.
Server and Database: One on-premise server housed in a locked server room, with the PostgreSQL database running on the same physical computer to simulate a basic clinic setup.
Workstations: 5 desktop workstations used by staff for data entry and administration, all located within the same clinic building.
Mobile Devices: None, to simplify the scope for this initial assessment.
Site: A single clinic site, with all assets on-premises and no multi-town or distributed locations.
Network: A local network connecting the server and workstations, with basic security (e.g., firewall, no public internet exposure for now).
Usage: The system supports patient record management, with the web interface as the primary access point.
Step 1 – Identify Assets
I began this GRC assessment by identifying assets through a systematic review of risk exposure. I analyzed OpenMRS’s Implementer Documentation, PostgreSQL setup, and Security resources to pinpoint the PostgreSQL database for patient data and the server as critical components. This process establishes the foundation for Step 2’s threat analysis.
Step 2 – Identify Threats
Next, I identified threats based on the assets using my approach. I assessed the PostgreSQL database’s data value and the server’s hosting role by reviewing Security documentation, leading to threats like system intrusion and unauthorized access. These, tailored to my system’s exposure, will inform Step 3’s likelihood assessment.
Step 3 – Determine Overall Likelihood of Threat Events
Step 3 follows NIST SP 800-30’s decomposition of overall likelihood into two component likelihoods, which are then combined:
(1) Likelihood of threat event initiation/occurrence (Tables G-2 and G-3) – how likely is an adversary to attempt the attack? (mapped to Substeps 1–2 below)
(2) Likelihood that the threat event, once initiated, results in adverse impacts (Table G-4) – how likely is success and harm given our environment? (mapped to Substeps 3–6)
We use a 1–5 qualitative scale (1 = Very Low, 2 = Low, 3 = Medium, 4 = High, 5 = Critical). Likelihood is estimated over the next 12 months.
Substep 1 – Define Threat-Vulnerability Context
This assessment focuses on adversarial threats (per CAPEC); non-adversarial threats (e.g., hardware failure, accidental misconfiguration by staff, or natural disaster) will be explicitly included in future expansions to demonstrate the full NIST SP 800-30 scope.
Threat: [CAPEC-180] Exploiting Incorrectly Configured Access Control Security Levels; Vulnerability: [CWE-732] Incorrect Permission Assignment for Critical Resource; Risk: Unauthorized Access to Patient Data Due to Misconfigured Server Permissions
Threat: [CAPEC-108] Command Line Execution through SQL Injection; Vulnerability: [CWE-89] Improper Neutralization of Special Elements used in an SQL Command (‘SQL Injection’); Risk: Data Breach via SQL Injection into the OpenMRS Database
Threat: [CAPEC-62] Cross Site Request Forgery; Vulnerability: [CWE-352] Cross-Site Request Forgery; Risk: Unauthorized Data Modification via CSRF Attack on the OpenMRS Web Interface
Substep 2 – Likelihood of Initiation/Occurrence
[CAPEC-180] (Access Control Misconfiguration, CWE-732): High (4/5) – DBIR 2025: Privilege Misuse ~12% of healthcare breaches; CAPEC-180 likelihood rated High.
[CAPEC-108] (SQL Injection, CWE-89): High (4/5) – DBIR 2025: System Intrusion ~53% of healthcare breaches; CAPEC-108 adjusted upward for high prevalence.
[CAPEC-62] (CSRF, CWE-352): Medium (3/5) – DBIR 2025: Social Engineering ~6% of healthcare breaches; CAPEC-62 likelihood moderated by low prevalence.
Substep 3 – Assess Vulnerability Severity (Adjusted for Context)
[CWE-732]: High (4/5) – CVE-2021-42115 (CVSS 8.1). Base severity remains high but slightly reduced by internal-only environment.
[CWE-89]: Critical (5/5) – CVE-2021-43094 (CVSS 9.8, #3 in CWE Top 25). AV:N base score adjusted to effective AV:A due to internal network only.
[CWE-352]: Medium (3/5) – CVE-2015-5215 (CVSS 6.1). Moderate severity unchanged.
Substep 4 – Assess Existing Control Effectiveness
Based on basic security assumptions:
Ineffective (4/5) for CWE-732 (default roles, no granular RBAC);
Ineffective (4/5) for CWE-89 (no parameterized queries, no WAF);
Neutral (3/5) for CWE-352 (basic sessions, no anti-CSRF tokens).
Substep 5 – Assess Pervasiveness of Predisposing Conditions
CWE-732 (Incorrect Permission Assignment): High pervasiveness (4/5). Wiki-documented admin endpoints and default roles on the 5-workstation internal network create a large attack surface for privilege probes (Technical – Functional: networked multiuser; Operational – Size of population per NIST Table F-4).
CWE-89 (SQL Injection): High pervasiveness (4/5). REST API endpoints (e.g., /ws/rest/v1/obs, /patient/search) are listed in the OpenMRS wiki and accessible from any of the 5 workstations, amplifying the chance that an internal probe discovers and exploits the lack of input filtering (Technical – Functional: networked multiuser + Technical – Security: common controls).
CWE-352 (CSRF): Medium pervasiveness (3/5). State-changing forms are documented, but session-based privilege gating and the internal-only setup limit the number of viable attack paths (Operational – Size of population and limited external vectors).
Overall, these predisposing conditions make successful adverse impacts more likely once an attack is attempted.
Substep 6 – Assess Exploit Availability
CWE-732 (Incorrect Permission Assignment): Medium (3/5). Fewer turnkey exploits exist; the attack requires internal access and manual privilege probing rather than automated tools.
CWE-89 (SQL Injection): High (4/5). Public CVEs (e.g., 2021-43094) and ready-to-use PoCs are widely available on Exploit-DB and NVD; automated SQLi tools can be used once an internal probe reaches the database endpoints.
CWE-352 (CSRF): Medium (3/5). Fewer weaponized exploits; the attack depends on social engineering and session riding rather than direct code execution or public toolkits.
Substep 7 – Determine Overall Likelihood
We follow NIST SP 800-30’s final step (Table G-5) and combine all previous substep scores using simple averaging on the 1–5 scale. The calculation is:
Take the score from each relevant substep (2 through 6).
Average them to produce the overall likelihood for that threat-vulnerability pair.
Map the average back to the 1–5 qualitative scale.
All likelihood scores are assessed over the next 12 months (NIST p. 10).
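The averaging steps above can be sketched as follows. The substep scores in the example are the ones assigned above for SQL injection; rounding the average to the nearest whole level is an assumption about how the mapping back to the scale is done:

```python
def overall_likelihood(scores):
    """Average substep scores (Substeps 2-6) and map back to the 1-5 scale."""
    avg = sum(scores) / len(scores)
    labels = {1: "Very Low", 2: "Low", 3: "Medium", 4: "High", 5: "Critical"}
    return labels[round(avg)]

# CWE-89 (SQL injection): initiation 4, severity 5, control ineffectiveness 4,
# pervasiveness 4, exploit availability 4 -> average 4.2 -> High
sqli_likelihood = overall_likelihood([4, 5, 4, 4, 4])
```

Keeping the unrounded average available is also useful when the final risk score sits near a band boundary.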
The next phase (Step 4 – Determine Impact and Recommend Treatment) will calculate final risk levels and propose cost-effective controls. This portfolio piece showcases my ability to translate technical vulnerabilities into business-relevant risk decisions — a core mid-level GRC skill.
Step 4 – Determine Impact and Recommend Treatment
With likelihood now established, we assess the potential impact (magnitude of harm) if the threat event succeeds. NIST SP 800-30 uses a 1–5 qualitative scale for impact (1 = Negligible, 2 = Low, 3 = Medium, 4 = High, 5 = Critical), considering effects on confidentiality, integrity, availability, regulatory compliance (PHIPA), patient safety, and financial/operational consequences for a small Canadian clinic.
Impact scores are tailored to the clinic’s context: loss of patient data could trigger fines up to $1M, loss of trust, and operational downtime estimated at $15,000–$25,000 per incident.
Impact Assessment (per Threat-Vulnerability Pair)
Impact is evaluated qualitatively across the CIA triad (Confidentiality, Integrity, Availability) on a 1–5 scale, considering PHIPA compliance, patient safety, reputation, and operational consequences for a small Canadian clinic. The overall impact score is the highest of the three CIA components (conservative approach commonly used in healthcare GRC).
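The conservative rule described above reduces to taking the maximum of the three CIA scores; a minimal sketch:

```python
IMPACT_LABELS = {1: "Negligible", 2: "Low", 3: "Medium", 4: "High", 5: "Critical"}

def overall_impact(confidentiality: int, integrity: int, availability: int) -> str:
    """Conservative rule: overall impact is the worst of the three CIA scores."""
    return IMPACT_LABELS[max(confidentiality, integrity, availability)]

# CAPEC-108 / CWE-89 (SQL injection): C=5, I=5, A=3 -> Critical
sqli_impact = overall_impact(5, 5, 3)
```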
CAPEC-180 / CWE-732 (Misconfigured Permissions)
Overall impact: High (4/5)
Confidentiality: High (4/5) – Unauthorized access to patient records
Integrity: Medium (3/5) – Possible but limited data tampering
Availability: Low (2/5) – Minimal disruption to care
CAPEC-108 / CWE-89 (SQL Injection)
Overall impact: Critical (5/5)
Confidentiality: Critical (5/5) – Full database breach and potential exfiltration of all patient records
Integrity: Critical (5/5) – Data could be altered or deleted
Availability: Medium (3/5) – Possible temporary database outage
CAPEC-62 / CWE-352 (CSRF)
Overall impact: Medium (3/5)
Confidentiality: Low (2/5) – No direct data exposure
Integrity: Medium (3/5) – Unauthorized modification of individual records possible
Availability: Low (2/5) – Negligible effect on system uptime
Overall Risk Level (Likelihood × Impact)
We multiply the overall likelihood (from Step 3) by the impact score to produce a final risk level (1–25 scale, mapped to Low/Medium/High/Critical).
Risk levels are mapped using the standard 5×5 matrix commonly used in healthcare GRC: 1–5 = Low, 6–10 = Medium, 11–15 = High, 16–25 = Critical.
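The matrix mapping can be sketched directly from the band boundaries stated above (likelihood and impact are the rounded 1-5 scores):

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Map likelihood x impact (1-25) onto the clinic's 5x5 risk bands."""
    score = likelihood * impact
    if score <= 5:
        return "Low"
    if score <= 10:
        return "Medium"
    if score <= 15:
        return "High"
    return "Critical"

# SQL injection: likelihood High (4) x impact Critical (5) = 20 -> Critical
sqli_risk = risk_level(4, 5)
```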
We recommend the following prioritized treatments, balancing cost, effort, and business value for a small clinic:
| Risk | Risk Level | Recommended Treatment | Rationale & Estimated Effort |
| --- | --- | --- | --- |
| CAPEC-108 / CWE-89 (SQL Injection) | Critical | Mitigate (immediate priority) | Implement parameterized queries and input validation across all REST endpoints. Add a lightweight WAF if budget allows. Effort: 2–4 weeks developer time. High ROI – prevents largest breach risk. |
| CAPEC-180 / CWE-732 (Misconfigured Permissions) | High | Mitigate | Conduct full RBAC review and implement role-based access controls with quarterly audits. Effort: 1 week + ongoing 2 hours/quarter. Essential for PHIPA compliance. |
| CAPEC-62 / CWE-352 (CSRF) | Medium | Mitigate (low effort) | Add anti-CSRF tokens to all state-changing forms. Effort: 1–2 days developer time. Quick win with minimal cost. |
All treatments align with the clinic’s limited resources: focus first on high-impact, low-complexity fixes that protect patient data and maintain care continuity.
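The recommended CSRF mitigation follows the synchronizer-token pattern: issue an unpredictable per-session token, embed it in every state-changing form, and reject submissions whose token does not match. A framework-agnostic sketch using only the standard library (session handling is simplified to a dict):

```python
import hmac
import secrets

# Simplified session store: session_id -> current CSRF token.
sessions = {}

def issue_csrf_token(session_id: str) -> str:
    """Generate an unpredictable token and bind it to the session.
    The server embeds this token in each state-changing form it renders."""
    token = secrets.token_urlsafe(32)
    sessions[session_id] = token
    return token

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """Reject the request unless the submitted token matches the session's.
    compare_digest performs a constant-time comparison."""
    expected = sessions.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted)
```

A forged cross-site request cannot read the victim's token, so its submission fails verification even though the browser attaches the session cookie.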
Business Alignment and Risk Prioritization
In a small Canadian community clinic, the top business objectives are uninterrupted patient care and full PHIPA/PIPEDA compliance. A breach could result in fines up to $1M, loss of patient trust, and operational downtime estimated at $15,000–$25,000 per incident. Therefore, risks threatening confidentiality and integrity of patient records (CWE-89 and CWE-732) are prioritized over lower-impact issues.
Conclusion
This risk assessment demonstrates a complete NIST SP 800-30-aligned process applied to a realistic small Canadian clinic using OpenMRS. By breaking down likelihood into threat initiation, adjusted vulnerability severity (including controls, exposure, and exploit maturity), and pervasiveness of predisposing conditions, we produce clear, defensible scores tied directly to PHIPA compliance and patient-care priorities.