Document history

Date        Summary
2003-01-14  Created: GO-ITS 25.0 draft v0.1
2008-01-07  Draft version number changed to 1.1
2008-02-19  Aligned with ISO 27002:2005; input from 2007-2008 review process incorporated
2008-02-20  Approved by IT Standards Council
2008-04-09  Minor corrections and changes to more closely align with ISO 27002:2005; adjusted to include access control and monitoring sections per consultations with Ontario Internal Audit Division
2008-04-17  Approved by Architecture Review Board
2012-04-14  Organizational updates, updated hyperlinks and references, minor adjustments and errata
2012-04-19  GO-ITS 25.19 content merged
2012-06-12  Document format updated, minor adjustments
2012-11-15  Minor updates approved by Information Technology Executive Leadership Council (ITELC); approved document version number set to 1.2
2015-01-19  Minor updates per rationale to ARB in Dec. 2014 (administrative updates, ISO/IEC alignment); draft version number set to 1.3
2015-03-06  Minor update per received ARB feedback
2016-01-26  Updated guidance regarding generic/shared/privileged account management and least privilege applicability; version number set to 1.4
2016-02-03  Endorsed by Architecture Review Board
2016-03-31  Approved by IT Executive Leadership Council
2023-08-03  Major update per rationale to ARB in Aug. 2023; draft version number set to 1.5
2023-11-01  Endorsed by Architecture Review Board
2024-02-14  Approved by IT Executive Leadership Council; approved version number 1.5

1. Foreword

Government of Ontario Information Technology Standards (GO-ITS) are the official publications on the IT standards adopted through the Office of the Corporate Chief Information Officer (OCCIO) and IT Executive Leadership Council (ITELC) for use across the government’s information and information technology (I&IT) infrastructure.

These publications support the responsibilities of the Treasury Board Secretariat for coordinating the standardization of I&IT in the Government of Ontario. In particular, GO-IT Standards describe where the application of an IT standard is mandatory and specify any qualifications governing the implementation of the IT standards.

All GO-ITS 25 Standards are based on the work of recognized global authorities in information and operational security, both in government and industry.

2. Introduction

This document defines general security requirements for the protection of the integrity, confidentiality and availability of Government of Ontario services, systems and networks. It is one of a series of documents that define platform-independent technical security requirements.

This document references the following four sections from ISO/IEC 27002:2013 “Information technology - Security techniques - Code of practice for information security controls”:

  • Section 9 – Access control
  • Section 12 – Operations security
  • Section 14 – System acquisition, development and maintenance
  • Section 18 – Compliance

Security requirements in this document are derived from government, standards bodies and industry, and are published both internally and to the public. The requirements in this document may reflect advances in knowledge since the publication of the ISO/IEC code of practice, and must be implemented unless exigent business or functional requirements preclude doing so, and exemptions are approved.

2.1. Background and rationale

The GO-ITS 25 Security Standards describe configurations and parameters that define context-specific requirements. The implementing business unit may choose the appropriate method to satisfy the standard, as long as the security objective of the standard is met or exceeded.

The GO-ITS 25 documents will be reviewed on an ongoing basis to account for the evolution of security practices/controls and other related technologies. Changes or additions to the GO-ITS 25 Security Standards will be established in writing and communicated to all appropriate personnel.

It is intended that context-specific, step-by-step implementation procedures will be derived from the GO-ITS 25 Security Standards by business units. These specific procedures may be influenced by requirements arising from any or all of the following:

  • Threat and Risk Assessments (TRA);
  • Privacy Impact Assessments (PIA);
  • Security Testing and Evaluation (ST&E) (Vulnerability Assessments [VA], Penetration Testing, Red Team Exercises, Code Review, etc.);
  • Threat Intelligence
  • Tabletop Exercises
  • Provincial Privacy regulations (FIPPA, MFIPPA, PHIPA) and relevant guidance; and
  • Federal Privacy regulations (Privacy Act, PIPEDA).

2.2. Target audience

The target audience for this document includes, but is not limited to:

  • Technical Implementers and developers
  • TRA and PIA analysts
  • Program owners/managers
  • Procurement staff
  • Internal auditors

2.3 Scope

2.3.1. In scope

The scope of this document includes providing a working set of general requirements that offer guidance and direction with regard to cyber security. Its primary focus is the integrity of the infrastructure and processes required for delivering services and applications throughout the various Government of Ontario IT environments. Additional documents in this series cover more specific themes and requirements, and should be consulted for more specific guidance.

The Cyber Security Division (CSD) or any successor organization should be contacted if the requirements in this document need clarification, or if it is not clear whether this standard applies to a given situation.

2.3.2. Out of scope

N/A

2.4. Applicability statements

2.4.1. Organization

All Ministries and Clusters are subject to Government of Ontario IT Standards.

All adjudicative and advisory agencies are subject to Government of Ontario IT Standards.

All other agencies that use OPS information and information technology products or services are required to comply with Government of Ontario IT standards if they are made subject to the Governance and Management of Information Technology (IT) Directive and Government of Ontario IT Standards by a Memorandum of Understanding.

GO-ITS 25 security requirements apply to all vendors and third parties (including any information technology system or network that processes ministry and agency information) under contract to the Government of Ontario unless exempted by a Memorandum of Understanding.

As new GO IT Standards are approved, they are deemed mandatory on a go-forward basis (i.e., at the next available project development or procurement opportunity).

When implementing or adopting any Government of Ontario IT standards or IT standards updates, Ministries, IT Clusters and applicable agencies must follow their organization's pre-approved policies and practices for ensuring that adequate change control, change management, risk treatment and mitigation, and control selection mechanisms are in place and employed. For the purposes of this document, any reference to Ministries or the Government includes applicable agencies.

2.4.2. Other applicability

Interdependence of systems and networks is a reality when delivering digital services and maintaining Government IT operations; the actions of one organization or program may influence the security posture of another. Changes to IT environments in one Ministry or Cluster may affect the environment of another Ministry or Cluster. Risk measurement must therefore be based on an “enterprise-wide” evaluation and include input and representation from a number of sources.

These considerations also require that every Ontario public service employee must comply with mandatory security requirements. Details on responsibilities for members of the Ontario Public Service are outlined in the Corporate Policy on Cyber Security and Cyber Risk Management, the Information Sensitivity Classification Policy and Guidelines, and the Acceptable Use of Information Technology (IT) Resources Policy.

Additionally, given modern adoption of third-party and vendor-managed services, Cloud Services, and Alternative Service Delivery methods, security requirements and procedures must also form part of any contract or agreement that affects or potentially impacts the IT environments of the Ontario government, and be applied with equal force to vendors and third-party contractors (including sub-contractors) as to Ontario government employees.

2.4.3. Terms

Within this document, certain wording conventions are followed. There are precise requirements and obligations associated with the following terms:

Must: The requirement is mandatory. Without it, the system is not considered secure.

Should: The requirement ought to be adhered to, unless exigent business needs dictate otherwise and the full implications of non-compliance are understood. All exceptions are to be documented and approved in writing by management, identifying the rationale for the exception to standard practice.

2.5. Roles and responsibilities

2.5.1. Contact information

If you have questions or require further information about this document or the GO-ITS 25 series, please contact the following Cyber Security Division staff:

Contact 1
  Name/Title: Alex Fanourgiakis, Senior Manager
  Organization/Ministry: Ministry of Public and Business Service Delivery
  Division: Cyber Security Division
  Branch: Cyber Security Strategy, Risk Management & Architecture Branch
  Section/Unit: Policy and Standards Unit
  Office Phone: (647) 982-5216
  E-mail: Alex.Fanourgiakis@ontario.ca

Contact 2
  Name/Title: Tim Dafoe, Senior Security Policy Advisor
  Organization/Ministry: Ministry of Public and Business Service Delivery
  Division: Cyber Security Division
  Branch: Cyber Security Strategy, Risk Management & Architecture Branch
  Section/Unit: Policy and Standards Unit
  Office Phone: (416) 327-1260
  E-mail: Tim.Dafoe@ontario.ca

3. Technical specification

The following requirements apply to all IT assets and operations within the scope of the Governance and Management of Information Technology (IT) Directive.

3.1. Operational procedures and responsibilities

Responsibilities and procedures for the management and operation of all services, systems, and networks should be established. This includes the development of appropriate operating instructions and incident response procedures.

3.1.1. Access control procedures

Management of user access and privileges must be conducted in a comprehensive manner to ensure that only those users and operations staff with formal authorization from system owners can access associated data and services.

Individually-assigned, Unique User Accounts
  Description: Typical account types and permissions associated with the bulk of the organization’s account creation and business activities for known, named individuals
  Examples: Standard Active Directory or unix user accounts with typical permissions

Privileged Accounts
  Description: Accounts with inherent privileges or membership in privileged groups, regardless of other account type
  Examples: unix root accounts (or equivalent), Windows local or domain administrator accounts (by direct assignment or group rights), content management administrators, Default Vendor-supplied Accounts that are assigned privileges

Generic User Accounts
  Description: Shared or role accounts that are not assigned to known, named individuals
  Examples: Accounts with typical permissions used for non-operational purposes, such as training

Service Accounts
  Description: Accounts created for the purpose of being assigned to a running service or application
  Examples: Accounts associated with Windows or unix operating system services or applications

Device Accounts
  Description: Accounts associated with a device for the purpose of handling automated tasks and authentication, device permissions, or other attributes
  Examples: Windows computer/machine accounts

Default Vendor-supplied Accounts
  Description: Accounts configured by default in vendor-supplied services, applications, and devices (often with well-known default credentials)
  Examples: Built-in operating system accounts, default network device accounts

The following requirements must be addressed within access control procedures:

  1. Segregation of duties and least privilege principles must be generally implemented to reduce the risk of unauthorized access and negligent or deliberate system misuse within the organization;
  2. Access must only be provided after formal authorization is granted, via unique identifiers/accounts and credentials, with formal records documenting the provision of access;
  3. Access assignment and revocation must be formally documented, with procedures for periodic validation (e.g., frequent and routine searches for redundant or duplicate entries in access control databases), to ensure only authorized users maintain access to the system, and to ensure access will be revoked and assets protected should the duties of the associated user change such that access is no longer required;
  4. System roles and duties must be assigned to individual and accountable users via Individually-assigned, Unique User Accounts, and the use of Generic User Accounts must be avoided wherever possible;
  5. Where the use of Generic User Accounts cannot be avoided due to a business requirement, such accounts must not be created for use on behalf of a specific individual and/or in the name of a specific individual;
  6. Generic User Accounts and Service Accounts must be created through established processes, documented, tracked, continuously managed, and removed when no longer required for business purposes;
  7. Generic User Accounts must be subject to the same password requirements as Individually-assigned, Unique User Accounts;
  8. Service Accounts must be understood to be both resources and principals, excluded from membership in privileged groups, disallowed from supporting local/remote logons and interactive sessions, and where possible, limited as to where any authentication or object access may take place;
  9. Elevated privileges, or membership in privileged groups (i.e., privileges typically used for system administration, user account administration, or content management) must be assigned to secondary Privileged Accounts not used for routine access or running of services, and granted on a strict least privilege basis with assignment subject to more frequent routine review (i.e., to ensure only authorized users are granted privileges due to an ongoing business requirement); and
  10. Management of user access and privileges for all account types must be conducted in a comprehensive manner to ensure that only those users and operations staff with formal authorization from system owners can access associated data and services.

3.1.2. Access control systems

The following design and operational requirements must be addressed by access control systems:

  1. Access control systems must be centrally deployed and managed;
  2. The design and operation of centralized access control systems must include support for resilience and redundancy to reduce impact if failures occur;
  3. Access must be granted only after all authorization and authentication procedures are complete, and a successful result has been returned for the credentials presented and/or the initiated session;
  4. Access control systems must support documented business and security requirements (e.g., the rationale for deploying the access control system);
  5. Access control systems must be managed such that all authorized users of the system (and the roles, rules and/or privileges associated with their accounts or credentials) are individually authorized by the relevant, responsible program manager (with documentation);
  6. Access control systems must be managed such that authorized user accounts or credentials that are no longer in use beyond 45 days, or no longer required (e.g., due to a change in employee role), are identified and removed within 24 hours of detection and/or notification unless a valid business reason exists;
  7. Expired entries in access control databases must not be assigned to new users (to reduce the risk of expired privileges being provided to users who do not require them);
  8. Temporarily elevated privileges (e.g., independent of a Privileged Account) must be assigned on a “need for use” or “just in time” basis (e.g., per event or request), such that these privileges are not provided for an unnecessary duration (and where possible, avoided using automation or other routines and tools);
  9. Log and audit information associated with elevated privileges and Privileged Accounts must be subject to more frequent review, to assist in detecting unauthorized access or misuse;
  10. All Privileged Accounts and Service Accounts must be documented, tracked, and continuously managed;
  11. In virtualized, software-defined environments, or Cloud Services, elevated privileges must be subject to assignment according to least privilege (e.g., limit access to and use of “super” administrative or global/root account roles, temporarily elevated privileges across planes or categories of service, etc.);
  12. Where possible, Privileged Accounts should be limited from exposure to typical security threat vectors (e.g., links/payloads received via e-mail);
  13. The robustness (e.g., the degree of identity assurance associated with credentials, the number of authentication factors required as credentials, and/or rigour of supporting processes) for a given access control system should be increased in accordance with the Identity and Credential Assurance Policy if sensitive information is to be processed or stored by a reliant service (e.g., a requirement to enforce multi-factor authentication);
  14. Access control systems should be managed in a manner that requires authorized users to be provided with a written statement of the access rights and responsibilities for the system; and
  15. Access control systems should be managed in a manner that requires authorized users to sign a use agreement that indicates their acceptance of disclosed access rights and responsibilities.

Additional, situation-specific access control guidance is provided throughout this document. Other GO-ITS 25 standards should also be consulted for technology-specific advice.

3.1.3. Password management

Passwords remain the most common credentials deployed within Government of Ontario access control systems. The following password management requirements must be addressed by access control systems intended to protect Government of Ontario IT assets and services:

  1. Passwords are highly sensitive, and must be protected in accordance with the ISC Policy and Guidelines (i.e., encrypted both in storage and routine online transmission, as to be irretrievable from authentication and system processes);
  2. Passwords must be initially issued directly to the user (in person, by telephone, or through GO-PKI protected e-mail);
  3. Users must be required to change initial passwords upon first login, and where technically possible, initial passwords should expire within five days of issuance;
  4. Managers must assist users in understanding the risk of improper password use and maintenance, make the required technical tools available to staff to this end, and participate in efforts to revoke credentials when they are no longer required;
  5. While complexity rules do not apply to most password values, passwords must be checked against lists of weak, common, and known-compromised values via automated means, with such passwords rejected for use;
  6. Password complexity rules (at least one numeric digit, and at least one upper case and one lower case letter) continue to apply for Service Accounts, where platform support exists;
  7. Authentication interfaces that accept passwords must employ technical controls such as rate-limiting to address known attack techniques;
  8. A mechanism must be in place to ensure password values are not reused by the same user within a span of twelve consecutive months;
  9. Software or devices with the capability to capture unencrypted passwords must not be permitted within IT environments, unless for authorized CSD use (e.g., forensics, investigations);
  10. Passwords for authorized vendor service/field/support accounts must be reset upon each use;
  11. All Default Vendor-supplied Accounts, including “guest” accounts, must have the associated password(s) changed upon deployment (as these values are known to attackers), or be disabled in a manner that does not preserve a valid password value;
  12. Password re-entry must be enforced upon inactivity timeouts, such as screen “blanking” or device/application session locking;
  13. Administrators with access to Privileged Accounts that offer potentially broad access to IT assets and infrastructure must, through separation of duties, not also share the job function of user password maintenance;
  14. Administration and use of passwords must be consistent, uniform, and documented;
  15. Error/exception messages for denial of access associated with password creation, entry, or change failures must provide the briefest possible explanation for denied access, and not disclose unnecessary detail;
  16. Controls must be in place to ensure that emergency passwords are changed after use, with details regarding use of emergency accounts/passwords submitted to management (e.g., how, why, when, and by who);
  17. Access must be denied after the fifth consecutive incorrect password entry; users must contact the appropriate area (e.g., Service Desk) to enable further attempts or reset the account password (upon strong, positive user/requestor authentication), with access failures recorded in audit or system logs that are regularly reviewed, investigated, and appropriately escalated;
  18. If encrypted passwords are stored in files, there must be no descriptive indication of what use or systems the passwords correspond to, unless this information is also encrypted;
  19. Passwords must never be hard-coded into applications, automated processes, scripts, macros, or function keys, as these are often discovered by attackers; and
  20. Passwords must not be cached or stored locally in unencrypted form.
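Two of the requirements above, screening candidate passwords against known-bad values (requirement 5) and denying access after the fifth consecutive failure (requirement 17), can be illustrated with a short sketch. The denylist below is a tiny placeholder; a production system would query a maintained corpus of weak, common, and known-compromised values.

```python
# Minimal sketch of two controls from this section. KNOWN_BAD is a
# placeholder denylist, not a real corpus.
KNOWN_BAD = {"password", "welcome1", "letmein", "123456789012"}

MAX_FAILURES = 5   # req. 17: deny access after the fifth consecutive failure

def screen_password(candidate: str) -> bool:
    """Return True if the candidate value passes screening (req. 5)."""
    value = candidate.strip().lower()
    if not value:                  # null/blank passwords are prohibited
        return False
    return value not in KNOWN_BAD

def record_failure(failures: int) -> tuple[int, bool]:
    """Increment the consecutive-failure count; return (count, locked).
    The caller resets the count to zero on a successful authentication."""
    failures += 1
    return failures, failures >= MAX_FAILURES
```

Screening at creation or change time keeps rejected values out of the system entirely, while the lockout counter addresses online guessing; rate-limiting (requirement 7) complements both.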

3.1.4. Password requirements

Users must be aware of their responsibilities and the risks to I&IT assets, and adhere to the following password selection requirements:

  1. Passwords must be chosen so they can be reliably recalled, but not so easily that they can be guessed or deduced by others;
  2. Passwords must be at least twelve characters in length, with the exception of cellular mobile devices;
  3. For cellular mobile devices, passwords must be at least six characters in length;
  4. Cellular mobile devices must lock after 20 minutes of inactivity, with ten consecutive password failures causing the device to be disabled and wiped of information (i.e., users must contact the Service Desk to reinitiate device activation; see footnote 1);
  5. Passwords must not be blank (i.e., null passwords are prohibited);
  6. Users must not include easily obtained or deduced personal information (e.g., hobbies, family member names, type or name of pets, birthdays, etc.), any portion of their given name or username, or words, phrases, or acronyms that are part of the broader/recognized OPS culture within chosen password values;
  7. Unless managed via approved OPS single sign-on (SSO) implementations, users must select unique passwords for Remote Access Services (RAS), and for access to different platforms (to deter reuse of a compromised credential), where possible; and
  8. In instances where technology allows, the use of passphrases is preferable to the use of shorter password values.
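The minimum-length rules above (requirements 2, 3, and 5) reduce to a simple check. The sketch below is illustrative only; enforcement would normally live in the access control system's password policy rather than application code.

```python
# Minimal sketch of the length requirements from this section: twelve
# characters generally, six for cellular mobile devices, and no null
# (blank) values.
MIN_LENGTH = 12
MIN_LENGTH_MOBILE = 6

def meets_length_rules(password: str, mobile_device: bool = False) -> bool:
    if not password:               # null passwords are prohibited (req. 5)
        return False
    minimum = MIN_LENGTH_MOBILE if mobile_device else MIN_LENGTH
    return len(password) >= minimum
```

Length checks are necessary but not sufficient; the denylist screening in section 3.1.3 still applies to any value that passes here.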

3.1.5. Password use

Users must adhere to the following password use requirements:

  1. Users must not disclose their passwords to anyone, and understand their accountability for any access to IT assets or services gained through use of their password;
  2. With the exception of approved password manager software, users must not install or use other password completion software or plug-ins, and must disable or avoid such functionality should it be provided within an application not authorized for this use;
  3. Users must know whom to contact for assistance with their password, and how and when to report confirmed or even suspected breach/leakage or disclosure of a password; and
  4. Users must immediately change any confirmed or suspected compromised password and alert Service Desk, unless directly instructed otherwise by CSD. Users must receive training regarding indicators of suspected compromise. Indicators of potential password compromise include, but are not limited to, unanticipated increments of last logon indicators, incorrect last logon timestamps, unrecognized last logon locations, current/active sessions, new device alerts, unexpected and persistent loss of access to the account, discovery of unusual e-mail activity (e.g., message forwarding, unusual sent messages, etc.), or unusual new prompts.

3.1.6. Password change and maintenance

The following password change and maintenance requirements must be addressed by OPS access control systems:

  1. Privileged Accounts must still be required to change associated passwords at least once every 30 days, with the highest standard of care practiced regarding the security of these credentials;
  2. These intervals must be enforced by automated means that cannot be bypassed, or, if not feasible through automation, by means of administrator and/or manager intervention and verification; and
  3. Password changes, if required, must not involve trivial alterations or the use of easily-recognized patterns/iteration (e.g., changing the example password comPop10 to comPop11).
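The trivial-alteration rule (requirement 3), with its comPop10 to comPop11 example, can be approximated with a simple heuristic. This sketch catches incremented numeric suffixes and single-character substitutions only; it is not an exhaustive similarity check, and it assumes the previous cleartext value is briefly available at change time for comparison.

```python
# Minimal heuristic sketch: reject a password change that is a trivial
# alteration of the previous value, e.g. comPop10 -> comPop11.
import re

def _strip_digits(s: str) -> str:
    """Remove a trailing run of digits, if any."""
    return re.sub(r"\d+$", "", s)

def is_trivial_change(old: str, new: str) -> bool:
    if old == new:
        return True
    # same value with only a trailing number changed (comPop10 -> comPop11)
    if _strip_digits(old) == _strip_digits(new) and _strip_digits(old) != old:
        return True
    # same length, differing in at most one character position
    if len(old) == len(new):
        if sum(a != b for a, b in zip(old, new)) <= 1:
            return True
    return False
```

More thorough approaches (edit-distance thresholds, normalization of common substitutions such as "0" for "o") exist, but even this heuristic blocks the most common iteration patterns.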

Reasonable compensating controls must be used if compliance with these requirements cannot be immediately met.

Access to backup media that contains, or may contain, passwords (in any format) must be limited to authorized personnel. Authorized personnel must not discuss or transmit password values or password/authentication security details with, or in the vicinity of, staff who are not authorized to receive the information, or contractors that have not signed a non-disclosure agreement.

3.1.7 Password manager software

Password manager software is now widely used in industry to assist with the management and use of passwords. To reduce risk, however, the following requirements apply to selection, acquisition, configuration, deployment, and use of password manager software:

  1. Approved password manager software must be standalone, dedicated, vendor-supported software;
  2. Approved password manager software must only enroll Individually-assigned, Unique User Accounts and Generic User Accounts, per the definitions in section 3.1.1 of this document;
  3. Care must be taken regarding file/folder permissions, configuration, and hardening to reduce the risk of attack against deployed password manager software;
  4. Password or key vaults must be maintained in standalone, encrypted local storage;
  5. All cryptographic constructions and specifications must comply with the high-risk requirements described in GO-ITS 25.12 Use of Cryptography;
  6. Any password “fill” functionality must require user initiation, and never automatically “fill” a field with a password value without user intent; and
  7. Training in the secure and appropriate operation and use of password managers, including how to securely exit or “lock” the application, must be made available to users.

For purpose-built High Sensitivity internal environments or where criticality is high, the deployment and use of password manager software intended for general use should be avoided.

3.1.8. Documented operating procedures

IT operating procedures for services, systems, and networks must be documented and maintained, and treated as formal documents.

Revisions to operating procedures must be reviewed and approved by management. The procedures should specify all service/system/device operation instructions, including but not limited to the following:

  1. Processing and handling of information and metadata;
  2. Scheduling requirements, including interdependencies with other services, systems, or networks, and mandatory start/stop times for operations, maintenance windows, or changes, if applicable;
  3. Instructions for handling errors or other exceptional conditions including restrictions on the use of roles, permissions/privileges, and utilities;
  4. Support contacts in the event of unexpected operational or technical difficulties;
  5. Special output handling instructions, such as the use of special stationery or the management of confidential output, including procedures for secure disposal of equipment and/or data in support of GO-ITS 25.20; and
  6. Contingency, restart, and incident/recovery procedures for use in the event of service or system failure.

Documented procedures should also be prepared for operations activities within physical facilities associated with systems and information processing, such as boot and halt procedures, information backup, equipment and facility maintenance, data centre operations/equipment procedures, and mail handling management/safety.

3.1.9. Operational change control

Changes to IT environments and systems must be controlled. Inadequate control of changes is a common cause of system and security failures. Formal management responsibilities, policies, and procedures should be in place to ensure adequate control of all changes to equipment, software, configurations, and procedures.

Operational programs should be subject to strict change control. When programs undergo changes, an audit log containing all relevant information should be retained. Changes to the operational environment can impact services and applications, and vice versa. Wherever possible, operational and service/application change control procedures should be integrated; these procedures should address business criticality and all requirements described in GO-ITS 35 OPS Enterprise Change Management.

All changes, however minimal, have a measurable impact on both the systems being changed, and adjacent systems/services that interact or share resources with the system being changed. Change control processes are intended to reflect this reality, and must support managing the lifecycle of a system, from planning, through implementation, management, and production operations, to decommissioning, removal, and disposal. The program manager responsible for the system must ensure that associated change control records are accurately and promptly retained.

Change control must detail changes to the IT environment, and how changes affect the enforcement of security policy and controls. Changes must be reviewed to ensure that the security posture of and security controls within the changed environment have not been reduced in robustness or effect. Changes may also increase the relative complexity of an environment, and in turn increase demands on security components.

The following must be detailed during change control activities:

  1. Service or system composition and configuration (hardware, software, instances/workloads, etc.);
  2. Practices and procedures;
  3. Confidentiality, availability, and integrity requirements of existing systems and information;
  4. Identification and authentication mechanisms, assigned privileges, and credentials;
  5. Monitoring of, response to, and recovery from events, and review of related procedures;
  6. Planning, implementation, management, and review/audit of systems and procedures;
  7. Resource utilisation, particularly as it affects adjacent services and environments;
  8. Required changes to existing Service Level Agreements (SLAs); and
  9. Required changes to agreements, contracts, or licences.

The change approval procedure must include security oversight, as well as approvals from program managers responsible for requesting changes, and those responsible for services or systems affected by the change.

Security Testing and Evaluation (ST&E), such as a vulnerability assessment (VA) and/or penetration test, should be performed against production environments before they are returned to operations, to verify that any changes made have not reduced the intended level of security. For significant changes, or for environments processing sensitive information, a vulnerability assessment and/or penetration test must be performed to re-validate the security of the system prior to its return to production use.

Procedures must exist for retroactively documenting an unscheduled, but unavoidable change (e.g., in response to an emergency, unanticipated discovery of serious vulnerability, or an urgent patch requirement). These procedures must include a definition of what constitutes a reasonable exception to the usual, proactive change control procedures.

Changes must be tested to confirm that the change was successful, and that the security posture of the environment was not weakened. Rollback procedures must exist for reversing unsuccessful changes.

3.1.10. Incident management procedures

Incident management responsibilities and procedures must be established to ensure a quick, effective and orderly response to security incidents, and enable post-incident analysis. Incident response and management must comply with the requirements described in GO-ITS 37.

To prepare for security events and incidents:

  1. Procedures should be established to cover all potential types of security incidents, including:
    1. Service, system or device failures or critical errors;
    2. Denial of service, or other loss of availability;
    3. Errors resulting from incomplete or inaccurate business data;
    4. Breaches of confidentiality, leakage of information, etc.; and
    5. Unauthorized use, access, change, or other loss of integrity.
  2. In addition to normal contingency plans (i.e., designed to recover services or systems as quickly as possible), the procedures should also cover:
    1. Analysis and identification of the cause of the incident;
    2. Planning and implementation of remedies to prevent recurrence, if necessary;
    3. Collection of audit trails and similar evidence, with documented chain of custody and protection of evidence;
    4. Communication with those affected by or involved with recovery from the incident; and
    5. Escalating and reporting the action to the appropriate authority.
  3. Audit trails and similar evidence should be collected and secured by qualified professionals, as appropriate, for:
    1. Internal problem analysis;
    2. Use as evidence in relation to a potential breach of contract, breach of regulatory requirement or in the event of civil or criminal proceedings; and
    3. Negotiating for compensation, service credits, etc. from vendors/suppliers.
  4. Actions taken to recover from security incidents/breaches and correct for failures should be carefully and formally controlled. The procedures should ensure that:
    1. Only clearly identified and authorized staff are allowed access to live systems and data;
    2. All emergency, remediation and/or forensic actions taken are documented in detail;
    3. Emergency action is reported to management and reviewed in an orderly manner; and
    4. The integrity of business systems and controls is confirmed with minimal delay.

A document must exist detailing responses to anomalous events, with response measures corresponding to the frequency or severity of the event. Response measures and processes must be practiced via appropriately conducted exercises and testing, with the results of such exercises reviewed to evaluate response effectiveness. Examples of events with security implications that must be detailed include, but are not limited to, the following:

  1. Denied connection or transaction attempts based on incorrect data or conditions, including:
    1. User ID, password, or other credential failure;
    2. Client certificate errors;
    3. Failed challenge/response or device enrollment;
    4. Prohibited or monitored hours of operation or time of day;
    5. Prohibited or monitored source/client location; and
    6. Violation of established session/connection rules.
  2. Patterns of activity which may not have been denied, but which are suspicious, include but are not limited to the following:
    1. Anomalous connection behaviour with multiple invalid user IDs or identifiers;
    2. Anomalous connection behaviour with a valid user ID or identifier (frequency, concurrency, time and date, widely varying location or “impossible travel”, multiple access from disparate locations and/or devices, etc.); and
    3. Sequential or random anonymous connection attempts (port/service, APIs, network addresses, etc.).
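
As an illustration of detecting one such pattern, the sketch below flags "impossible travel" between consecutive logins with the same valid user ID. The event format and the speed threshold are hypothetical examples, not requirements of this standard:

```python
from dataclasses import dataclass
from datetime import datetime
import math

@dataclass
class Login:
    user_id: str
    when: datetime
    lat: float
    lon: float

def distance_km(a: Login, b: Login) -> float:
    # Haversine great-circle distance between two login locations.
    r = 6371.0
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = math.radians(b.lat - a.lat)
    dl = math.radians(b.lon - a.lon)
    h = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(h))

def impossible_travel(a: Login, b: Login, max_kmh: float = 900.0) -> bool:
    # Flag if the implied speed between consecutive logins exceeds what
    # commercial air travel allows (the threshold is a hypothetical example).
    hours = abs((b.when - a.when).total_seconds()) / 3600.0
    if hours == 0:
        return distance_km(a, b) > 0
    return distance_km(a, b) / hours > max_kmh
```

For example, logins from Toronto and London, UK one hour apart imply a speed of roughly 5,700 km/h and would be flagged.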

Responses to suspected unauthorized access, or persistent unauthorized access attempts, must include an effort to determine whether an incident or breach/compromise has occurred and to help detect and deter future attempts. These measures must include, but are not limited to, the following (as applicable and appropriate):

  1. Limiting or denying access, based on source or location of connection attempts, including the use of access controls, rate limiting, reputation filtering, and other techniques;
  2. Changes to network connection mechanisms, access to interfaces, telephone numbers, or other means of access;
  3. Locking accounts;
  4. Auditing accounts for:
    1. Inactivity;
    2. Unauthorized creation;
    3. Weak or unchanged passwords/credentials;
    4. Inappropriate/suspicious device enrollment;
    5. Inappropriate/unnecessary roles, group memberships, or privileges/permissions;
    6. Use of privileges;
    7. Inappropriate/unnecessary use of task/job scheduling; and
    8. Expired or revoked access.

Services or systems involved in a security incident must be removed from networks if it is determined that they are adversely affecting or propagating problems to other services or systems, or otherwise jeopardizing the security of other environments.

Systems involved in incidents must not be turned off without consultation with the responsible security authority for the group and/or the Ministry and Cluster involved. If they have been shut down, for whatever reason, they must not be started again, and administration of the system must be turned over to the relevant security operations group and/or appropriate CSD analysts.

Security administrators and forensic analysts may power the system on under controlled circumstances to capture and record system state information. Precautions must also be taken by such individuals to safeguard any sensitive information on devices under forensic review.

Incident response procedures for incidents involving specific services or systems should include the following:

  1. Limiting or denying further access, based on source or location of connection attempts, including the use of access controls, rate limiting, reputation filtering, and other techniques;
  2. A single point of contact to co-ordinate handling of the incident and any required communications, and a backup in case they cannot be reached;
  3. A published and current contact escalation chain, with milestones, timeframes, or other criteria to indicate points at which an event is required to be escalated;
  4. Backup contacts for each point in the escalation chain;
  5. A response checklist with room for comments; and
  6. Post-event review and revision of procedures, based on exceptions to the prepared plan.

There must also be procedures for secure handling and preservation of any evidence trail (with documented chain of custody and protection of evidence) for:

  1. Client requests, system activity, device enrollment, etc. (both allowed and denied);
  2. System configuration changes and/or use of privileges;
  3. System behaviour, events/logs, errors/exceptions, etc.; and
  4. Interpreted events as reported by Intrusion Detection/Prevention Systems, or any other form of network monitoring/visibility or analysis.

3.1.11. Segregation of duties

Duties and areas of responsibility must be segregated in IT environments in order to reduce opportunities for unauthorized access, modification, or other misuse of information, resources, or services.

Small organizational units may find this method of control difficult to implement, but the principle should be applied as far as is possible and practicable. In environments where it is difficult to segregate duties, other controls, such as monitoring of activities, use of audit, and management supervision, must be implemented. It is important, however, that security audit remains independent.

The initiation of an event (e.g., transactions, requests for resources, etc.) should be separated from its authorization. This practice can reduce the likelihood of successful external fraud attempts, and controlling the ability of any single internal individual to perpetrate theft or fraud undetected within their area of responsibility forces would-be perpetrators to collude. An audit mechanism should be used to detect such collusion attempts. The following associated controls must be implemented:

  1. Roles or duties required to enable unauthorized access, theft, or fraud must be segregated;
  2. Basic separation of duties to require a minimum of two roles (e.g., separate actors for raising a purchase order and verifying that the goods have been received, or authorizing cheques and having physical access to cheque printing equipment); and
  3. If there is a documented high danger of theft or fraud, and a high degree of adverse impact associated with successful collusion, processes and controls must be devised such that three or more actors, each filling a unique role, are required to complete the process.

To satisfy the requirement that the duties of administration and configuration be separated, there should be segregation between staff responsible for operation and maintenance of a service or system, those responsible for the configuration of the service or system (where platform support exists for such a distinction), and administrators of external support applications. Different staff members must hold these roles/accounts. This principle should also be extended to virtualized and software-defined environments, as well as Cloud Services.

Applications running persistently (i.e., as services or agents) must be configured to run under their own Service Accounts and privileges, in accordance with section 3.1.1 of this document. Associated permissions must result in denying access by default and allowing file or resource access only as required.

3.1.12. Separation of development and operational environments

Development and testing environments must be physically (not logically or virtually) separated from production/operational environments when data confidentiality is at a High Sensitivity level, or due to criticality (e.g., documented integrity or availability concerns, sensitive data aggregation, business impact, etc.). Other sensitivity levels, and environments such as approved Cloud Services platforms, must provide for logical isolation as a minimum requirement (some security requirements for Cloud Services development and testing environments are described in GO-ITS 25.21). Rules for the transfer of software from development and testing to production/operational status should be defined and documented.

Development and test activities can cause serious problems, including unwanted modification of files or system environments, disclosure of confidential information, or service/system failure. A strong degree of separation between production/operational and development/testing environments is therefore necessary wherever possible to reduce the likelihood of business impact. A similar separation should be implemented between development and test functions, to maintain a stable, controlled environment in which to perform meaningful tests, and to prevent unauthorized or inappropriate access during testing. Mutual separation of development, test, and production/operational environments reduces the risk of accidental change, unauthorized access to operational software and business data, and interference with the quality of testing. If development or test staff have access to production/operational systems and information, they may be able to introduce unauthorized and untested code or malware, alter operational data, or exfiltrate information; on some platforms this capability could also be misused to commit fraud, potentially resulting in serious adverse impacts.

Developers and testers, and attacks against insufficiently protected development and testing environments, can all pose a potential threat to the confidentiality of operational information. Development and testing activities (including ST&E) may also cause unintended changes to software and information if they share the same computing environment.

The following controls should be implemented:

  1. Development, test, and operational software should be run on separate computer equipment (e.g., separate memory, processor, bus, etc.), and must be in the case of High Sensitivity information or where a high degree of integrity or availability is required. With the exceptions noted above, a virtual environment (one employing logical components to enforce separation) does not fully meet these criteria;
  2. Development and testing activities should be reliably separated from production operations, and physical segregation (not logical or virtual), with the exceptions noted above, must be employed in instances of High Sensitivity information processing and/or storage;
  3. Compilers, source code, editors, and other system utilities should not be accessible from operational systems when not required;
  4. Different authentication procedures and credentials must be used for operational and test systems, to reduce the risk of error and/or loss of credentials, and menus should display appropriate service/system identification messages; and
  5. Development staff must only have access to production/operational passwords where required for support purposes. Controls should ensure that such passwords are changed after use or when such support is no longer required.

Changes to systems must be validated in a test environment with the same services as those in production. Changes must be tested against existing services, existing security mechanisms, and current server and client connection mechanisms.

Testing must be done within isolated environments, using anonymous and depersonalized data taken from the production environment, or randomly generated information. If data cannot be effectively made anonymous or strongly de-identified (i.e., low re-identification risk), the test environment must be protected using controls equivalent to those protecting the production environment. In general, personal information should not be introduced to any test or development environment.
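
One common de-identification technique is to replace direct identifiers with keyed pseudonyms, so that test data remains internally consistent (records still join) but cannot be traced back to individuals without the key. The sketch below is a minimal illustration only; the field names are hypothetical, and keyed hashing alone does not address re-identification via quasi-identifiers:

```python
import hashlib
import hmac
import secrets

# Per-environment secret salt; in practice this would live in a key
# management service, never alongside the test data (illustrative only).
SALT = secrets.token_bytes(32)

def pseudonymize(value: str) -> str:
    # Keyed hash: identifiers stay consistent across records, but cannot
    # be reversed without the salt.
    return hmac.new(SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def depersonalize(record: dict) -> dict:
    # Hypothetical field names: replace direct identifiers, drop free text.
    out = dict(record)
    for key in ("name", "email", "client_id"):
        if key in out:
            out[key] = pseudonymize(str(out[key]))
    out.pop("notes", None)  # free-text fields are a re-identification risk
    return out
```

Two records for the same client receive the same pseudonym, so referential integrity in the test data set is preserved.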

Isolated network environments must be within a physically secure location to prevent observation of, or interference with, test systems. Clients or systems accessing the test environment must not be able to access a production environment, so as to avoid potentially damaging communication (be it intentional or otherwise) between test and production environments, or operator confusion during service/system administration or configuration.

Test systems must have all information securely removed before being relinquished for other use. The information must be securely removed as described in the GO-ITS 25.20 standard and related operating procedure for Disposal, Loss and Incident Reporting of Computerized Devices & Digital Storage Media.

Systems must not be migrated directly from development to production environments. Production components must be “built from scratch”, based on the system build instructions compiled during development and testing (e.g., “build books”).

3.1.13. External facilities, vendors, and service management

The use of third-party services offers benefits to the Government of Ontario, but without appropriate service acquisition and management, these can introduce potential security exposures, such as the possibility of compromise, impact to the confidentiality or integrity of Government of Ontario information, or loss of availability. Prior to using external, vendor-provided facilities management services or third-party environments, relevant risks must be identified, with appropriate controls agreed to with the contractor and incorporated into any agreements.

For adoption of Cloud Services, security requirements for procurement/selection of services and third-party responsibilities are described in GO-ITS 25.21.

Issues that should be addressed include:

  1. Identifying particularly sensitive or critical applications better retained in-house, due to limitations in vendor offerings, service expectations, or other considerations;
  2. Insider threat management;
  3. Third-party supplier risk and supply chain security;
  4. Identifying implications for information backup, business continuity plans, disaster recovery, incident response, and forensics;
  5. Definition and documentation of responsibilities and procedures for reporting and handling security incidents;
  6. Specifying applicable security standards and controls, and the process for measuring compliance;
  7. Obtaining the approval of business application owners, and consulting clients;
  8. Allocation of specific responsibilities and procedures to monitor all relevant security activities in an effective manner;
  9. Protecting personal information and sensitive data from unauthorized access; and
  10. Support for privacy breach notification, log/audit requirements, and recordkeeping according to Government of Ontario directions and regulatory requirements.

External parties/vendors who are managing services, operations, and/or facilities on behalf of the Government of Ontario must agree to:

  1. Be bound by the principles, requirements, and best practices under which the Government of Ontario operates, including applicable GO-ITS standards and requirements described in the Information Sensitivity Classification Policy and Guidelines;
  2. Allow for periodic audit by the Government of Ontario (or a third party approved for this purpose) to confirm adherence to those principles, requirements, and best practices, or provide independent evidence, reports, etc. regarding such practices and related security controls, by means of third-party audit/reporting frameworks and certification schemes recognized by the Government of Ontario;
  3. Oblige other customers and business partners (e.g., subcontractors, suppliers, etc.), via agreements, to adhere to practices that achieve a security posture comparable to, or exceeding, that of the Government of Ontario in any circumstances where they may influence Government of Ontario IT operations; and
  4. Report on the ongoing suitability and performance of controls, in accordance with any compliance frequency/schedule stated in agreements.

Liabilities and recourse due to failure to comply with these requirements must be specified in advance.
Circumstances under which mutual obligations must be stipulated include, but are not limited to:

  1. Service outage or degraded capability;
  2. Breach/compromise of the environment and/or unauthorized release of sensitive data;
  3. Privacy breach and/or unauthorized disclosure of personal information;
  4. Quality or security issues with designs or developed code, or the presence of malware;
  5. Failure to follow documented procedures; and
  6. Third-party audit and/or certification of external facilities, with respect to recognized audit/reporting frameworks and certification schemes, agreed service levels, and security posture/control requirements.

3.2. System planning and acceptance

Advance planning and preparation are required to ensure the availability of adequate capacity and resources to minimize the risk of service outage or systems failure.

The operational, functional, and non-functional requirements of new services and systems should be established, documented and tested prior to their acceptance and use.

3.2.1. Capacity planning

Capacity demands must be monitored, and projections of future capacity requirements made, to ensure that adequate resources, including processing power, network capacity, and storage, remain available to support services. These projections should take into account new business and system requirements, and current and projected trends in both organizational needs and client expectations.

Mainframe computers require particular attention because of the much greater cost and lead time required for procurement of new capacity. Managers of mainframe services should monitor the utilization of key system resources, including processors, main storage, file storage, printers and other output devices, and communications systems. Trends in system use should be identified. Managers should use metrics and trend information to identify and avoid potential bottlenecks that might present a threat to system security or user services, and plan appropriate remedial action.

Capacity planning must take into account business impact, criticality, the demands of providing functionality, and the requirement to maintain accurate activity records under both typical and peak loads. Modern services now provide more flexibility, scaling, and options to address these needs.

To ensure security is supported in capacity planning activities, all specified access control systems within the scope of this document must:

  1. Fail over to either a high security or access denied state in the event of critical resource (e.g., processing, storage, etc.) shortage;
  2. Record authorization, authentication, and privilege use events, both failed and successful; and
  3. Properly manage the authentication of users throughout the performance of functions such as stateful or stateless session management, even during a transition of system state.

Consideration must be given to the length of time required to acquire additional components as per vendor availability and supply chain capacity. Larger and more complex systems generally incur an increased lead-time for resource procurement. Enough lead-time should be taken to acquire additional resources, if projections indicate an increase in utilization (e.g., traffic volume or resource use).

Business and mission critical services and systems must not be deployed such that they represent a single point of failure. Fail-over capacity, redundancies, or scaling capability adequate to meet availability requirements must be designed into these services and systems, minimally providing an ‘N+1’ architecture (where ‘N+1’ dictates that there must always be one more than the bare minimum number of components or an equivalent scaling capability necessary to deliver a functional/reliable service or system).
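
The 'N+1' rule can be illustrated with a small sizing sketch; the demand and capacity figures in the example are hypothetical:

```python
import math

def n_plus_one(peak_demand: float, capacity_per_node: float) -> int:
    # N = minimum number of nodes needed to serve peak demand;
    # deploy N + 1 so the loss of any single node still leaves a
    # functional, reliable service.
    if capacity_per_node <= 0:
        raise ValueError("capacity_per_node must be positive")
    n = math.ceil(peak_demand / capacity_per_node)
    return max(n, 1) + 1
```

For example, a peak demand of 2,500 requests per second on nodes rated at 1,000 requests per second gives N = 3, so four nodes would be deployed. An equivalent scaling capability (e.g., auto-scaling headroom) can satisfy the same requirement.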

3.2.2. System acceptance

Acceptance criteria for new services and systems, upgrades, and new versions must be established, with suitable tests of the system carried out prior to acceptance. System owners and program managers should ensure that the requirements and criteria for acceptance of new systems are clearly defined, agreed upon, documented, and tested.

The following controls should be considered when planning an acceptance strategy:

  1. Performance, capacity, and service level requirements;
  2. Error recovery and restart procedures, and contingency plans;
  3. Preparation and testing of routine operating procedures to defined standards;
  4. Agreed set of appropriate security controls in place;
  5. Effective manual procedures and processes;
  6. Documented business continuity and disaster recovery arrangements, as required;
  7. Evidence that deployment of the new service or system will not adversely affect existing services or systems, particularly during peak processing/capacity periods, such as month end;
  8. Evidence that consideration has been given to the effect the new service or system has on the overall security of the organization; and
  9. Training in the operation and use of any new services, platforms, systems, and applications.

For major projects, both the relevant IT organization(s) and intended users should be consulted at all stages in the development process to ensure operational efficiency and acceptance of the proposed system design. Appropriate tests should be carried out to confirm that all acceptance criteria are fully satisfied.

Prior to deployment, new systems must be evaluated to ensure that business and security requirements have been met. Systems must be subject to Security Testing and Evaluation (ST&E) methods geared to the sensitivity level (i.e., per ISC Policy) of processing and/or information storage prior to being accepted for production use. In critical instances, services and systems should be subjected to external assessment via a third-party expert group in Security Testing and Evaluation prior to initial deployment and operation, and whenever major changes have been made to the system configuration or composition.

3.3. Protection against malware

Controls are required to prevent and detect the introduction of malware to Government of Ontario IT environments.

Software, services, systems, and networks are vulnerable to the introduction of malware. Users should be made aware of the dangers of both unauthorized software and malware via training, and managers should, where appropriate, introduce special controls to prevent and detect its introduction.

3.3.1. Controls against malware

Detection and prevention controls to protect against malware, and appropriate user training efforts and procedures, must be implemented. Protection against malware should include security training and simulation, reduction of vulnerability to malware, technical measures, appropriate roles and service/system access, vulnerability management, change management controls, information backup, and incident response capabilities.

All systems must employ controls to protect against malware. The specific controls will vary by platform, service, and system type. Not all controls will be effective in or appropriate for all situations.

The following controls should be considered when planning a strategy for malware control:

  1. Formal procedures requiring compliance with software licenses, use of approved software and services, and prohibitions on use or installation of unauthorized software and/or changes to authorized packages;
  2. Formal procedures to protect against risks associated with obtaining files and software either from or via external networks, or on any other medium, indicating what protective measures should be taken;
  3. Installation and regular update of anti-malware detection and repair software to scan devices and media either as a precautionary control or on a routine basis for certain types of malware;
  4. Conducting regular software review for systems supporting critical business processes. The presence of any unapproved files or unauthorized software should be formally investigated;
  5. Scanning any files on electronic media of uncertain or unauthorized origin, or files received over unknown networks, for malware, prior to use;
  6. Scanning any electronic mail attachments and downloads for malware before use. This may be carried out at different places (e.g., e-mail servers, on desktop/laptop computers, or upon joining a Ministry or Cluster network);
  7. Management procedures and responsibilities to deal with malware protection on systems, training in their use, reporting and recovering from attacks/incidents;
  8. Information backup, with testing of backups and procedures;
  9. Appropriate business continuity and disaster recovery plans for recovering from attacks/incidents, with all necessary testing, facility, and recovery arrangements; and
  10. Procedures to verify all information relating to malware, and ensure that any training, simulations, warning bulletins, etc., are accurate and informative, and based on current threat behaviour and threat intelligence. Managers should ensure that reliable sources of information are used to differentiate between hoaxes/misinformation and real incidents.

Staff should be made aware of the problem of hoaxes/misinformation, and what to do upon receipt of such messaging. In addition, all users should be made familiar with their obligations for business use of IT assets under the published Acceptable Use of Information Technology (IT) Resources Policy.

Malware may be present as executable or interpretable scripts, shell code, or program commands in files, e-mail or web content. All of these should be scanned and verified against known signatures/indicators of malware. Attack signatures and/or behavioural indicators must be monitored daily and updated immediately upon release for detection functions. In addition:

  1. The installation of software must be restricted to authorized administrative staff;
  2. Anti-malware software must be installed, operated, and updated regularly for both clients and servers, as appropriate; and
  3. Firewall/gateway devices or software must manage and limit both incoming and outgoing network connections to a pre-defined spectrum of approved connections, services and applications.

Software updates must only be downloaded from authoritative sites; cryptographic signatures or certificates of origin for the software must be provided by the vendor, and must be confirmed to be accurate prior to installation. Patch and vulnerability management operations can help protect assets from malware and must comply with the requirements and metrics described in GO-ITS 42.
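
As a minimal sketch of one part of this requirement, the function below compares a downloaded file's SHA-256 digest against a vendor-published value before installation proceeds. Verifying a full cryptographic signature or certificate of origin is vendor- and platform-specific and is not shown:

```python
import hashlib
import hmac

def verify_download(path: str, published_sha256: str) -> bool:
    # Hash the file incrementally (works for large downloads) and compare
    # against the vendor-published digest using a constant-time comparison.
    # Install only if this returns True.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return hmac.compare_digest(h.hexdigest(), published_sha256.lower())
```

The published digest must itself be obtained from an authoritative vendor source over a trusted channel; a digest shipped alongside the download on the same untrusted channel provides little assurance.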

3.4 System administration

Routine procedures should be established for carrying out an approved information backup strategy: taking backup copies of data and rehearsing their timely restoration; logging events and faults; and, where appropriate, monitoring the equipment environment, in order to protect the integrity and availability of information processing and communication services.

3.4.1. Information backup

Backups of business information and software must be taken regularly. Adequate backup facilities should be provided to ensure that all essential business information and software can be recovered following a disaster or media failure. Backup arrangements for individual systems should be regularly tested to ensure that they meet the requirements of business continuity and disaster recovery plans. The following controls should be considered:

  1. A minimum level of backup information, together with accurate and complete records of the backup copies and documented retention and restoration requirements/procedures, should be stored in a remote location, at a sufficient distance to escape any damage from a disaster at the main site. At least three generations or cycles of backup information should be retained for critical business applications;
  2. Backup information should be given an appropriate level of physical and environmental protection consistent with the standards applied at the main site. The controls applied to media at the main site should be extended to cover the backup site;
  3. Backup media should be regularly tested, where practicable, to ensure that they can be relied upon for emergency use when necessary;
  4. Restoration procedures must be regularly checked and tested to ensure that they are effective and that they can be completed within the time allotted in operational procedures (or other relevant documentation/metrics) for recovery;
  5. The retention period for the backup media should be determined; and
  6. Information from systems processing High Sensitivity information, or data sets that by nature of inherent confidentiality or aggregation constitute High Sensitivity information (as defined by the ISC Policy), must be encrypted using approved methods for the purposes of backup.
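
The generation-retention requirement in item 1 can be illustrated with a small sketch that identifies backup files older than the newest three generations. The filename convention is hypothetical, and actual deletion and off-site replication are left to operational tooling:

```python
from pathlib import Path

def prune_candidates(backup_dir: str, keep: int = 3) -> list[str]:
    # Keep the newest `keep` backup generations (by modification time)
    # and return the names of older files that are candidates for removal.
    # The "backup-*.tar.gz" naming convention is illustrative only.
    files = sorted(Path(backup_dir).glob("backup-*.tar.gz"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    return [p.name for p in files[keep:]]
```

A retention job would run this against the backup location and dispose of the returned files according to the documented retention period and secure-disposal procedures.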

The frequency of data backups must be based upon availability requirements, as defined by the business case for the service or system. Storage must take place in a secure off-site facility.

Configuration of systems, and sensitive media such as production system images, must be stored offline, such that they may not be viewed, copied, or modified by unauthorized staff.
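As a minimal sketch of the retention requirement above (at least three generations of backup information for critical business applications), the following illustrates a generation-pruning routine. The naming convention, file extension, and directory layout are assumptions for illustration, not prescribed by this standard.

```python
from pathlib import Path

# Minimum number of backup generations to retain for critical
# business applications, per section 3.4.1.
MIN_GENERATIONS = 3

def prune_backups(backup_dir: str, keep: int = MIN_GENERATIONS) -> list[str]:
    """Delete all but the newest `keep` backup generations.

    Assumes one file per generation, named so that lexical order
    matches chronological order (e.g., app-20240101.bak).
    Returns the names of the retained generations.
    """
    backups = sorted(Path(backup_dir).glob("*.bak"))
    stale = backups[:-keep] if keep else backups
    for old in stale:
        old.unlink()  # remove generations beyond the retention floor
    return [b.name for b in backups[-keep:]]
```

In practice the retention count and schedule would come from the availability requirements in the business case for the service or system, as noted above.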

3.4.2. Activity logging

The activities and events on a system must be logged and archived for the purpose of routine monitoring and audit. Operator, system, and audit/event logs must be stored on a centralized, secure log server. These logs must include, as a minimum requirement:

  1. System or instance start/boot and halt times;
  2. System activities, errors, and any corrective action taken in response;
  3. Confirmation of the correct boot procedure, and handling of data and output;
  4. The identity of the individual invoking commands or functions resulting in a log entry;
  5. The timestamps associated with the beginning and end of any user or operator session;
  6. The issuance and use of privilege if granted;
  7. Errors, success/failure, and related messages associated with user or operator activities;
  8. Connections and session initiations related to user or operator access to the system;
  9. The origin of user or operator sessions, whether indicated by node, device, client, endpoint, location, or another platform/technology-specific indicator of session origin;
  10. Changes to system mode, run level, or security context where applicable; and
  11. A timestamp indicating when the log entry was generated by the system, with system time synchronized to a redundant and validated reference time source.

System logs must also be generated to capture output resulting from automated processes. Specific details of files accessed or modified should be recorded in the audit logs, where applicable and given the configuration of the system.
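The minimum log fields listed above can be illustrated with a structured (JSON) log record. This is a hedged sketch only: the field names and record shape are assumptions for illustration, and do not prescribe a log format.

```python
import json
from datetime import datetime, timezone

def make_log_entry(user_id: str, event: str, session_origin: str,
                   privileged: bool = False) -> str:
    """Build one JSON log line covering several of the minimum
    fields listed in section 3.4.2; field names are illustrative."""
    record = {
        # Timestamp of log entry generation (system time assumed
        # synchronized to a redundant, validated reference source).
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,        # identity invoking the command/function
        "event": event,            # activity, error, or corrective action
        "origin": session_origin,  # node/device/endpoint originating the session
        "privileged": privileged,  # issuance/use of privilege, if granted
    }
    return json.dumps(record)
```

Records in this shape would then be forwarded to the centralized, secure log server required above.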

3.4.3. Fault logging

Faults must be reported and corrective action taken. Faults reported by users regarding problems with information processing or communications systems should be logged. There should be clear rules for handling reported faults including:

  1. Review of fault logs to ensure that faults have been satisfactorily resolved; and
  2. Review of corrective measures to ensure that controls have not been compromised, and that the action taken is fully authorized.

The following measures should be taken once faults are detected, depending on their frequency and nature:

  1. Monitoring of all network interfaces should increase in detail/depth;
  2. Monitoring of servers and services should increase in detail/depth; and
  3. The event should be escalated and support/security staff notified, as dictated by relevant incident response procedures.

In the event of a critical system fault, the administrative contact for the system must be notified, and provided with the following information:

  1. User or system ID associated with the fault;
  2. Operator and process name(s);
  3. Date and time fault occurred;
  4. Description of fault;
  5. Description of actions that caused fault (if possible); and
  6. Description of responsive action taken (if any).
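The notification payload enumerated above can be sketched as a simple record type. The class and field names are hypothetical, shown only to make the required contents concrete.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FaultNotification:
    """Information to provide the administrative contact for a
    critical system fault (field names illustrative)."""
    user_or_system_id: str          # 1. user or system ID associated with the fault
    operator_and_processes: list    # 2. operator and process name(s)
    occurred_at: datetime           # 3. date and time fault occurred
    description: str                # 4. description of fault
    causing_actions: str = "unknown"   # 5. actions that caused the fault, if known
    responsive_action: str = "none"    # 6. responsive action taken, if any

    def summary(self) -> str:
        """One-line summary suitable for an escalation message."""
        return (f"[{self.occurred_at:%Y-%m-%d %H:%M}] "
                f"{self.user_or_system_id}: {self.description}")
```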

If it is determined that a fault is security related or relevant, the incident must be escalated to the security contact for the system and/or program area. Escalation and incident response processes should reference the requirements expressed in GO-ITS 37 Incident Management.

Fault logs are intended to track errors that occur during normal system, application, or service operation. They may also indicate an attack on the system, the network, or adjacent systems.

Logging must include, but is not limited to, the following events:

  1. Device errors and status messages;
  2. File or other resource access failures;
  3. Licensing activities;
  4. Authentication failures;
  5. Session duration;
  6. New device/endpoint or device enrollment detection;
  7. Network connection activities (e.g., host up/down, connectivity problems, changes involving network interfaces or system hardware/network addresses, etc.);
  8. Operator activities (e.g., backups, restores, rollback, etc.);
  9. Remote connections to/from the system;
  10. Any security alerts not captured by operator, system, application, or audit/event logs;
  11. System or application error messages;
  12. Use of privileged commands, and/or attempts to invoke a privileged mode or alter system or instance mode, security context, or run level; and
  13. User logons.
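Because fault logs may also indicate an attack, entries like those listed above are often screened for security relevance before escalation. The following sketch shows one such screen; the patterns are assumptions for illustration, and real rules would come from the incident response procedures referenced in GO-ITS 37.

```python
import re

# Illustrative patterns for fault-log events that are commonly
# security-relevant (authentication failures, privileged command
# use, new device detection). Not an exhaustive or prescribed list.
SECURITY_PATTERNS = [
    re.compile(r"authentication failure", re.I),
    re.compile(r"privileged (command|mode)", re.I),
    re.compile(r"new device.*detect", re.I),
]

def is_security_relevant(fault_line: str) -> bool:
    """Return True if a fault log line matches a security pattern
    and should be escalated to the security contact."""
    return any(p.search(fault_line) for p in SECURITY_PATTERNS)
```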

3.5. Network management

The security management of networks that span organizational boundaries requires attention. Additional controls should be implemented to protect sensitive data passing over public networks and supporting infrastructure.

Network managers must implement controls to ensure the security of data in networks, and the protection of connected services from unauthorized access. In particular, the following controls should be implemented:

3.5.1 Management

  1. Operational responsibility for network management should be separated from server/system operations where appropriate and technically feasible;
  2. Responsibilities and procedures for the management of remote equipment, including equipment in user areas, should be established;
  3. If necessary, special controls should be established to safeguard the confidentiality and integrity of data passing over public networks, and to protect the connected systems. Special controls may also be required to maintain the availability of the network services and computers connected;
  4. Management activities should be closely co-ordinated both to optimize the service to the business and to ensure that controls are consistently applied across the infrastructure;
  5. Logical or virtual network segmentation must not be used as the primary security safeguard in environments where High Sensitivity processing is performed and/or threats have been identified via a TRA;
  6. Security Zones must be employed to separate operations on the network, such as the components or layers of an application. The boundaries between such zones should be physical and robust in nature (discrete hardware instances, not logically or virtually enforced) in the following cases:
    1. Where High Sensitivity data is processed or stored;
    2. Where a TRA has identified a requirement for high levels of integrity and/or availability; and
    3. Where the boundary is considered to be an external perimeter;
  7. Discrete hardware implementations of Security Zones should leverage multiple platforms to increase the technical difficulty of subverting defence in depth controls; and
  8. All hardware network components of logically or virtually enforced implementations of Security Zones (e.g., SDN, approved Cloud Services platforms, etc.) must be managed as security or policy enforcement devices, with a commensurate degree of rigour in operation and change control.

3.5.2. Identification and authentication

  1. Authentication requests must be denied by default, and permitted only upon presentation of the proper combination of password and/or credentials, and user ID.

3.5.3. Privileges and parameters

  1. Assignment of access rights must be limited, based on use cases, and granted according to the principle of least privilege.

3.5.4. Confidentiality, integrity, availability, and non-repudiation

  1. Sensitive communications must be protected from malicious observation or modification;
  2. System technologies from different vendors must integrate so as not to cause an aggregate reduction in the efficacy of combined security features;
  3. Operating systems of clients and servers must be hardened, as per CSD recommendations, applicable industry best practices, and relevant build books;
  4. All access must default to ‘prohibited’, and all ports must default to ‘closed’, upon initial power-on or warm reset of network equipment;
  5. Firewalls, gateways, and traffic filters that employ enterprise-grade designs (or other technology approved per GO-ITS 25.1 Security Requirements for Routers and Switches or section 2.1.2 of GO-ITS 25.11 Security Design) must be used to enforce network policy by forwarding only pre-approved connections;
  6. Systems must be placed between firewalls, gateways, or traffic filters (as described above) in a DMZ network segment, to protect them from internal and external threats; and
  7. Networks with physical media that cannot be effectively managed, or is public, must employ additional confidentiality and integrity safeguards.

3.5.5. Monitoring, response, recovery, and review

  1. Network traffic must be monitored and analysed by an Intrusion Detection System (IDS), Intrusion Prevention System (IPS), or other forms of network monitoring/visibility or analysis;
  2. Incident response and escalation procedures must exist, and be practised, in anticipation of hostile network events and/or adverse impacts;
  3. Business Continuity Management must be exercised, to ensure that network resources are available during, and restored as quickly as possible after, hostile events and/or adverse impacts;
  4. Processes and procedures for dealing with hostile events and/or adverse impacts must be reviewed after practice or exercise, and must be updated to reflect lessons learned; and
  5. Interactive network activity from client systems must be reliably associated with a specific user account at any given time.

3.5.6. Planning, implementation, management, review and audit

  1. Proposed services and systems must have their security measures validated via Security Testing and Evaluation (ST&E), prior to implementation in a production environment;
  2. Changes may not be made to proposed plans during implementation without approval of the project owner and the security team;
  3. Documentation must be updated to include exceptions made during implementation to proposed designs;
  4. Remote access to systems must be securely managed as described in relevant GO-ITS 25 series documents;
  5. A change control process must manage requests for a change in, or granting of, access;
  6. Users must be educated as to what security mechanisms exist, the reasons for their use, and the requirements for compliance with those mechanisms; and
  7. Security mechanisms must periodically be reviewed, to validate their efficacy in the context of current intrusion techniques, evolving platforms/technology, current threat behaviour and threat intelligence, and changes to established control sets.

Security mechanisms must periodically be audited to ensure that they are functioning as designed, as safeguard efficacy is required if they are to reduce risk.

3.6. Media handling and security

All removable computer media, such as drives, flash memory, tapes, disks, cassettes, and printed reports, must be controlled and physically protected to prevent damage to assets and interruption of business activities.

Media must be disposed of securely and safely in accordance with the ISC Policy and GO-ITS 25.20 requirements when no longer required. Sensitive information can be inadvertently disclosed to unauthorized persons through careless disposal of media, causing harm to the Government or citizens.

3.6.1. Management of removable computer media

Appropriate operating procedures must be established to protect files, computer media (drives, flash memory, tapes, disks, cassettes, etc.), and system documentation from damage, theft, and unauthorized access.

  1. If no longer required, the previous contents of any reusable media that is to be removed from the organization should be erased permanently and completely using reliably secure methods;
  2. Authorization should be required for all media removed from the organization, and a record of all such removals (i.e., to maintain an audit trail) should be kept;
  3. All media should be stored in a safe, secure environment, in accordance with manufacturers’ specifications and/or environment requirements; and
  4. All procedures and authorization levels should be clearly documented.

Intrusion Detection System (IDS), Intrusion Prevention System (IPS) components, or any other form of network monitoring/visibility or analysis, should not be equipped with removable media devices. If a removable media device is required as part of installation, it should be removed (if possible) once the installation is complete. This reduces the possibility that the system could be easily booted via removable media (floppy, CD, etc.) and compromised. This also reduces the likelihood that staff with physical access to systems will remove data without authorization.

Removable media must be stored according to the environmental requirements of the media, and must be stored such that only staff authorized to perform backup and recovery of data have access to the media. Accountability for media must be enforced before, during, and after use.

3.6.2. Information handling procedures

All Government of Ontario information must be classified and handled according to ISC Policy requirements, to help protect IT assets from unauthorized disclosure or misuse.
Processes for handling data related to documents, systems, services, networks, mobile computing, mobile communications, mail, voice mail, voice communications, multimedia, postal services or facilities, use of fax machines, and any other sensitive items (e.g., blank cheques, blank card stock, or invoices) must be consistent with asset classification and adhere to ISC Policy.
The following controls must be implemented:

  1. Labelling of all media, in accordance with ISC Policy requirements;
  2. Access restrictions, badge procedures, etc. to identify unauthorized personnel;
  3. Maintenance of a formal record of the authorized recipients of High Sensitivity data;
  4. Ensuring that input data is complete, that processing is properly completed and that output validation is applied;
  5. Protection of spooled data awaiting output to a level consistent with its identified sensitivity;
  6. Storage of media in an environment consistent with manufacturer specifications;
  7. Keeping the distribution of data to a minimum in accordance with the principles of least privilege and need-to-know;
  8. Clear marking of all copies of data for the attention of the authorized recipient; and
  9. Review of distribution lists and lists of authorized recipients at regular intervals.

When handling sensitive data such as configuration files and passwords, the following precautions must be taken:

  1. Transmitted and stored information that has been classified as High Sensitivity data must be protected through use of encryption of a type and strength sufficient to withstand attack, as documented in GO-ITS 25.12;
  2. Unencrypted password or credential information must not be cached, and must be encrypted when in transport or during session initiations (e.g., over a network); and
  3. Access to backup media must be limited to authorized personnel.

3.6.3. Security of system documentation

System documentation may contain a range of sensitive information (e.g., descriptions of applications processes, procedures, data structures, and authorization processes). As such, system documentation must be protected from unauthorized access, and kept current. The following controls should be implemented:

  1. System documentation should be classified, labelled, and stored securely;
  2. The access list for system documentation should be kept to a minimum and authorized by the application owner; and
  3. System documentation held on a public network, or supplied via a public network, should be appropriately protected.

Access to documentation, whether in printed format or online, must be restricted to authorized staff. Online documentation must be protected through application of:

  1. Access rights;
  2. User and group ownership (i.e., unauthenticated or unauthorized users must not have access to information); and
  3. Approved, GO-ITS 25.12 compliant encryption (e.g., for documentation concerning security configuration settings).

In addition, online documentation must not be copied, transmitted, or modified without permission.
Printed documentation must be protected through the following restrictions:

  1. All documentation either must be retrieved from printers as soon as printing is complete, or the security feature available on some printers must be used (a code or other credential, once provided, prints the job while the user is present at the printer);
  2. All documentation must be secured when not in use;
  3. Security documentation must not be left unattended;
  4. Security documentation must not be removed from secure areas; and
  5. Security documentation must not be copied without permission.

Third-party maintenance personnel who require access to system documentation must sign a Non-Disclosure Agreement. In addition, third-party personnel must not be permitted to remove system documentation.

3.7. Exchanges of information and software

Exchanges of information and software between organizations must be controlled, and must be compliant with both the ISC Policy and any relevant legal requirements (e.g., privacy legislation).

Exchanges must be carried out on the basis of agreements. Procedures and standards to protect information and media in transit must be established. The business and security implications associated with electronic data interchange, electronic commerce, electronic mail and the requirements for controls must be considered.

3.7.1. Information and software exchange agreements

Agreements must be established for the electronic or manual exchange of information and software between organizations.
The security provisions present in such an agreement should reflect the sensitivity of the business information involved.
Agreements on security provisions should include:

  1. Management responsibilities for controlling and notifying transmission, dispatch and receipt;
  2. Procedures for notifying the sender of transmission, dispatch, and receipt;
  3. Minimum technical standards for packaging and transmission;
  4. Courier identification standards;
  5. Responsibilities and liabilities in the event of loss, leakage, or redirection of data;
  6. Use of labels according to ISC Policy requirements, to ensure the meaning of the labels is immediately understood throughout the OPS, and that the information can be appropriately protected;
  7. Information and software ownership and responsibilities for data protection, software copyright compliance and similar considerations;
  8. Technical standards for recording and reading information and software; and
  9. Any special controls that may be required to protect sensitive items, such as cryptographic measures, or reliable anonymization/de-identification.

Agreements as to the use, protection, duplication, and re-transmission of shared information and software must exist and be agreed to prior to any transmission of data between organizations. High Sensitivity information must not be distributed without written permission from the relevant program manager (or a delegated authority).
These agreements must specify intent and usage parameters for all shared data. Remedies must be specified to enforce agreed-upon usage and protection standards, and to provide recourse for failure to adhere to those standards.
Where ongoing measures must be taken, both parties must have an agreement regarding the type of service that must be provided, how the provision of that service may be verified, and penalties for failure to provide the agreed upon level of service.
Measures for the control of data must be defined both while in transmission, and in storage at the recipient’s location. Security provisions must include:

  1. Protection from unintended use, sharing, or duplication; and
  2. Means by which the operation of data protection mechanisms may be verified.

Protection of data must take into account current legislation, which may prohibit certain uses, or require specific data handling measures.
External data sharing partners must:

  1. Follow practices that meet or exceed Government of Ontario internal security practices;
  2. Understand any required data handling measures, including anonymization/de-identification;
  3. Submit to regular Government of Ontario and/or external audit of those practices, including by means of third-party audit/reporting frameworks and certification schemes recognized by the Government of Ontario;
  4. Agree to penalties for failure to adhere to all agreements; and
  5. Agree to be held liable for all repercussions arising from a failure to adhere to all agreements.

3.7.2. Security of media in transit

Information can be vulnerable to unauthorized access, misuse or corruption during physical transport, for instance when sending media via the postal service or via courier. As such, media being transported must be stored securely, and protected from unauthorized access, misuse or corruption. For example:

  1. Reliable transport or couriers under contract should be used. A list of authorized couriers should be agreed with management and a procedure to check the identification of couriers implemented;
  2. Packaging should be sufficient to protect the contents from any physical damage likely to arise during transit and in accordance with manufacturers’ specifications; and
  3. Special security provisions should be adopted, where necessary, to protect sensitive information from unauthorized disclosure, modification, or removal. Examples include:
    1. Use of locked containers;
    2. Use of item/cargo/document manifests and records of receipt;
    3. Delivery by hand;
    4. Reliable tamper-evident packaging (which reveals attempts to gain access);
    5. In exceptional cases, splitting of the consignment into more than one delivery and dispatch by different routes; and
    6. Use of digital signatures and approved cryptography at a level and of a type that meets GO-ITS 25.12 and ISC Policy requirements.

3.7.3. Electronic commerce security

Electronic commerce can involve the use of electronic data interchange, e-mail, and online transactions. Electronic commerce is vulnerable to a number of network threats that may result in fraudulent activity, redirected or incomplete transactions, contract disputes, or leakage or modification of information, and must be protected against these threats.

Consideration should be given to the resilience to attack of the host used for electronic commerce, protection of key documents/information, and the security implications of any network interconnection required for its implementation. Software used for electronic commerce systems must also be subject to ST&E practices, such as a vulnerability assessment, before initial deployment.

3.7.4. Security of electronic mail

E-mail differs from traditional forms of business communications due to its speed, message structure, degree of informality and vulnerability to unauthorized actions. Consideration should be given to the need for controls to reduce security risks created by e-mail.

3.7.4.1. Security risks

The following security risks should be addressed:

  1. Vulnerability of messages to unauthorized access/interception, modification, or denial of service;
  2. Vulnerability to error (e.g., incorrect addressing or misdirection/forwarding, and the general reliability and availability of the service);
  3. Vulnerability to known attacker techniques, such as phishing, impersonation, social engineering, elicitation, pretexting, and business email compromise (BEC);
  4. Impact of a change of communication media on business processes (e.g., the effect of increased speed of dispatch or the effect of sending messages from person to person rather than company to company);
  5. Legal considerations, such as the potential need for proof of origin, dispatch, delivery and acceptance;
  6. Implications of publishing e-mail distribution lists that reach internal addresses or many staff; and
  7. Controlling remote user access to electronic mail accounts.

3.7.4.2. Procedures for electronic mail

Procedures for the use of electronic mail must be developed and controls put in place to reduce security risks created by electronic mail. These controls should address the following:

  1. Attacks on electronic mail (e.g., malware, phishing, fraud, social engineering, interception, etc.);
  2. Reputation and trust metrics;
  3. Protection of electronic mail attachments;
  4. Guidelines on when not to use electronic mail;
  5. Employee education and training regarding inappropriate use (e.g., sending defamatory electronic mail, use for harassment, or unauthorized purchasing);
  6. Use of approved cryptographic techniques to protect the confidentiality and integrity of electronic messages, such as GO-PKI, or a successor service intended for electronic mail use; and
  7. Additional controls for vetting messaging that cannot be authenticated.

If e-mail is used to provide alerts or other communications to analysts or incident responders within IDS, IPS, or network monitoring/analysis environments, these e-mail messages must be protected from observation or modification if they will travel over an unmanaged or public network.
E-mail messages must be sent at appropriate intervals so as not to overwhelm the end user’s channel of communications.
If available on mail servers, the following features must be configured and enabled:

  1. Scanning of SMTP traffic for illegal commands;
  2. Scanning of traffic for hostile executable and/or malicious content;
  3. Removal of executable and/or otherwise malicious content; and
  4. Control over abuse, such as the unauthorized use of SMTP relay, or user enumeration.
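Scanning SMTP traffic for illegal commands (control 1 above) is often implemented as an allow-list of permitted verbs. The sketch below assumes a simple allow-list; the specific verb set is an illustration, not a prescribed configuration.

```python
# Standard SMTP verbs a mail gateway might permit; anything else
# is treated as illegal and rejected. Allow-list is illustrative.
ALLOWED_SMTP_VERBS = {"HELO", "EHLO", "MAIL", "RCPT", "DATA",
                      "RSET", "NOOP", "QUIT", "STARTTLS"}

def screen_smtp_command(line: str) -> bool:
    """Return True if the command's verb is on the allow-list.

    VRFY and EXPN are omitted deliberately, since they enable
    user enumeration (see control 4 above).
    """
    verb = line.strip().split(" ", 1)[0].upper()
    return verb in ALLOWED_SMTP_VERBS
```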

3.7.5. Security of electronic office systems

Procedures and guidelines must be prepared and implemented to control the business and security risks associated with electronic office systems.
Consideration given to the security and business implications of interconnecting such systems should include:

  1. Vulnerabilities of information in office systems (e.g., recording phone calls or conference calls, confidentiality of calls, video conferencing equipment, storage of faxes, opening mail, distribution of mail and increasing complexity of multi-function office devices with network interfaces, remote management, and local storage);
  2. Procedures and appropriate controls to manage information sharing (e.g., the use of corporate electronic messaging or collaboration systems/applications);
  3. Excluding categories of sensitive business information if the system does not provide an appropriate level of protection;
  4. The suitability, or otherwise, of the system to support business applications such as communicating orders or authorizations/approvals;
  5. Categories of staff, contractors or business partners allowed to use the system and the locations from which it may be accessed;
  6. Restricting selected access to specific categories of user (e.g., restricting access to sensitive project information by limiting collaboration materials to the staff working on that project);
  7. Identifying the status of users (e.g., employees of the organization or contractors in directories for the benefit of other users), and managing changes to their status (i.e., change or termination of responsibilities);
  8. Retention and backup of information held on the system; and
  9. Fallback requirements and arrangements.

Communication between interconnected components, and other network policy enforcement systems, must adhere to the activity logging requirements described in section 3.4.2 of this document.

3.7.6. Publicly available systems

Care must be taken to protect the integrity of electronically published information and publicly accessible information systems, to prevent unauthorized modification that could jeopardize transactions and/or harm the reputation of the publishing organization.
Information on a publicly available system (e.g., information on a Web server or Cloud Service accessible via the Internet) may need to comply with laws, rules, and regulations in the jurisdiction in which the system/instance is located or where trade is taking place. In addition, publicly available Web servers must abide by the GO-ITS 23 Web Standard and ISC Policy requirements. There must be a formal authorization process before information is made publicly available, and the integrity of such information must be protected to prevent unauthorized modification.
Software, data, and other information requiring a high level of integrity, made available on a publicly accessible system, should be protected by appropriate mechanisms (e.g., cryptography, digital signatures, etc.). Electronic publishing and content management systems, especially those that permit feedback and direct entering and/or revision of information, should be carefully controlled so that:

  1. Information is obtained in compliance with any applicable data protection legislation;
  2. Information input to, and processed by, the publishing system will be processed completely and accurately in a timely manner;
  3. Sensitive information will be protected during the collection process and when stored; and
  4. Access to the publishing system does not allow unintended access to networks to which it is connected.

Measures taken to ensure the confidentiality, integrity, availability, and authenticity of publicly observable systems, and their communications, must periodically be reviewed to ensure their adequacy given current threat intelligence, intrusion techniques, and technology. These measures must include, but are not limited to:

  1. Systems that are publicly available (e.g., a public web server) must have security controls in place to ensure that the integrity of the data is protected;
  2. Services that require public access for functionality (e.g., a public web server would require public access to HTTP and HTTPS services) should be the only services that are enabled on the system, in keeping with the principle of least privilege;
  3. All services that do not require public access must be disabled and/or removed after review of technical and functional requirements (for example, a public web server would have TCP/UDP small servers disabled, and services such as networked file systems, SMTP, and SSH disabled and removed, if they were found to be supported);
  4. Information backup must be performed daily on public systems (at least incrementally) to ensure minimal loss of data;
  5. If a public system is used for processing transactions, a secure, encrypted transport protocol that complies with GO-ITS 25.12 must be used along with appropriately managed server certificates;
  6. Sensitive or personal information (e.g., names, credit card numbers, etc.) must not be stored on publicly accessible components of a transaction processing system any longer than required for the completion of a transaction;
  7. Sensitive or personal information stored on a system for the purpose of completing a transaction must be isolated such that it cannot be interpreted or modified by the system (e.g., by means of secure application design);
  8. Transactions must be monitored (inbound and outbound). Traffic data must not include potentially harmful or sensitive content, for example:
    1. System structure or configuration information;
    2. Networking configuration;
    3. System account information;
    4. Executable or interpretable code, unless required; and
    5. Non-alphanumeric characters, unless required; and
  9. Input or output content must be subject to inspection techniques such as bounds checking, and must not exceed identified acceptable values.
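The inspection in item 9 (bounds checking, rejection of non-alphanumeric characters unless required) can be sketched as a simple validator. The length limit and character allow-list here are assumptions for illustration; acceptable values must be identified per system.

```python
def validate_input(value: str, max_length: int = 64,
                   allow_extra: str = "") -> bool:
    """Accept input only if it is non-empty, within bounds, and
    contains no non-alphanumeric characters beyond an explicit
    allow-list (limits are illustrative, not prescribed)."""
    if not value or len(value) > max_length:
        return False  # bounds check: empty or over-length input rejected
    # Reject any character that is neither alphanumeric nor
    # explicitly required for the field.
    return all(ch.isalnum() or ch in allow_extra for ch in value)
```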

Data confidentiality and integrity must be maintained via a cryptographic system, when sharing data or performing transactions between servers, via public networks. Cryptographic systems deployed within the Government of Ontario must be evaluated (ideally by a third party and/or via a recognized evaluation scheme) for suitability against current attack techniques and technology. The strength of deployed cryptography must be appropriate given the sensitivity of transmitted data, and compliant with GO-ITS 25.12.

The Cyber Security Division, or a successor organization, is the cryptographic authority for the Government of Ontario; algorithms, implementations, effective key length, and other factors must meet CSD requirements.

3.7.7 Other forms of information exchange

Verbal discussion of security mechanisms, system configuration, access controls, credentials, user accounts, use of cryptography, or other similar sensitive information should not be conducted in public areas, before personnel who lack appropriate administrative authorization (or whose authorization is unknown), or before staff who have not been subject to personnel screening and signed an NDA or relevant Government of Ontario security documentation.

4. Related standards

4.1. Impacts to existing standards

Standards that reference or are referenced by this standard, and a description of the impact:

GO-IT Standard | Impact | Recommended Action
All GO-ITS 25 standards | All standards in this series refer to this document; this document is considered a normative reference for the entire series. | Update only, no impact. Relevant sections refer to those approved standards. Compliance with all standards is mandatory.

4.2. Impacts to existing environment

Impacted Infrastructure | Impact | Recommended Action
---

5. Compliance requirements

5.1. Internal compliance

Managers must ensure that all security operations within their area of responsibility are carried out correctly and in a manner that meets security requirements. All areas within the organization must be subject to regular review to ensure compliance with the information technology requirements and additional standards outlined in this document.

Owners of information systems, programs, and/or data should support regular review of their compliance with all relevant directives, policies, standards, and procedures to ensure that security requirements have been properly addressed, and safeguards are appropriately deployed.

5.1.1. Access to accounts and information

For computing environments and access control systems within the scope of this document, program managers must not be permitted to request the following:

  • Access to a user’s credential or identity as assigned by an access control system;
  • Access to data, files, etc. stored by a user whereby access to such information is managed by an access control system, including approved password manager software; and/or
  • Information encrypted by a user through the use of a key or cryptographic process controlled by an access control system, even if desired for recovery purposes.

Program managers must ensure that employees use central repositories, shared folders, or other mechanisms such that any critical work products remain accessible to relevant staff. In such instances, effective management of project information is the primary means by which access to such information should be safeguarded.

Program managers must act to protect the integrity of credentials granted to users. This must include the following:

  • Ensuring appropriate handling of user credentials;
  • Ensuring appropriate education for users regarding the use and security of their credentials, and any related access control system or identity service, including approved password manager software; and
  • Ensuring that inappropriate requests that could harm the integrity of user accounts, assigned credentials, or electronic identities are not accepted.

It is incumbent on staff with Director-level authority (or greater) to diligently review and authorize requests for the recovery of the information described above.

5.2. External compliance

Several external compliance requirements exist. Government of Ontario IT assets must be managed appropriately to comply with these requirements. All external vendors and service operators must be subject to regular review, where feasible and supported, to ensure compliance with the information technology requirements and additional standards outlined in this document.

5.2.1. Compliance with legal requirements

The design, operation, and management of information systems are subject to statutory, regulatory, and contractual requirements that may influence security considerations. Advice on specific legal issues should be sought in instances where security techniques may contravene local legal requirements (e.g., system monitoring) or when these techniques must be used in other jurisdictions (e.g., encryption).

5.2.2. Intellectual property and copyright

Procedures should be implemented to ensure compliance with legal restrictions on the use of software (including per-seat or CPU/processing related restrictions) and copyrighted intellectual property. Failure to comply with requirements can lead to legal action and financial consequences.

Legislative, regulatory, and/or contractual requirements may exist that place restrictions on the kinds of reproduction or transmission permitted for proprietary materials or materials in respect of which there may be an intellectual property right. Legal advice should be sought in instances (or jurisdictions) where these limits are not clear.

In particular, the following safeguards should be implemented:

  1. Use existing procurement and vendor of record channels to obtain software and media;
  2. Maintain risk awareness among staff regarding acquisition of software products;
  3. Maintain asset inventory, including proof of license ownership, primary copies, etc.;
  4. Maintain controls to ensure fixed seat licenses are not inadvertently exceeded;
  5. Maintain awareness of any CPU/processing restrictions for software licences, particularly in virtual or Cloud Services environments;
  6. Monitor the installation of acquired packages and control inappropriate installation; and
  7. Maintain license conditions and remove software where license status is unknown.
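The fixed-seat control in item 4 above amounts to comparing deployed installations against purchased entitlements. A minimal sketch, with product names and seat counts invented for illustration:

```python
# Sketch of a fixed-seat license check against an asset inventory.
# Product names and counts are illustrative placeholders only.
licenses = {"EditorPro": 25, "DiagramTool": 10}   # purchased seats
installed = {"EditorPro": 27, "DiagramTool": 8}   # counts from asset inventory

# Flag any product whose installed count exceeds its licensed seats.
over_deployed = {
    product: count - licenses.get(product, 0)
    for product, count in installed.items()
    if count > licenses.get(product, 0)
}
```

In practice the installed counts would come from the asset inventory maintained under item 3, and flagged products would be remediated before license conditions are breached.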

5.2.3. Organizational records

Records must be protected from loss, unauthorized release, destruction, and falsification. Some types of organizational records may need to be securely retained to meet statutory or regulatory requirements for recordkeeping, as well as to support essential business activities (such as those records which confirm financial details or status).

Records should be categorized, with retention periods and storage requirements made clear for each type of record that is identified for the program area. Consideration should be given to the archival procedure used for the storage of records to ensure they are protected. Technology change and integrity protection should be factors in the choice of medium for electronic records, to ensure they can be accessed, and cannot be modified.

5.2.4. Data protection and privacy of personal information

The Government of Ontario is bound by legislative requirements at both the provincial and federal level regarding the protection of personal privacy (as outlined in the Introduction of this document). Security techniques and methods must fully comply with ISC Policy requirements and all relevant federal and provincial privacy legislation.

Processes such as Privacy Impact Assessments should be employed to identify areas of privacy risk, and legal advice should be obtained where uncertainty exists regarding privacy impact, particularly when data is being archived, aggregated, transferred to third parties, or transmitted to other jurisdictions. If personal information is subject to disclosure due to a breach, any duties under Government of Ontario privacy breach procedures, or those outlined within any relevant legislation, must be undertaken.

5.2.5. Cryptography

The Government of Ontario uses cryptography to protect the confidentiality and integrity of information.

Cryptography deployed as a technical safeguard within an access control system (e.g., to pass credentials and/or provide for integrity assurance), to provide for communications security, or to protect information in storage must meet the requirements described in GO-ITS 25.12.

Controls must be in place to ensure compliance with national and international agreements, laws, regulations, and/or other instruments regarding use, import, and export of cryptographic software and devices. The cryptographic use cases described or implied in this document, and the specifications in GO-ITS 25.12, are robust and may not be legal for use or import/export in all jurisdictions. Required review may include:

  1. Determination of import/export status for cryptographic hardware and software;
  2. Determination of import/export status for items to which cryptography is to be added;
  3. Determination of use of any cryptography in any jurisdiction where not sanctioned; and
  4. Lawful access requirements of other jurisdictions.

Legal advice should be obtained to ensure compliance with all relevant requirements, particularly if any solution employing cryptographic methods is to be located in another jurisdiction.

5.3. Audit

Periodic audit is required to ensure that security practices and safeguards meet the minimum requirements expressed in this document and that the operation of a given project or environment is sound.

5.3.1. Impact

To reduce the potential impact to operations and other risk of such audit on production systems:

  1. Audit scope and requirements should be discussed and controlled prior to audit activities;
  2. Audit activity should be conducted without write/modify privileges where possible;
  3. Access provided during audit activities should be monitored and supervised;
  4. All relevant procedures and resources required to conduct the audit should be planned for and provided in advance; and
  5. Audit tools should be protected from unauthorized use or modification.

5.3.2. Monitoring

Systems must be monitored on an ongoing basis, in accordance with any legal/regulatory (e.g., privacy) requirements, to detect failure or compromise/subversion of safeguards or controls. Operator, system, audit/event, and activity logs must be routinely monitored (by staff trained to identify relevant events, user transactions, and system activity) for this purpose; the extent of monitoring should be commensurate with the operational criticality of the system, the extent of system interconnection, the nature of any prior incidents, and data sensitivity (as determined by the ISC Policy). Appropriate separation of duties must be maintained during monitoring activity, and the integrity of log information must be protected against both deliberate acts and automated alteration due to overwriting or other action of a process or service.
As maintenance of an evidentiary trail requires consistent business practices and safeguards, the following measures must be followed as a part of normal daily activities:

  1. Log data must be forwarded to a log host. The log host must be a dedicated, purpose-built, and hardened system on a separately managed network, whose only function is to receive and store activity data; stored data must be protected against modification or overwriting, even by privileged users;
  2. Data for the log host should be encrypted while in transit, and the log host should generate an alert if a monitored device fails to send data for a given period of time;
  3. Timestamps for messages from all monitored systems and other log-generating devices (firewalls, routers, servers, etc.) must be synchronized to a redundant and validated reference time source; and
  4. Consider sending specific/selected activity logs to a dedicated printer in a secure room in situations where such logging is absolutely essential (e.g., preserving evidentiary data with certainty in an ongoing investigation), or cryptographically signing them when additional integrity assurance is required.
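The integrity protections in items 1 and 4 above can be sketched with per-record message authentication. This is an illustrative HMAC example, not a prescribed mechanism; the key shown is a placeholder for one held under proper key management:

```python
import hmac
import hashlib

# Placeholder key: in practice this would be a managed secret held by the
# log host, not a literal in source code.
SIGNING_KEY = b"replace-with-managed-key"

def sign_record(record: str) -> str:
    """Append an HMAC-SHA256 tag so later modification is detectable."""
    tag = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return f"{record}|{tag}"

def verify_record(signed: str) -> bool:
    """Recompute the tag and compare in constant time."""
    record, _, tag = signed.rpartition("|")
    expected = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

Any alteration of a stored record, deliberate or accidental, invalidates its tag, supporting the evidentiary-trail requirement; constant-time comparison avoids leaking tag information to an attacker probing the verifier.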

5.4. Implementation and metrics

In order to manage the effectiveness and implementation of this standard, Ministries, Clusters, and applicable agencies are expected to monitor compliance.

6. Acknowledgements

Consulted

Consulted as part of the development of this standard – includes individuals (by role and organization) and committees, councils, and/or working groups. (Note: “consulted” means those whose opinions are sought, generally characterized by two-way communication such as workshops):

Organization consulted (Ministry/Cluster) | Division | Branch | Date
CSB Internal Consultation | MGS OCCIO | CSB | Jan./Feb. 2008
Ontario Internal Audit Division | Finance | OIAD | Mar. 2008
OCIPO (now IPA) | OCIPO / IPA | Privacy | Jan./Feb. 2008
Committee/Working Group consulted | Date
SADWG (Dissolved) | Jan./Feb. 2008
Architecture Review Board | Dec. 2014
Architecture Review Board | Feb. 2016
ITELC | Mar. 2016

7. Recommended versioning and/or change management

Changes (i.e., all revisions, updates, versioning) to the standard require authorization from the “responsible” organization(s).

Once a determination has been made by the responsible organization to proceed with changes, OCCIO as custodians of the I&IT Rules Management Plan will coordinate and provide assistance with respect to the approvals process.

The approval process for changes to standards will be determined based on the degree and impact of the change. The degree and impact of changes fall into one of two categories:

Minor updates - require confirmation from ARB, and communication to stakeholders and ITELC. Changes are noted in the “Document History” section of the standard. Minor updates generally consist of:

  • Editorial corrections (spelling, grammar, references, etc.) made with the intention to eliminate confusion and produce consistent, accurate, and complete work.
  • Formatting changes (due to template updates or to improve readability of document).
  • Documented organizational changes e.g., renaming of committees, approved transition of committee responsibilities, approved reporting relationship changes.

Standard revisions - require consultation with stakeholders, ARB endorsement, and ITELC approval. Standard revisions consist of any updates to the I&IT Rules Refresh Plan that are not considered minor and may:

  • represent new standard or significant revision to an existing standard
  • represent a major version change to one or more specifications
  • impact procurement
  • require configuration changes to current solutions
  • impact other standards
  • respond to legislative, policy or procurement changes

7.1. Publication details

Publication of GO-ITS standard | Yes/No
Standard to be published on both the OPS Intranet and the GO ITS Internet web site (available to the public, vendors, etc.) | Yes

8. Appendices

8.1. Normative references

Governance and Management of Information Technology Directive

Acceptable Use of Information Technology (IT) Resources Policy

Corporate Policy on Cyber Security and Cyber Risk Management

Corporate Policy on the Government of Ontario’s Identity and Credential Assurance (GO-ICA)

Corporate Policy on Information Sensitivity Classification (ISC) and Guidelines

GO-ITS standards

8.2. Informative references

ISO/IEC Standards