Stewart Room

Where do we find the detail of security law? Part 4.

Now that we understand the twinning of legal and operational security, the role of the consensus of expert opinion and the differences between negative and positive security, we can journey further into the operational landscape to flesh out our understanding of the law's focus.


Security operations break down into many distinct areas of specialisation and discipline. There are security experts who specialise in computer security, some who specialise in network security, some in application security, or computer forensics, or behavioural change, or cryptography, or secure coding, or penetration testing – and so on. This small and very incomplete snapshot of the operational landscape hints at some of the key truths within the world of operational security.


Firstly, the definition of operational security that is applied might depend on whom you talk to.


Secondly, to be truly effective, operational security needs to be context-specific, which includes the requirement that the steps and measures taken for security should be risk-based. This means that the internal and external context of the organisation to be secured has to be properly understood, covering factors such as the technology and data landscape in use, its users, the vulnerabilities and weaknesses in the landscape and the threats that apply. It seems axiomatic, therefore, that truly effective operational security requires bespoke solutions, because no two organisations are the same. Due to the uniqueness of every organisation, it would be inappropriate simply to lift a security controls framework from one organisation and transplant it into another.


So truly effective operational security rejects generalisation, hence the plethora of distinct areas of subject matter expertise within the operational envelope. However, it would be a mistake to think that operational security is not joined-up. In fact, there is a huge amount of convergence of thinking in this area and most security experts recognise that there are some fundamental objectives and requirements that have to be addressed in all situations. For example, experts have developed a series of security design principles that can apply in any situation; controls libraries have been developed from which organisations can select; there are universally accepted protocols in use for secure communications; equally, there are renowned algorithms for cryptography, some of which have been developed through open competitions. To aid quality, interoperability and adoption, many parts of the fundamental objectives and requirements of operational security have been standardised by standards-making bodies, or have been professionalised by industry bodies.
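
To make that convergence concrete, here is a minimal sketch in Python (standard library only) of what reliance on a universally accepted protocol looks like in practice: rather than inventing its own secure channel, an organisation configures a connection to use TLS with the library's vetted defaults. The host name and the minimum protocol version are illustrative assumptions, not recommendations for any particular organisation.

import socket
import ssl

# Illustrative assumptions: "example.com" is a placeholder host, and TLS 1.2
# as a floor reflects common current guidance rather than a universal rule.
HOSTNAME = "example.com"

# create_default_context() applies the library's vetted defaults: certificate
# verification, hostname checking and modern cipher suites.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLS1_2  # refuse older protocol versions

with socket.create_connection((HOSTNAME, 443)) as raw_socket:
    # Wrap the plain TCP socket in the standardised TLS protocol.
    with context.wrap_socket(raw_socket, server_hostname=HOSTNAME) as tls_socket:
        print("Negotiated protocol:", tls_socket.version())
        print("Cipher suite in use:", tls_socket.cipher())

The point is not the code itself, but that every line leans on standardised, widely reviewed building blocks rather than bespoke invention.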


This landscape looks vast, but we can detect a focal point: the CIA Triad. CIA stands for Confidentiality, Integrity and Availability. If we consider these concepts from the viewpoint of protecting people and things, we can give them the following definitions, where a system means computers and communications systems and data means information that is, or is intended to be, processed by or stored within such systems:

  • Confidentiality is the assurance given that access to particular systems and data will be restricted to authorised entities.

  • Integrity is the assurance given that systems and data will be subject only to authorised changes and alterations.

  • Availability is the assurance given that systems and data will be available for use by authorised entities.

Of course, these ideas can be expressed in different ways. For example, confidentiality means protection from unauthorised access, integrity means protection from unauthorised changes and alterations, and availability means protection from unauthorised service interruptions and shutdowns.
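
For readers who want to see how these assurances map onto everyday technical mechanisms, the sketch below (Python, standard library only) pairs confidentiality with access restriction and integrity with a keyed checksum that detects unauthorised alteration; availability is a property of whole systems and does not reduce neatly to a few lines of code. The file, key and message used here are illustrative assumptions.

import hashlib
import hmac
import os
import secrets
import tempfile

# Illustrative assumptions: a throwaway file stands in for stored data, and a
# randomly generated key stands in for a properly managed secret.
data = b"personal data record"
key = secrets.token_bytes(32)

# Confidentiality: restrict access to the stored data to authorised entities.
# Here that is approximated by owner-only file permissions (POSIX systems).
with tempfile.NamedTemporaryFile(delete=False) as stored:
    stored.write(data)
    path = stored.name
os.chmod(path, 0o600)  # readable and writable by the file's owner only

# Integrity: a keyed checksum (HMAC) lets us detect unauthorised alteration.
tag = hmac.new(key, data, hashlib.sha256).digest()
with open(path, "rb") as stored:
    retrieved = stored.read()
unaltered = hmac.compare_digest(tag, hmac.new(key, retrieved, hashlib.sha256).digest())
print("Integrity check passed:", unaltered)

# Availability concerns redundancy, capacity and recovery across the whole
# system, so it is not demonstrated here.
os.remove(path)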


There are other overlays that can be applied. For example, in August 2023 the UK's air traffic control provider, NATS, suffered a technical glitch, causing massive disruption to domestic and international flights. The glitch and its impacts were unplanned, and it rendered various computers and communications systems or operational data unavailable, but NATS quickly ruled out a cyber-attack. Did this situation therefore constitute a security breach, in the sense of it being an availability breach? Does the idea of security encompass non-malicious threats to confidentiality, integrity and availability, such as accidents and natural disasters, or does it apply only to malicious threats, such as the acts of rogue insiders or of cyber-attackers like hackers and malware writers?

At face value, it would seem odd if the CIA concepts limited operational security to providing assurance of CIA, or protection from loss of CIA, only in respect of malicious threats because, after all, the consequences of accidental threats can be as dramatic as those of malicious ones. We also see reinforcement of the view that operational security is concerned with both accidental and malicious threats within international standards. For example, ISO/IEC 27005:2022, titled “Information security, cybersecurity and privacy protection – Guidance on managing information security risks”, points out that risk sources take three forms: deliberate, accidental and environmental.


The idea that CIA covers more than deliberate risks is widely recognised in the law. If the GDPR is considered, we find nothing within it to limit the scope of the security rules to threats from rogue actors. Mere accidents that impact the security of personal data are regulated.


So, operationally speaking, we have to achieve CIA and our journey of understanding must continue.


