
A Circular Problem in Current Information Security Principles

Dec 16, 2014 Hi-network.com

Editor's Note: In this second installment of the blog series on more responsive security, we take a closer look at the circular problems associated with four common security principles in managing "weak link" risks in Information Technology organizations.

Before discussing what constitutes this responsive approach to security, let us first look at a few of the fundamental principles of information security to understand the unique challenges organizations face today in managing security risks.

Information security principles are commonly accepted or professed rules of action or conduct, or general laws or truths from which other principles are derived, that influence or guide the design, implementation, and operation of information security systems. One of the most commonly cited is the principle of the "weakest link." This principle asserts that the security of any given information system, like a chain, breaks at its weakest link, no matter how strong the rest of the links are.

When designing a system's security, every aspect must be considered, including people, processes, and technology, so that no weak links emerge in response to an incident, regulatory requirement, or audit. The weakest link principle also highlights the asymmetry of securing information systems: a potential attacker needs only a single weak point to compromise the system, whereas the defender must address every aspect.
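To see this asymmetry in numbers, consider a back-of-the-envelope sketch (the component counts and the 0.99 per-component strength below are purely hypothetical): even when every individual component is well secured, the probability that no weak link exists anywhere shrinks multiplicatively with the size of the system.

```python
# Hypothetical figures: each component independently resists attack with
# probability 0.99; the defender needs every one of them to hold.
per_component_strength = 0.99

for n in (10, 50, 100):
    p_all_hold = per_component_strength ** n
    print(f"{n} components: P(no exploitable weak link) = {p_all_hold:.2f}")
# 10 -> 0.90, 50 -> 0.61, 100 -> 0.37
```

The attacker, by contrast, succeeds if any single component fails, which is the complement of the figure above.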

Ensuring that no weak links exist within a security system is a formidable task. The principle of defense in depth has therefore been widely adopted: organizations protect information assets not with a single countermeasure, but with multiple, overlapping layers that complement each other. This approach ensures that weaknesses discovered at a lower layer are covered by the protection rendered at the higher layer(s), and vice versa.

The practice of defense in depth has many benefits. It reduces the attack surface of software products and application systems, and it increases the probability that an attack is detected early when detective mechanisms are placed at each layer. Redundancy and backup security mechanisms further enhance the resilience of the systems. This layering, however, comes at a price. Each added layer increases the cost of security and adds complexity to the environment, especially when point solutions are adopted that require further integration or separate management effort. Limiting complexity, for example through platform-based approaches rather than layered point solutions, is therefore important to keep security architectures simple. Unchecked complexity produces emergent behaviors, creates new or unanticipated weak links, and increases the difficulty of managing the systems involved.
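The detection benefit of layering can be sketched with simple arithmetic. The layer names and per-layer detection rates below are illustrative assumptions, not figures from any product: if each layer independently detects an attack with some probability, the chance that at least one layer catches it grows with every layer added, even as each layer also adds the cost and management overhead noted above.

```python
# Illustrative, assumed detection rates for three independent layers.
layers = {
    "network firewall": 0.50,
    "intrusion detection": 0.60,
    "application controls": 0.40,
}

# An attack evades the whole stack only if it slips past every layer.
miss_all = 1.0
for p in layers.values():
    miss_all *= 1.0 - p

print(f"P(detected by at least one layer) = {1.0 - miss_all:.2f}")  # 0.88
```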

Defense in depth also builds a dependency of security protection at one layer (e.g., the application) on another (e.g., the network or database systems), in which a failure at one layer may result in unintended exposure at another. The unintended consequence of defense in depth is therefore a dilemma that negates its own intent: it creates new weak links (which may not be known) while closing others (which are known).

Another generally accepted principle is that of "no perfect security," based on the notion that information systems involve human actions, and humans are not perfect actors. In cryptography, the corresponding notion of no perfect secrecy is well known. The One-Time Pad (OTP) data encryption technique, for example, is a classic cryptosystem designed to achieve perfect secrecy. The OTP, however, is too expensive to implement for any practical use, as the key must be at least as long as the message it encrypts and can be used only once. The OTP therefore shows that perfect secrecy is not practically achievable. As such, only variations of the OTP are used in practice, with tradeoffs such as shorter key lengths and key reuse across multiple messages; in other words, certain "weak links" are allowed into the systems. In effect, the "no perfect security" principle counters the principles of the weakest link and defense in depth: we cannot achieve perfect security, so weak links will prevail in practical systems.
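The impractical key requirement is easy to see in code. The toy sketch below (not production cryptography) XORs a message with a freshly generated random key of equal length; shortening or reusing the key is exactly the "weak link" tradeoff that practical ciphers accept.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Perfect secrecy demands a truly random key at least as long as the
    # message, used exactly once: the impractical requirement noted above.
    if len(key) < len(message):
        raise ValueError("OTP key must be at least as long as the message")
    return bytes(m ^ k for m, k in zip(message, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))         # a fresh key per message
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message  # XOR is its own inverse
```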

Recognizing the wisdom of "no perfect security," and the need to observe and manage the cost of security, a risk management approach, often based on cost-benefit analysis, has been widely adopted. Risk management entails the practice of risk identification, analysis, and assessment, and the subsequent determination of what to do with the risk issues involved. The process normally includes classifying the risk level of each issue based on a combination of factors, such as the vulnerability identified; the potential impact to the business if it is compromised; the probability of occurrence given past events; and the threats faced by the organization.
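As a rough illustration of the classification step, many schemes reduce these factors to a likelihood-times-impact score. The 1-5 scales, thresholds, and sample issues below are assumptions made for this sketch; real schemes vary widely by organization.

```python
# Hypothetical 1-5 scales for likelihood and impact; thresholds are illustrative.
def risk_level(likelihood: int, impact: int) -> str:
    score = likelihood * impact        # simple multiplicative model
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

issues = [
    ("unpatched web server", 4, 5),    # (issue, likelihood, impact)
    ("weak password policy", 3, 3),
    ("legacy test system",   2, 2),
]
for name, likelihood, impact in issues:
    print(f"{name}: {risk_level(likelihood, impact)}")
```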

Numerous challenges stand in the way of an "effective" risk management process. If the process is overly simplified, it will fail to identify key risk issues; if it is too involved, it becomes resource intensive and demands far more experience and knowledge to execute.

The probability of an information security risk event is not pre-determined, nor is the outcome of a risk event always independent of previous or concurrent risks. Predicting the likelihood of a future event requires a massive collection of data about past events, and even where such data is available, ongoing changes in the systems environment will affect future outcomes in ways past data cannot capture. The act of risk management itself changes the risk environment, and may therefore render the risk assessment and the resulting decisions obsolete.
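A small example makes this limitation concrete. The incident counts below are hypothetical, and the calculation holds only under the strong assumption that incidents arrive at a fixed, unchanging rate (a stationary Poisson process), which is precisely what a shifting systems environment breaks.

```python
import math

# Hypothetical yearly incident counts from past records.
incidents_per_year = [0, 1, 0, 2, 1]
rate = sum(incidents_per_year) / len(incidents_per_year)  # mean events/year

# Valid only if the past rate still applies: P(>=1 incident next year).
p_next_year = 1 - math.exp(-rate)
print(f"Estimated P(incident next year) = {p_next_year:.2f}")  # ~0.55
```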

As such, any risk forecast is subjective. This subjectivity makes incorrect assessments likely, leaving weaknesses (or weak links) in the system behind a false sense of security. Overestimating a risk causes excessive mitigation effort, whereas underestimating it invites undesirable surprises and panic. Even when a risk assessment is highly accurate, issues assessed as low risk are normally accepted without mitigation, or pushed into a low-priority bucket and forgotten over time. Retaining or accepting risk means leaving gaps in the system, which may again become weak links waiting for perpetrators to exploit.

The above analysis of the four common security principles reveals a circular problem: each principle contributes back to the very weak-link problem it tries to address. This raises the question of whether current approaches to information security, which rely largely on traditional risk management strategies and techniques based on these principles, are adequate or practical, and it suggests the need for further study and research into the practice of information security risk management.

In the next blog, we will discuss another facet of the challenges in managing information security: those relating to the practice environment itself.

Part 1: Understanding and Addressing the Challenges of Managing Information Security - A More Responsive Approach


Tags: Security Design, Risk Management
