This article describes how computer security can be achieved through design and engineering. See the computer insecurity article for an alternative approach that describes the current battlefield of computer security exploits and defenses.
Computer security is the effort to create a secure computing platform, designed so that agents (users or programs) can only perform actions that have been allowed. This involves specifying and implementing a security policy. The actions in question can be reduced to operations of access, modification and deletion. Computer security can be seen as a subfield of security engineering, which looks at broader security issues in addition to computer security.
It is important to understand that in a secure system, the legitimate users of that system are still able to do what they should be able to do. It has been said pejoratively that the only truly secure computer is one locked in a vault without any means of power or communication; however, this would not be regarded as a useful secure system because of the above requirement.
It is also important to distinguish the techniques employed to increase a system's security from the issue of that system's security status. In particular, systems which contain fundamental flaws in their security designs cannot be made secure without compromising their utility. Consequently, most computer systems cannot be made secure even after the application of extensive "computer security" measures.
Computer security by design
There are two different approaches to security in computing. One focuses mainly on external threats, and generally treats the computer system itself as a trusted system. This philosophy is discussed in the computer insecurity article.
The other, discussed in this article, regards the computer system itself as largely an untrusted system, and redesigns it to make it more secure in a number of ways.
This technique enforces privilege separation, where an entity has only the privileges that are needed for its function. That way, even if an attacker has subverted one part of the system, fine-grained security ensures that it is just as difficult for them to subvert the rest.
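The idea of privilege separation can be sketched in a few lines of Python. This is an illustrative model only (the component names and privilege strings are hypothetical, not drawn from any real system): each component is created with exactly the privileges its function requires, so a subverted component cannot exercise any others.

```python
# A minimal sketch of privilege separation. Each component holds only
# the privileges it needs; anything else is refused.

class Component:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = frozenset(privileges)  # fixed at creation

    def perform(self, action):
        if action not in self.privileges:
            raise PermissionError(f"{self.name} may not {action}")
        return f"{self.name} performed {action}"

# The network-facing parser can only read input; even if an attacker
# subverts it, it cannot touch the password database.
parser = Component("parser", {"read_input"})
authenticator = Component("auth", {"read_passwords"})

print(parser.perform("read_input"))   # parser performed read_input
try:
    parser.perform("read_passwords")
except PermissionError as e:
    print("denied:", e)
```

Real systems enforce this boundary with hardware and the operating system (separate processes, user IDs, or capabilities) rather than with in-language checks, but the granting pattern is the same.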
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. Where formal correctness proofs are not possible, rigorous use of code review and unit testing measures can be used to try to make modules as secure as possible.
The design should use "defense in depth", where more than one subsystem needs to be compromised to compromise the security of the system and the information it holds. Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
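The difference between "fail secure" and "fail insecure" can be made concrete with a small sketch (the policy-store function names here are hypothetical): when an authorization check cannot be completed, a fail-secure design denies access rather than granting it.

```python
# Sketch of a fail-secure authorization check: any failure to read the
# policy results in denial, never in access.

def is_authorized(user, load_policy):
    """Return True only if the policy can be read AND lists the user."""
    try:
        return user in load_policy()
    except Exception:
        # The policy store is unreachable or corrupt. Failing secure
        # means refusing access; failing insecure would grant it.
        return False

def broken_policy():
    raise IOError("policy store unreachable")

print(is_authorized("alice", lambda: {"alice", "bob"}))  # True
print(is_authorized("alice", broken_policy))             # False
```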
In addition, security should not be an all-or-nothing issue. The designers and operators of systems should assume that security breaches are inevitable in the long term.
Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
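One way to make tampering with an audit trail detectable, sketched here with Python's standard library, is to hash-chain the entries: each record's hash covers the previous one, so rewriting an earlier entry breaks every hash that follows. This is an illustrative model, not a complete remote-logging design.

```python
# Sketch of a tamper-evident, append-only audit trail using a hash chain.

import hashlib

class AuditLog:
    def __init__(self):
        self.entries = []              # (message, chained hash) pairs
        self.last_hash = b"\x00" * 32  # chain starts from a fixed value

    def append(self, message):
        h = hashlib.sha256(self.last_hash + message.encode()).digest()
        self.entries.append((message, h))
        self.last_hash = h

    def verify(self):
        """Recompute the chain; any rewritten entry breaks it."""
        h = b"\x00" * 32
        for message, recorded in self.entries:
            h = hashlib.sha256(h + message.encode()).digest()
            if h != recorded:
                return False
        return True

log = AuditLog()
log.append("user alice logged in")
log.append("user alice read /etc/passwd")
print(log.verify())   # True

# An intruder who rewrites an earlier entry is detected:
log.entries[0] = ("nothing happened", log.entries[0][1])
print(log.verify())   # False
```

Storing the log (or even just the latest chain hash) on a separate, append-only machine is what actually keeps an intruder on the compromised host from covering their tracks.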
Early history of security by design
The early Multics operating system was notable for its emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics security was broken, not once, but repeatedly. This led to further work on computer security that prefigured modern security engineering techniques.
Techniques for creating secure systems
The following techniques can be used in engineering secure systems. Note that these techniques, whilst useful, do not of themselves ensure security: a secure system is no stronger than its weakest link.
- Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
- Thus simple microkernels can be written so that their freedom from certain classes of bugs can be verified; examples include EROS and Coyotos.
- A larger operating system, capable of providing a standard API such as POSIX, can be built on such a microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers are not affected; the Hurd is an example.
- Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
- Strong authentication techniques can be used to ensure that communication end-points are who they say they are.
- Secure cryptoprocessors can be used to leverage physical security techniques into protecting the security of the computer system.
- Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
- Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges.
- Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. The next sections discuss their use.
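The strong-authentication technique listed above can be illustrated with a message authentication code from Python's standard library. This is a minimal sketch, assuming a shared key has already been established out of band; the key and messages are illustrative.

```python
# Sketch of authenticating a message's origin with an HMAC: only a
# party holding the shared key can produce a valid tag.

import hashlib
import hmac

key = b"shared secret established out of band"

def tag(message):
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message, received_tag):
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(tag(message), received_tag)

t = tag(b"transfer 10 to bob")
print(verify(b"transfer 10 to bob", t))    # True
print(verify(b"transfer 9999 to eve", t))  # False
```

Real protocols such as TLS combine this kind of integrity check with encryption and with public-key techniques for establishing the shared key in the first place.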
Some of the following items may belong to the computer insecurity article:
- In a production system, if an application provides no way to patch known security flaws, do not use it, or replace it with another (at least until a fix is available). Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.
- Backups are a way of securing information: they are another copy of all important computer files, kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Backups can be kept in a multitude of locations; suggested places include a fireproof, waterproof, heat-proof safe, or a location separate from that of the original files. A third option is to use one of the Internet companies that back up files for both businesses and individuals.
- Anti-virus software deletes or quarantines viruses on a computer, in essence protecting it against viruses. Once installed, this software needs to be updated regularly, as new viruses are created daily. Important qualities to look for in antivirus software include a good detection rate, compatibility with the system, ease of use, and the ability to update.
- Firewalls are hardware and/or software components that protect computers from intruders. A firewall will not allow anything to enter the computer unless it matches the rules the firewall has been configured with. Any networked system benefits from a firewall to keep out people and files that are hazardous to the system.
- Access authorization is a way of protecting a computer by using authentication systems, so it is known who is trying to get in. Such a system allows only those with authorized access into certain areas of the computer or to open certain files. There are many methods of verifying identity. The most commonly used are passwords and identification cards; as technology advances, methods such as smart cards and biometrics (for example, fingerprints) are becoming common.
- Encryption is used to protect a message from the eyes of others. It can be done in several ways: by switching the characters around, replacing characters with others, or even removing characters from the message. These have to be used in combination to make the encryption secure enough, that is to say, difficult to crack. Public-key encryption is a refined and practical way of doing encryption. It allows, for example, anyone to write a message for a list of recipients such that only those recipients will be able to read it.
- Intrusion-detection systems can scan a network for people who are on the network but should not be there, or who are doing things that they should not be doing, for example trying many passwords to gain access to the network.
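The access-authorization item above rests on a detail worth making concrete: a well-designed system never stores passwords themselves, only salted, slow hashes of them. The following sketch uses Python's standard library; the iteration count and example passwords are illustrative.

```python
# Sketch of password-based access authorization: the system keeps only
# a random salt and a slow salted hash, never the password itself.

import hashlib
import hmac
import os

def make_record(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password, record):
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

record = make_record("correct horse battery staple")
print(check("correct horse battery staple", record))  # True
print(check("password123", record))                   # False
```

The salt defeats precomputed-table attacks, and the deliberately slow hash makes guessing expensive even if the stored records are stolen.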
Capabilities vs. ACLs
Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (e.g., the confused deputy problem). It has also been shown that ACLs' promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, only that the designers of certain utilities must take responsibility for ensuring that they do not introduce flaws.
Unfortunately, for various historical reasons, capabilities have been mostly restricted to research operating systems, and commercial OSes still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in this area is the E language.
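A minimal sketch of the language-level capability style, written here in Python rather than E (the class and function names are illustrative, not taken from any real capability system): authority is carried by an unforgeable object reference, so a program can act only on the objects it has been handed.

```python
# Sketch of capability-style design: possessing the object reference
# *is* the authority; there is no identity check against a list.

class FileCapability:
    """Holding this object is the sole authority to read the file."""
    def __init__(self, contents):
        self._contents = contents

    def read(self):
        return self._contents

def spell_checker(read_cap):
    # The checker receives exactly the capability it needs. It has no
    # ambient authority to open other files, so a confused-deputy
    # mistake cannot leak anything beyond this one document.
    return len(read_cap.read().split())

doc = FileCapability("the quick brown fox")
print(spell_checker(doc))  # 4
```

Contrast this with an ACL design, where the spell checker would run under some identity and the system would check that identity against each file's list, the pattern that gives rise to the confused deputy problem.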
The Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
A good example of a current secure system is EROS.
But see also the article on secure operating systems. TrustedBSD is an example of an open-source project with the goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.
Other uses of the term "trusted"
The term "trusted" is often applied to operating systems that meet different levels of the Common Criteria, some of which are discussed above as techniques for creating secure systems.
A computer industry group led by Microsoft has used the term "trusted system" to include making computer hardware that could impose restrictions on how people use their computers. The project is called the Trusted Computing Group (TCG). See also Next-Generation Secure Computing Base.
Computer security is a highly complex field, and it is relatively immature. The ever-greater amounts of money dependent on electronic information make protecting it a growing industry and an active research topic.
Notable persons in computer security
For additional persons, see also: Category:Cryptographers.
See Category:Computer security for a complete list of all related articles.
- Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems, ISBN 0471389226
- Bruce Schneier: Secrets & Lies: Digital Security in a Networked World, ISBN 0471253111
- Paul A. Karger, Roger R. Schell: Thirty Years Later: Lessons from the Multics Security Evaluation, IBM white paper.
- Clifford Stoll: Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, ISBN 0743411463
- Stephen Haag, Maeve Cummings, Donald McCubbrey, Alain Pinsonneault, Richard Donovan: Management Information Systems for the Information Age, ISBN 0070911207