SECURITY FRAME AND PRINCIPLES

=**Computer security**=

Computer security is a branch of computer technology known as [|information security] as applied to [|computers] and networks. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. The term computer system security means the collective processes and mechanisms by which sensitive and valuable information and services are protected from publication, tampering or collapse by unauthorized activities, untrustworthy individuals, and unplanned events. The strategies and methodologies of computer security often differ from those of most other computer technologies because of computer security's somewhat elusive objective of preventing unwanted computer behavior instead of enabling wanted computer behavior.

=** Confidentiality **=

[|Confidentiality] is the term used to prevent the disclosure of information to unauthorized individuals or systems. For example, a [|credit card] [|transaction] on the Internet requires the [|credit card number] to be transmitted from the buyer to the merchant and from the merchant to a [|transaction processing] network. The system attempts to enforce confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred. Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer screen while you have confidential data displayed on it could be a breach of confidentiality. If a [|laptop computer] containing sensitive information about a company's employees is stolen or sold, it could result in a breach of confidentiality. Giving out confidential information over the telephone is a breach of confidentiality if the caller is not authorized to have the information. Confidentiality is necessary (but not sufficient) for maintaining the [|privacy] of the people whose personal information a system holds.
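As a concrete illustration of the card-number example above, here is a minimal sketch of confidentiality through encryption, assuming the third-party Python ``cryptography`` package and its Fernet recipe (the article itself does not name a library, and the card number shown is a placeholder test value, not real data):

```python
# Minimal confidentiality sketch: encrypt a (fictitious) card number so it is
# unreadable in transit or at rest without the key.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # secret key; protect it like the data itself
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"  # placeholder test number
token = cipher.encrypt(card_number)   # opaque ciphertext, safe to store or transmit

print(token)
print(cipher.decrypt(token))          # only a holder of the key recovers the value
```

Anyone who obtains the token without the key learns nothing useful; the key itself then becomes the secret that access controls must protect.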

=** Integrity **=

In information security, integrity means that data cannot be modified undetectably. This is not the same thing as [|referential integrity] in [|databases], although it can be viewed as a special case of consistency as understood in the classic ACID model of [|transaction processing]. Integrity is violated when a message is actively modified in transit. Many cipher systems provide message integrity alongside confidentiality as part of the encryption process; messages that have been tampered with in flight will then fail to decrypt or verify successfully.
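To make the tamper-detection idea concrete, here is a minimal sketch using an HMAC from Python's standard library; the shared secret and messages are invented, and an HMAC is only one of several mechanisms that could provide this property:

```python
# Integrity sketch: detect modification of a message in transit with an HMAC tag.
import hashlib
import hmac

secret = b"shared-secret-key"              # assumed to be exchanged securely out of band
message = b"transfer $100 to account 42"

tag = hmac.new(secret, message, hashlib.sha256).digest()   # sent alongside the message

# The receiver recomputes the tag over what it received and compares in constant time.
received_ok = hmac.compare_digest(tag, hmac.new(secret, message, hashlib.sha256).digest())
tampered = b"transfer $900 to account 42"
received_bad = hmac.compare_digest(tag, hmac.new(secret, tampered, hashlib.sha256).digest())

print(received_ok)    # True: message arrived unmodified
print(received_bad)   # False: modification is detected
```

If an attacker alters the message in transit, the recomputed tag no longer matches and the receiver rejects the message.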

=** Availability **=

For any information system to serve its purpose, the information must be [|available] when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. [|High availability] systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing [|denial-of-service attacks].

=** Authenticity **=

In computing, [|e-Business] and information security it is necessary to ensure that the data, transactions, communications or documents (electronic or physical) are genuine. Authenticity also requires validating that both parties involved are who they claim to be.

=** Non-repudiation **=

In law, [|non-repudiation] implies a party's intention to fulfill its obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. [|Electronic commerce] uses technology such as [|digital signatures] and encryption to establish authenticity and non-repudiation.
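A minimal sketch of how digital signatures support authenticity and non-repudiation, assuming the third-party Python ``cryptography`` package and Ed25519 (the choice of algorithm and the message are illustrative, not prescribed by the text):

```python
# Non-repudiation sketch: a digital signature binds a signer to a message.
# Assumes the third-party "cryptography" package; Ed25519 is one common choice.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # published to anyone who must verify

order = b"buy 100 shares of XYZ"             # invented example message
signature = private_key.sign(order)          # only the private-key holder can produce this

try:
    public_key.verify(signature, order)      # raises InvalidSignature on any mismatch
    print("signature valid: the signer cannot plausibly deny sending this order")
except InvalidSignature:
    print("signature invalid or message altered")
```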

=**DOLLS**=

**__S__implicity** - Though you want the network to appear complex enough from the outside to ward off intruders, internal policies and procedures should not be so difficult to manage that they prevent users from being productive in their daily tasks.

**__O__bscurity** - Concealing internal network activity from external view should be one objective of the security implementation. Included in obscurity should also be the avoidance of clear patterns of behavior -- even to the point of random time settings for synchronizing critical data across the domain.

**__L__ayering** - Building layers of defense to protect information security is critical. Layering includes the physical grounds as well.

**__L__imiting** - Allowing limited access to information through authentication, permissions, access rights, distribution of keys, or restricted access to the physical grounds reduces attacks.

**__D__iversity** - Applying different security techniques (e.g., technologies, hardware and software manufacturers, passwords, traffic filters) ensures that intrusion at one layer will not guarantee further access by the same method.


 * **DOLLS - Principles of Information Security**

 * **Diversity** - Different password types, different authentication methods, different software and operating systems
 * **Obscurity** - Hide information: operating system, application types and versions, internal addresses (NAT, PAT)
 * **Limiting** - Physical access, RBAC/IBAC, privileges: root, read, write, modify, delete
 * **Layering** - Multiple obstacles, firewalls
 * **Simplicity** - Usability, biometrics, management tools


 * **Characteristics of Secure Information**

//Confidentiality//
 * Authorization - login/password
 * Access control - physical limitations; Identity Based Access Control (IBAC); Role Based Access Control (RBAC)
 * Authentication (examples): Single Factor - username/password; Two Factor - ATM card/code; Multifactor - tokens, card, dongle, USB key, biometrics

//Integrity// Information is correct: entered correctly, processed correctly, stored correctly, not modified without authorization.

//Availability// What is needed is where it's needed, in the form that it's needed: redundant systems, backups, failsafe/failover protection.

 * **Three States of Information**

Stored, Processed, Transmitted

 * **Parts of Information Security**

Hardware, Software, Information, People, Procedures

 * **Security Threat Framework**

Asset, Threat, Threat Agent, Vulnerability, Exploit, Risk


 * **Increases in Security Lead to Decreases in Productivity**

[|Security by design] The technologies of computer security are based on [|logic]. As security is not necessarily the primary goal of most computer applications, designing a program with security in mind often imposes restrictions on that program's behavior. There are four approaches to [|security] in [|computing], and sometimes a combination of them is valid:

 * 1) Trust all the software to abide by a security policy but the software is not trustworthy (this is [|computer insecurity]).
 * 2) Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis, for example).
 * 3) Trust no software but enforce a security policy with [|mechanisms] that are not trustworthy (again, this is [|computer insecurity]).
 * 4) Trust no software but enforce a security policy with trustworthy hardware mechanisms.

Computers consist of software executing atop hardware, and a "computer system" is, by definition, a combination of hardware and software (and, arguably, firmware, should one choose to categorize it separately) that provides specific functionality, including either an explicitly expressed or, more often, implicitly carried along security policy. Indeed, in the Department of Defense Trusted Computer System Evaluation Criteria (the TCSEC, or Orange Book), archaic though that may be, the inclusion of specially designed hardware features, such as tagged architectures and (to particularly address "stack smashing" attacks of recent notoriety) restriction of executable text to specific memory regions and/or register groups, was a //sine qua non// of the higher evaluation classes, to wit, B2 and above. Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture, with thin layers of two and thick layers of four.

There are various strategies and techniques used to design security systems, but there are few, if any, effective strategies to enhance security after design. One technique enforces the [|principle of least privilege] to a great extent: an entity has only the privileges that are needed for its function. That way, even if an [|attacker] gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest. Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as [|automated theorem proving] to prove the correctness of crucial software subsystems. This enables a [|closed form solution] to security that works well when only a single well-characterized property can be isolated as critical and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of [|code review] and [|unit testing] represents a best-effort approach to make modules secure.

The design should use "[|defense in depth]", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle: cascading several weak mechanisms does not provide the safety of a single stronger mechanism.

Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see [|fail-safe] for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure. In addition, security should not be an all-or-nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full [|audit trails] should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, [|full disclosure] helps to ensure that when bugs are found the "[|window of vulnerability]" is kept as short as possible.
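As a concrete illustration of the principle of least privilege mentioned above, here is a minimal, Unix-only Python sketch of a common pattern: a server acquires the one privileged resource it needs (a low port) and then permanently drops to an unprivileged account before handling any untrusted input. The account name ``nobody`` and the port are illustrative:

```python
# Least-privilege sketch (Unix only): acquire the privileged resource first,
# then permanently drop root before doing anything else.
import os
import pwd
import socket

# Binding a port below 1024 requires root.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 80))

# Drop privileges: group first, then user; after setuid(), root is gone for good.
unprivileged = pwd.getpwnam("nobody")     # illustrative account name
os.setgid(unprivileged.pw_gid)
os.setuid(unprivileged.pw_uid)

# Any later compromise of the process is now limited to the "nobody" account.
sock.listen(5)
```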

[|Security architecture] Security architecture can be defined as the design artifacts that describe how the security controls (security countermeasures) are positioned and how they relate to the overall information technology architecture. These controls serve to maintain the system's quality attributes, among them [|confidentiality], [|integrity], [|availability], [|accountability] and [|assurance].[1]

Hardware based or assisted computer security offers an alternative to software-only computer security. Devices such as [|dongles] may be considered more secure due to the physical access required in order to be compromised.

[|Secure operating systems] One use of the term computer security refers to technology to implement a secure [|operating system]. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure operating systems are based on [|operating system kernel] technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a [|Computer security policy] is the [|Bell-LaPadula model]. The strategy is based on a coupling of special [|microprocessor] hardware features, often involving the [|memory management unit], to a special, correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security.

The design methodology to produce such secure systems is precise, deterministic and logical. Systems designed with such methodology represent the state of the art of computer security, although products using such security are not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information, military secrets, and the data of international financial institutions. These are very powerful security tools, and very few secure operating systems have been certified at the highest level ([|Orange Book] A1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The [|Common Criteria] quantifies security strength of products in terms of two components, security functionality and assurance level (such as EAL levels), and these are specified in a [|Protection Profile] for requirements and a [|Security Target] for product descriptions. None of these ultra-high-assurance secure general purpose operating systems have been produced for decades or certified under the Common Criteria.

In USA parlance, the term High Assurance usually suggests the system has the right security functions that are implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can protect less valuable information, such as income tax information. Secure operating systems designed to meet medium robustness levels of security functionality and assurance have seen wider use within both government and commercial markets. Medium-robust systems may provide the same security functions as high-assurance secure operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower levels mean we can be less certain that the security functions are implemented flawlessly, and therefore that they are less dependable. These systems are found in use on web servers, guards, database servers, and management hosts, and are used not only to protect the data stored on these systems but also to provide a high level of protection for network connections and routing services.

[|Secure coding] If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, capable of protecting application code from malicious subversion, and capable of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall in a 'low security' category because they rely on features not supported by secure operating systems (like portability, et al.). In low security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.

In commercial environments, the majority of software subversion [|vulnerabilities] result from a few known kinds of coding defects. Common software defects include [|buffer overflows], [|format string vulnerabilities], [|integer overflow], and [|code/command injection]. All of the foregoing are specific instances of a general class of attacks that exploit situations in which putative "data" actually contains implicit or explicit executable instructions. Some common languages such as C and C++ are vulnerable to all of these defects (see [|Seacord, //"Secure Coding in C and C++"//]). Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion. Recently another bad coding practice has come under scrutiny: [|dangling pointers]. The first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered to be academic and not practically exploitable.[2]

Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically achievable, insofar as the variety of mechanisms is too wide and the manners in which they can be exploited are too variegated. It is interesting to note, however, that such vulnerabilities often arise from archaic philosophies in which computers were assumed to be narrowly disseminated entities used by a chosen few, all of whom were likely highly educated, solidly trained academics with naught but the goodness of mankind in mind. Thus, it was considered quite harmless if, for (fictitious) example, a FORMAT string in a FORTRAN program could contain the J format specifier to mean "shut down system after printing." After all, who would use such a feature but a well-intentioned system programmer? It was simply beyond conception that software could be deployed in a destructive fashion.

It is worth noting that, in some languages, the distinction between code (ideally, read-only) and data (generally read/write) is blurred. In LISP, particularly, there is no distinction whatsoever between code and data, both taking the same form: an S-expression can be code, or data, or both, and the "user" of a LISP program who manages to insert an executable LAMBDA segment into putative "data" can achieve arbitrarily general and dangerous functionality. Even something as "modern" as Perl offers the eval function, which enables one to generate Perl code and submit it to the interpreter, disguised as string data.
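The code/data confusion described above can be illustrated in Python as well: evaluating untrusted "data" as code is an injection defect, while parsing it strictly as data is not. This sketch is illustrative only, and the hostile string is invented:

```python
# Code/data confusion sketch: treating untrusted "data" as code is an injection
# vulnerability; parsing it strictly as data is not.
import ast

user_supplied = "__import__('os').system('echo pwned')"   # hostile input posing as data

# DANGEROUS: eval() would execute whatever expression the string contains.
# eval(user_supplied)

# Safer: ast.literal_eval() accepts only plain literals (numbers, strings, lists, ...).
try:
    value = ast.literal_eval(user_supplied)
    print("parsed as data:", value)
except (ValueError, SyntaxError):
    print("rejected: input is not a plain data literal")
```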

[|Access control list] and [|Capability (computers)] Within computer systems, two security models capable of enforcing privilege separation are [|access control lists] (ACLs) and [|capability-based security]. The semantics of ACLs have been proven to be insecure in many situations, e.g., the [|confused deputy problem]. It has also been shown that the promise of ACLs of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.

Capabilities have been mostly restricted to research [|operating systems], and commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the [|E language]. First the Plessey [|System 250] and then the Cambridge [|CAP computer] demonstrated the use of capabilities, both in hardware and software, in the 1970s. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware. The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most security comes from [|operating systems] where [|security] is not an add-on.
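A toy sketch of the ACL model discussed above, just to make the semantics concrete; real systems keep ACLs in the filesystem or kernel rather than in application dictionaries, and the subjects, objects and permissions here are invented:

```python
# Toy ACL sketch: each object carries a list of (subject, permission) entries,
# and every access is checked against that list, defaulting to deny.
acl = {
    "payroll.xlsx": {("alice", "read"), ("alice", "write"), ("bob", "read")},
    "audit.log":    {("auditor", "read")},
}

def check(subject: str, permission: str, obj: str) -> bool:
    """Return True only if the object's ACL explicitly grants the permission."""
    return (subject, permission) in acl.get(obj, set())

print(check("bob", "read",  "payroll.xlsx"))   # True
print(check("bob", "write", "payroll.xlsx"))   # False: not granted, so denied
```

A capability system inverts this arrangement: instead of the object carrying a list of who may touch it, the subject holds unforgeable tokens, each of which grants a specific right.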

Computer security is critical in almost any technology-driven industry which operates on computer systems. Computer security can also be referred to as computer safety. Addressing the countless vulnerabilities of computer-based systems is an integral part of maintaining an operational industry.

Cybersecurity Act of 2010
On April 1, 2009, Senator [|Jay Rockefeller] (D-WV) introduced the "Cybersecurity Act of 2009 - S. 773" ([|full text]) in the [|Senate]; the bill, co-written with Senators [|Evan Bayh] (D-IN), [|Barbara Mikulski] (D-MD), [|Bill Nelson] (D-FL), and [|Olympia Snowe] (R-ME), was referred to the [|Committee on Commerce, Science, and Transportation], which approved a revised version of the same bill (the "Cybersecurity Act of 2010") on March 24, 2010.[8] The bill seeks to increase collaboration between the public and the private sector on cybersecurity issues, especially with those private entities that own infrastructures that are critical to national security interests (the bill quotes [|John Brennan], the Assistant to the President for Homeland Security and Counterterrorism: "our nation’s security and economic prosperity depend on the security, stability, and integrity of communications and information infrastructure that are largely privately-owned and globally-operated", and talks about the country's response to a "cyber-[|Katrina]"[9]), to increase public awareness on cybersecurity issues, and to foster and fund cybersecurity research. Some of the most controversial parts of the bill include Paragraph 315, which grants the [|President] the right to "order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network."[9] The [|Electronic Frontier Foundation], an international [|non-profit] [|digital rights] advocacy and legal organization based in the [|United States], characterized the bill as promoting a "potentially dangerous approach that favors the dramatic over the sober response".[10]

International Cybercrime Reporting and Cooperation Act
On March 25, 2010, Representative [|Yvette Clarke] (D-NY) introduced the "International Cybercrime Reporting and Cooperation Act - H.R.4962" ([|full text]) in the [|House of Representatives]; the bill, co-sponsored by seven other representatives (only one of whom is a [|Republican]), was referred to three [|House committees].[11] The bill seeks to make sure that the administration keeps [|Congress] informed on information infrastructure, [|cybercrime], and end-user protection worldwide. It also "directs the President to give priority for assistance to improve legal, judicial, and enforcement capabilities with respect to cybercrime to countries with low information and communications technology levels of development or utilization in their critical infrastructure, telecommunications systems, and financial industries"[11] as well as to develop an action plan and an annual compliance assessment for countries of "cyber concern".[11]

Protecting Cyberspace as a National Asset Act of 2010 ("//Kill switch bill//")
On June 19, 2010, [|United States Senator] [|Joe Lieberman] (I-CT) introduced a bill called "Protecting Cyberspace as a National Asset Act of 2010 - S.3480" ([|full text in pdf]), which he co-wrote with Senator [|Susan Collins] (R-ME) and Senator [|Thomas Carper] (D-DE). If signed into law, this controversial bill, which the American media dubbed the "//[|Kill switch bill]//", would grant the [|President] emergency powers over the Internet. However, all three co-authors of the bill issued a statement claiming that instead, the bill "[narrowed] existing broad Presidential authority to take over telecommunications networks".

The following terms used in engineering secure systems are explained below.
 * [|Cryptographic] techniques involve transforming information, scrambling it so it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.
 * [|Authentication] techniques can be used to ensure that communication end-points are who they say they are.
 * [|Automated theorem proving] and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
 * [|Capability] and [|access control list] techniques can be used to ensure privilege separation and mandatory access control.
 * [|Chain of trust] techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
 * [|Cryptographic] techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
 * [|Firewalls] can provide some protection from online intrusion.
 * A [|microkernel] is a carefully crafted, deliberately small corpus of software that underlies the operating system //per se// and is used solely to provide very low-level, very precisely defined primitives upon which an operating system can be developed. A simple example with considerable didactic value is the early '90s GEMSOS (Gemini Computers), which provided extremely low-level primitives, such as "segment" management, atop which an operating system could be built. The theory (in the case of "segments") was that—rather than have the operating system itself worry about mandatory access separation by means of military-style labeling—it is safer if a low-level, independently scrutinized module can be charged **solely** with the management of individually labeled segments, be they memory "segments" or file system "segments" or executable text "segments." If software below the visibility of the operating system is (as in this case) charged with labeling, there is no theoretically viable means for a clever hacker to subvert the labeling scheme, since the operating system //per se// does **not** provide mechanisms for interfering with labeling: the operating system is, essentially, a client (an "application," arguably) atop the microkernel and, as such, subject to its restrictions.
 * Endpoint Security software helps networks to prevent data theft and virus infection through portable storage devices, such as USB drives.
 * Access [|authorization] restricts access to a computer to group of users through the use of [|authentication] systems. These systems can protect either the whole computer – such as through an interactive [|logon] screen – or individual services, such as an [|FTP] server. There are many methods for identifying and authenticating users, such as [|passwords], [|identification cards], and, more recently, [|smart cards] and [|biometric] systems.
 * [|Anti-virus software] consists of computer programs that attempt to identify, thwart and eliminate [|computer viruses] and other malicious software ([|malware]).
 * [|Applications] with known security flaws should not be run. Either leave such an application turned off until it can be patched or otherwise fixed, or delete it and replace it with another application. Publicly known flaws are the main entry used by [|worms] to automatically break into a system and then spread to other systems connected to it. The security website [|Secunia] provides a search tool for unpatched known flaws in popular products.
 * [|Backups] are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, [|CD-Rs], [|CD-RWs], and [|tapes]. Suggested locations for backups are a fireproof, waterproof, and heat proof safe, or in a separate, offsite location than that in which the original files are contained. Some individuals and companies also keep their backups in [|safe deposit boxes] inside [|bank vaults]. There is also a fourth option, which involves using one of the [|file hosting services] that backs up files over the [|Internet] for both business and individuals.
 * Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located. The building can be on fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location, in case of such kind of disaster. Further, it is recommended that the alternate location be placed where the same disaster would not affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster that affected the primary site include having had a primary site in [|World Trade Center] I and the recovery site in [|7 World Trade Center], both of which were destroyed in the [|9/11] attack, and having one's primary site and recovery site in the same coastal region, which leads to both being vulnerable to hurricane damage (e.g. primary site in New Orleans and recovery site in [|Jefferson Parish], both of which were hit by [|Hurricane Katrina] in 2005). The backup media should be moved between the geographic sites in a secure manner, in order to prevent them from being stolen.
 * [|Encryption] is used to protect the message from the eyes of others. [|Cryptographically] secure [|ciphers] are designed to make any practical attempt of [|breaking] infeasible. [|Symmetric-key] ciphers are suitable for bulk encryption using [|shared keys], and [|public-key encryption] using [|digital certificates] can provide a practical solution for the problem of securely communicating when no key is shared in advance.
 * [|Firewalls] are systems which help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic which can pass through them, based on a set of system administrator defined rules.
 * [|Honey pots] are computers that are either intentionally or unintentionally left vulnerable to attack by crackers. They can be used to catch crackers or fix vulnerabilities.
 * [|Intrusion-detection systems] can scan a network for people who are on the network but should not be there, or who are doing things that they should not be doing, for example trying many passwords to gain access to the network.
 * [|Pinging]: the ping application can be used by potential crackers to find out whether an IP address is reachable. If a cracker finds a computer, they can try a port scan to detect and attack services on that computer.
 * [|Social engineering] awareness: keeping employees aware of the dangers of social engineering and/or having a policy in place to prevent social engineering can reduce successful breaches of the network and servers.
 * [|File Integrity Monitors] are tools used to detect changes in the integrity of systems and files (a minimal sketch follows this list).
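A minimal sketch of the file-integrity-monitoring idea from the last item: record cryptographic hashes of watched files once on a known-good system, then re-hash later and report anything that changed. The watched paths are illustrative, and real monitors track far more metadata:

```python
# File integrity monitor sketch: detect changes by comparing stored SHA-256 hashes.
import hashlib
import json
from pathlib import Path

WATCHED = [Path("/etc/passwd"), Path("/etc/hosts")]   # illustrative paths
BASELINE = Path("baseline.json")

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def take_baseline() -> None:
    """Record the current hash of every watched file (run once on a clean system)."""
    BASELINE.write_text(json.dumps({str(p): digest(p) for p in WATCHED}))

def check_integrity() -> None:
    """Re-hash each watched file and report any mismatch against the baseline."""
    baseline = json.loads(BASELINE.read_text())
    for path_str, old in baseline.items():
        if digest(Path(path_str)) != old:
            print(f"ALERT: {path_str} has been modified")

# take_baseline()    # run once on a known-good system
# check_integrity()  # run periodically afterwards
```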

The terms information security, [|computer security] and [|information assurance] are frequently, and incorrectly, used interchangeably. These fields are often interrelated and share the common goals of protecting the [|confidentiality], [|integrity] and [|availability] of information; however, there are some subtle differences between them. These differences lie primarily in the approach to the subject, the methodologies used, and the areas of concentration. Information security is concerned with the confidentiality, integrity and availability of [|data] regardless of the form the data may take: electronic, print, or other forms. Computer security can focus on ensuring the availability and correct operation of a [|computer system] without concern for the information stored or processed by the computer.

[|Governments], [|military], [|corporations], [|financial institutions], [|hospitals], and private [|businesses] amass a great deal of confidential information about their employees, customers, products, research, and financial status. Most of this information is now collected, processed and stored on electronic [|computers] and transmitted across [|networks] to other computers. Should confidential information about a business's customers, finances, or new product line fall into the hands of a competitor, such a breach of security could lead to lost business, lawsuits or even [|bankruptcy] of the business. Protecting confidential information is a business requirement, and in many cases also an ethical and legal requirement. For the individual, information security has a significant effect on [|privacy], which is viewed very differently in different [|cultures].

The field of information security has grown and evolved significantly in recent years. There are many ways of gaining entry into the field as a career. It offers many areas for specialization, including securing networks and allied [|infrastructure], securing [|applications] and [|databases], [|security testing], information systems [|auditing], [|business continuity planning] and [|digital forensics] science.
 * **Information security** means protecting information and [|information systems] from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction.[1]

Risk management
A comprehensive treatment of the topic of [|risk management] is beyond the scope of this article. However, a useful definition of risk management will be provided, as well as some basic terminology and a commonly used process for risk management. The CISA Review Manual 2006 provides the following definition of risk management: //"Risk management is the process of identifying [|vulnerabilities] and [|threats] to the information resources used by an organization in achieving business objectives, and deciding what [|countermeasures], if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."//[2]

There are two things in this definition that may need some clarification. First, the //process// of risk management is an ongoing, iterative [|process]: it must be repeated indefinitely, because the business environment is constantly changing and new [|threats] and [|vulnerabilities] emerge every day. Second, the choice of [|countermeasures] ([|controls]) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected.

The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It is not possible to identify all risks, nor is it possible to eliminate all risk; the remaining risk is called //residual risk//.

A [|risk assessment] is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, a [|quantitative] analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human.[3] The [|ISO/IEC 27002:2005] Code of practice for information security management recommends that the areas listed below be examined during a risk assessment. In broad terms, the risk management process consists of the numbered steps that follow that list; a small worked example of quantitative risk scoring follows the process steps.

For any given risk, executive management can choose to **accept the risk** based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Or, leadership may choose to **mitigate the risk** by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be **transferred** to another business by buying insurance or outsourcing to another business.[4] The reality of some risks may be disputed; in such cases leadership may choose to **deny the risk**, which is itself a potential risk.
 * **Risk** is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A **vulnerability** is a weakness that could be used to endanger or cause harm to an informational asset. A **threat** is anything (man made or [|act of nature]) that has the potential to cause harm.
 * [|security policy],
 * [|organization] of information security,
 * [|asset management],
 * [|human resources] security,
 * physical and [|environmental security],
 * [|communications] and operations management,
 * [|access control],
 * information systems acquisition, development and maintenance,
 * information security [|incident management],
 * business continuity management, and
 * regulatory compliance.
 * 1) Identification of assets and estimating their value. Include: people, buildings, hardware, software, data (electronic, print, other), supplies.
 * 2) Conduct a threat assessment. Include: Acts of nature, [|acts of war], accidents, malicious acts originating from inside or outside the organization.
 * 3) Conduct a [|vulnerability assessment], and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, [|physical security], [|quality control], technical security.
 * 4) Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis.
 * 5) Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset.
 * 6) Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost effective protection without discernible loss of productivity.
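As the worked example promised above, here is a small sketch of one common quantitative convention, annualized loss expectancy (ALE = single loss expectancy × annualized rate of occurrence). The convention is standard risk-analysis practice rather than something this text prescribes, and all figures are invented:

```python
# Quantitative risk sketch: ALE = SLE x ARO, compared against the cost of a control.
def ale(sle: float, aro: float) -> float:
    """Annualized loss expectancy: cost of one incident times expected incidents per year."""
    return sle * aro

laptop_theft = ale(sle=5_000.0, aro=2.0)   # e.g. two stolen laptops per year (invented)
print(f"Expected annual loss: ${laptop_theft:,.0f}")

control_cost = 3_000.0                     # e.g. disk encryption plus asset tracking (invented)
print("control is cost-effective" if control_cost < laptop_theft
      else "consider accepting or transferring the risk")
```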

Controls
Main article: [|security controls]

When Management chooses to mitigate a risk, they will do so by implementing one or more of three different types of controls.

Administrative
Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day to day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. Some industry sectors have policies, procedures, standards and guidelines that must be followed - the Payment Card Industry (PCI) Data Security Standard required by [|Visa] and [|Master Card] is such an example. Other examples of administrative controls include the corporate security policy, [|password policy], hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls. Administrative controls are of paramount importance.

Logical
Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. For example: passwords, network and host based firewalls, network [|intrusion detection] systems, [|access control lists], and data encryption are logical controls. An important logical control that is frequently overlooked is the **principle of least privilege**. The [|principle of least privilege] requires that an individual, program or system process is not granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read Email and surf the Web. Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, or they are promoted to a new position, or they transfer to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges which may no longer be necessary or appropriate.

Physical
Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities. Examples include doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, and cable locks. Separating the network and work place into functional areas is also a physical control. An important physical control that is frequently overlooked is the **separation of duties**. Separation of duties ensures that an individual cannot complete a critical task alone. For example, an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the [|server administrator] or the [|database administrator]; these roles and responsibilities must be separated from one another.[5]
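The reimbursement example can be reduced to a simple, enforceable rule: whoever submits a request may not also approve it. A toy sketch, with invented names and no persistence or workflow details:

```python
# Separation-of-duties sketch: the approver of a payment must not be the requester.
class Reimbursement:
    def __init__(self, requester: str, amount: float):
        self.requester = requester
        self.amount = amount
        self.approved_by = None

    def approve(self, approver: str) -> None:
        """Reject approval by the same person who submitted the request."""
        if approver == self.requester:
            raise PermissionError("separation of duties: requester cannot approve")
        self.approved_by = approver

claim = Reimbursement("alice", 250.0)
claim.approve("bob")        # fine: a second person signs off
# claim.approve("alice")    # would raise PermissionError
```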

Security classification for information
An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal, and so not all information requires the same degree of protection. This requires information to be assigned a [|security classification]. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification. Some factors that influence which classification should be assigned include how much value the information has to the organization, how old the information is, and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The type of information security classification labels selected and used will depend on the nature of the organisation, with examples being:
 * In the business sector, labels such as: **Public, Sensitive, Private, Confidential**.
 * In the government sector, labels such as: **Unclassified**, **Sensitive But Unclassified**, **Restricted**, **Confidential**, **Secret**, **Top Secret** and their non-English equivalents.
 * In cross-sectoral formations, the [|Traffic Light Protocol], which consists of: **White, Green, Amber** and **Red**.

All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification a particular information asset has been assigned should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place.

Access control
Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected: the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication.

There are three different types of information that can be used for authentication: **something you know, something you have, or something you are.** Examples of //something you know// include such things as a PIN, a password, or your mother's maiden name. Examples of //something you have// include a driver's license or a magnetic [|swipe card]. //Something you are// refers to biometrics. Examples of biometrics include palm prints, finger prints, voice prints and retina (eye) scans. Strong authentication requires providing information from two of the three different types of authentication information; for example, something you know plus something you have. This is called two-factor authentication. On computer systems in use today, the username is the most common form of identification and the password is the most common form of authentication. Usernames and passwords have served their purpose, but they are no longer adequate and are slowly being replaced with more sophisticated authentication mechanisms.

After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called **authorization**. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms; some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three. The **non-discretionary** approach consolidates all access control under a centralized administration; access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform (a small sketch of this role-based approach follows the definitions below). The **discretionary** approach gives the creator or owner of the information resource the ability to control access to those resources. In the **mandatory access control** approach, access is granted or denied based upon the security classification assigned to the information resource.

Examples of common access control mechanisms in use today include [|Role-based] access control, available in many advanced database management systems; simple [|file permissions] provided in the UNIX and Windows operating systems; [|Group Policy Objects] provided in Windows network systems; [|Kerberos], [|RADIUS], and [|TACACS]; and the simple access lists used in many [|firewalls] and [|routers]. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held **accountable** for their actions. All failed and successful authentication attempts must be logged, and all access to information must leave some type of audit trail.
 * **Identification** is an assertion of who someone is or what something is. If a person makes the statement //"Hello, my name is John Doe."// they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe.
 * **Authentication** is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be.
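As the sketch promised above, here is a toy illustration of the non-discretionary (role-based) approach: permissions attach to roles, and a user is authorized only through role membership. The roles, permissions and usernames are invented:

```python
# Role-based access control sketch: access decisions go through roles, never
# directly from user to permission.
ROLE_PERMISSIONS = {
    "teller":  {"view_account", "post_deposit"},
    "auditor": {"view_account", "view_audit_trail"},
}
USER_ROLES = {"john_doe": {"teller"}, "jane_roe": {"auditor"}}

def authorized(user: str, permission: str) -> bool:
    """A user is authorized only if one of their roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(authorized("john_doe", "post_deposit"))      # True: granted via the teller role
print(authorized("john_doe", "view_audit_trail"))  # False: wrong role, so denied
```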

Cryptography
Information security uses [|cryptography] to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called [|encryption]. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the [|cryptographic key], through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the [|information] is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, [|non-repudiation], and encrypted network communications. Older, less secure applications such as telnet and FTP are slowly being replaced with more secure applications such as [|ssh] that use encrypted network communications. Wireless communications can be encrypted using protocols such as [|WPA/WPA2] or the older (and less secure) [|WEP]. Wired communications (such as [|ITU-T] [|G.hn]) are secured using [|AES] for encryption and [|X.1035] for authentication and key exchange. Software applications such as [|GnuPG] or [|PGP] can be used to encrypt data files and email.

Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted algorithms and implementations that have undergone rigorous peer review by independent experts in cryptography. The [|length and strength] of the encryption key is also an important consideration: a key that is [|weak] or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. [|PKI] solutions address many of the problems that surround [|key management].


 * **DEFENSE IN DEPTH**

Information security must protect information throughout its life span, from the initial creation of the information on through to its final disposal. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on and overlapping of security measures is called defense in depth. The strength of any system is no greater than its weakest link. Using a defense-in-depth strategy, should one defensive measure fail, there are other defensive measures in place that continue to provide protection.

Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense-in-depth strategy. With this approach, defense in depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people as the outer layer of the onion, and [|network security], host-based security and [|application security] forming the inner layers of the onion. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense-in-depth strategy.

There are three models used in the practice of IA to define assurance requirements and assist in covering all necessary aspects or attributes. The first is the classic [|information security] model, also called the [|CIA Triad], which addresses three attributes of information and information systems: [|confidentiality], integrity, and [|availability]. This C-I-A model is extremely useful for teaching introductory and basic concepts of information security and assurance; the initials are an easy mnemonic to remember, and, when properly understood, can prompt systems designers and users to address the most pressing aspects of assurance. The next most widely known model is the Five Pillars of IA model, promulgated by the U.S. Department of Defense (DoD) in a variety of publications, beginning with the [|National Information Assurance Glossary], [|Committee on National Security Systems] Instruction [|CNSSI-4009]. Here is the definition from that publication: "Measures that protect and defend information and information systems by ensuring their [|availability], integrity, [|authentication], [|confidentiality], and [|non-repudiation]. These measures include providing for restoration of information systems by incorporating protection, detection, and reaction capabilities." The Five Pillars model is sometimes criticized because authentication and non-repudiation are not attributes of information or systems; rather, they are procedures or methods useful to assure the integrity and authenticity of information, and to protect the confidentiality of that same information. The third IA model, less widely known but considered by many IA practitioners and professionals to be the most complete and accurate of the three, is the [|Parkerian Hexad], first introduced by [|Donn B. Parker] in 1998. Like the Five Pillars, Parker's hexad begins with the C-I-A model but builds it out by adding three more attributes: authenticity, utility, and possession (or control). It is significant to point out that the concept or attribute of authenticity, as described by Parker, is not identical to the pillar of authentication as described by the U.S. DoD.
 * **Information assurance (IA)** is the practice of managing risks related to the use, processing, storage, and transmission of information or data and the systems and processes used for those purposes. While focused dominantly on information in digital form, the full range of IA encompasses not only digital but also analog or physical form. Information assurance as a field has grown from the practice of [|information security] which in turn grew out of practices and procedures of [|computer security].