Security Principles
Secure Programming
Lecture 2
Key terms
- Bug
- flaw in a program that results in unexpected behavior
- Vulnerability
- a bug with security-relevant consequences
- Exploit
- code that leverages a vulnerability to compromise the security of a system
System's security
Expressed in terms of a security policy
List of actions that are permitted and behaviors that should be forbidden
Most often informal; in certain domains (e.g., credit card processing) explicitly expressed
What about formal policies?
Security expectations
Security policies are most often concerned with:
- Confidentiality
- Integrity
- Availability
Risk
risk = f(threat, vulnerability, likelihood, impact)
(An entire course could be taught on risk!
If you want to know more, NIST's Guide for Conducting Risk Assessments (SP 800-30) is a good starting point)
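To make the formula concrete, here is a toy scoring sketch (not from the lecture; the 1–5 scales and the multiplicative combination are made-up assumptions — real methodologies such as NIST SP 800-30 use structured qualitative scales):

```python
# Toy illustration of risk = f(threat, vulnerability, likelihood, impact).
# The 1-5 ratings and the multiplicative combination are made up for
# illustration; they are not a real risk-assessment methodology.

def risk_score(threat, vulnerability, likelihood, impact):
    """Combine four 1-5 ratings into a single relative score (1-625)."""
    for factor in (threat, vulnerability, likelihood, impact):
        assert 1 <= factor <= 5, "each factor is rated on a 1-5 scale"
    return threat * vulnerability * likelihood * impact

# An internet-facing service with a known, easily exploited flaw:
print(risk_score(threat=5, vulnerability=4, likelihood=4, impact=3))  # 240
```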
Types of vulnerabilities
- Design
- flaw in the design
- Implementation
- error in how the system is written
- Operational
- issue in how the system is used
Taxonomies of vulnerabilities (read more)
Why do we classify vulnerabilities?
- cost of fixing vulnerabilities
- predicting vulnerabilities
Why so many vulnerabilities?
Complexity
- Code/design is too complex to understand all its implications, relationships, and assumptions
- Maybe it's not sufficiently documented
Dijkstra, Programming is hard
Why so many vulnerabilities?
(Lack of) Education
- Developers may not know about security issues
- That's one of the reasons why you're here!
Why so many vulnerabilities?
Extensibility
- What your system is supposed to do changes over time
- The assumptions about the way your system is going to be used change over time
Why so many vulnerabilities?
(Lack of) time
- Your product launches in 2 weeks: you can fix the vulnerabilities and ship late, or ignore them and ship on time…
Economy of mechanism
Keep the design as simple and small as possible
- simple != small
- interactions are hard (you need to check how each component interacts with the others)
Security audits are necessary, and they only succeed on small, simple systems
Fail-safe defaults
Base security decisions on permission rather than exclusion
Deny as default (good)
- grant access only on explicit permission
- mistakes lead to false negatives (access denied to a legitimate user): quickly reported
- denial of service?
Sometimes called “whitelisting” (input validation)
Fail-safe defaults
Base security decisions on permission rather than exclusion
Allow as default (bad)
- grant access when not explicitly prohibited
- mistakes lead to false positives (access granted to a malicious user): they don't tend to report…
- hard to consider all corner cases/future cases
- wrong mindset
- ease of use
Sometimes called “blacklisting” (input validation)
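A minimal sketch of the two mindsets, assuming a hypothetical username check (the pattern and the character denylist are illustrative only):

```python
import re

# Deny-as-default ("whitelisting"): accept only inputs matching an explicit
# pattern. A mistake here denies a legitimate user, who will complain --
# a recoverable failure.
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")

def valid_username_allowlist(name):
    return USERNAME_RE.fullmatch(name) is not None

# Allow-as-default ("blacklisting"): reject only known-bad inputs. A missed
# corner case silently admits an attacker -- and attackers don't file bugs.
BAD_CHARS = set("';-")  # inevitably incomplete

def valid_username_denylist(name):
    return not any(c in BAD_CHARS for c in name)

print(valid_username_allowlist("alice_01"))       # True
print(valid_username_allowlist("alice\x00admin")) # False: rejected by default
print(valid_username_denylist("alice\x00admin"))  # True: NUL slips through
```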
Complete mediation
Every access to every object is checked for permission
“every access”: caching of permission check results?
“check for permission”: authentication + authorization
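A minimal sketch of a reference monitor that mediates every access (all names are illustrative, not from the lecture); because nothing is cached, revoking a permission takes effect on the very next access:

```python
# Sketch of complete mediation: every read goes through a single check,
# and no check result is ever cached.

class GuardedStore:
    def __init__(self):
        self._objects = {}   # obj name -> data
        self._acl = {}       # (user, obj) -> allowed

    def put(self, obj, data): self._objects[obj] = data
    def grant(self, user, obj): self._acl[(user, obj)] = True
    def revoke(self, user, obj): self._acl.pop((user, obj), None)

    def read(self, user, obj):
        # The check happens on *every* access -- complete mediation --
        # and absence of an entry means deny: fail-safe default.
        if not self._acl.get((user, obj), False):
            raise PermissionError(f"{user} may not read {obj}")
        return self._objects[obj]

store = GuardedStore()
store.put("report", b"secret")
store.grant("alice", "report")
print(store.read("alice", "report"))  # b'secret'
store.revoke("alice", "report")
# store.read("alice", "report") now raises PermissionError immediately
```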
Open design
Security of the system must not depend on the secrecy of its design (known since at least 1853, when locksmith A.C. Hobbs defended publishing lock weaknesses)
Advantages of openness:
- enables the review of mechanisms by other experts
- establishes trust
- forces correct mindset/psychology on developers
Possible to keep secrecy in widely distributed systems?
What about the price of attacks? Risk of being detected?
Does being open automatically make you secure?
Separation of privilege
Make access depend on more than one condition:
- for example, two keys are needed to access a resource
- privileges can be separated
- more than one attack is needed to compromise the system
Examples:
- Something you know, something you have, something you are
- two-factor authentication at banks and Google
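As a sketch of the second factor, here is a minimal TOTP implementation (RFC 6238, the scheme used by Google Authenticator) using only the Python standard library; access requires both conditions, the password and the current code:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

def login(password_ok, submitted_code, secret_b32):
    # Both factors must hold: something you know AND something you have.
    return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32))
```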
Separation of privilege
Related concept: compartmentalization
- divide system in different, isolated parts
- minimize privileges of each
- don't implement an all-or-nothing model
- → minimizes possible damage
Sandbox:
- Virtual machines
- Java sandbox (bytecode verifier, class loader, security manager)
- Janus (research project)
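A toy compartmentalization sketch (POSIX-only; the limits are arbitrary assumptions): run a risky step in a separate process under tight resource limits, so a bug there cannot take down the main program. Real sandboxes such as VMs or the Java security manager enforce far stronger isolation:

```python
import resource, subprocess, sys

# Run an untrusted parsing step in its own compartment: a child process
# with its own address space and strict CPU/memory limits.

def limit_resources():
    # Applied in the child just before exec (POSIX only).
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                 # 2 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 << 20, 256 << 20))  # 256 MiB

result = subprocess.run(
    [sys.executable, "-c", "print(len(input()))"],  # stand-in for the parser
    input="untrusted data\n", capture_output=True, text=True,
    preexec_fn=limit_resources, timeout=5,
)
print(result.stdout.strip())  # 14 -- computed in the isolated compartment
```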
Least privilege
Operate using only the smallest set of privileges necessary
- minimize damage
- minimize interaction between privileged programs
Interesting cases:
- setuid root programs (UNIX)
- database access
Least privilege
Corollaries:
- minimize the time during which a privilege can be used (drop privileges permanently, as soon as possible)
- minimize the time during which a privilege is active (temporarily drop privileges when not needed)
- minimize components that are granted privilege
- minimize resources that are available to privileged program (e.g., chroot, jail mechanisms, quota)
Example: OpenSSH
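A sketch of the privilege-dropping pattern in Python (the target user and the chroot path are assumptions; OpenSSH uses a similar pattern for its unprivileged network-facing child):

```python
import os, pwd

def drop_privileges(username="nobody"):
    """Permanently give up root after the one privileged operation is done."""
    user = pwd.getpwnam(username)
    os.chroot("/var/empty")   # corollary: minimize visible resources
    os.chdir("/")             # (path is an assumption; must exist and be empty)
    os.setgroups([])          # shed supplementary groups
    os.setgid(user.pw_gid)    # group first: impossible once the uid drops
    os.setuid(user.pw_uid)    # permanent: no way back to root

# e.g., bind a socket to port 80 while still root, then:
# drop_privileges()
# ... handle untrusted network input with no privileges left to abuse ...
```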
Least privilege
Implementation:
- split application into smaller protection domains (compartments)
- assign the right privileges to each compartment
- devise channels between compartments
- ensure that compartments remain isolated, except across the intended channels
- make it easy to audit
Sounds complicated, doesn't it?
How do you know the set of privileges/capabilities that are required? Technique: start with none and add them as needed
Least common mechanism
Minimize the number of mechanisms shared between, and relied on by, multiple users
- reduce potentially dangerous information flow
- reduce unintended interactions
- minimize consequences of vulnerabilities found in a mechanism
Software homogeneity and its consequences
Psychological acceptability
User interface must be easy to use:
- ease of applying the mechanism routinely and correctly
- password change policies and sticky notes
- firewall policies and bring-your-own-modems
User interface must conform to the user's mental model:
- reduce likelihood of mistakes
Circumvention work factor
Security = f(cost of circumvention)
- resources available to adversary?
- cost of using those resources?
- it makes sense to focus on increasing the cost of exploiting bugs, rather than on discovering new ones
Example: password breaking or secret key brute-forcing
“Security is economics”
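Back-of-the-envelope work-factor arithmetic (the 10 GH/s guess rate is a made-up assumption, roughly a few GPUs against a fast hash):

```python
# How long does brute-forcing a password take at an assumed guess rate?
RATE = 10e9  # guesses per second (illustrative assumption)

for length, alphabet in [(8, 26), (8, 62), (12, 62)]:
    keyspace = alphabet ** length
    seconds = keyspace / RATE / 2   # expected time: half the keyspace
    print(f"{length} chars, {alphabet}-symbol alphabet: "
          f"{seconds / (86400 * 365):.6f} years on average")

# 8 lowercase letters fall in seconds; 12 mixed-alphanumeric characters
# push the expected time into the thousands of years at this rate.
```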
Compromise recording
Sometimes it is sufficient to know that a system has been compromised
- tamperproof logging
- Intrusion Detection Systems (IDSes)
“If you can't prevent, detect”
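A minimal sketch of tamper-evident logging via a hash chain (illustrative only; a production system must also protect the chain head): each entry commits to everything before it, so altering an old entry breaks every later hash:

```python
import hashlib

def append(log, message):
    """Append a log entry whose hash covers the entire history before it."""
    prev = log[-1][1] if log else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log):
    """Recompute the chain; any rewritten entry invalidates all later ones."""
    prev = "0" * 64
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append(log, "user alice logged in")
append(log, "config file changed")
print(verify(log))                               # True
log[0] = ("user mallory logged in", log[0][1])   # tamper with history
print(verify(log))                               # False: compromise recorded
```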
Other principle: orthogonal security
Sometimes security mechanisms can be implemented orthogonally to the systems they protect
- simpler
- applicable to legacy code
- can be composed into multiple layers (“Defense in depth”)
Examples: security wrappers, IDSes, etc.
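A minimal sketch of a security wrapper (names are hypothetical): validation is layered around a legacy handler without modifying it, and such wrappers compose into multiple layers:

```python
# Orthogonal security: the legacy handler stays untouched; the wrapper
# mediates every call to it and can be stacked with further wrappers.

def with_validation(handler, validator):
    def wrapped(request):
        if not validator(request):
            raise ValueError("rejected by security wrapper")
        return handler(request)
    return wrapped

def legacy_handler(request):      # pre-existing code we cannot change
    return f"processed {request}"

safe = with_validation(legacy_handler, lambda r: len(r) < 100)
print(safe("short request"))      # passes through the wrapper
# safe("x" * 200) raises ValueError before legacy code ever runs
```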
Other principle: be skeptical and be paranoid
Skeptical: force people to justify security declarations
Paranoid: Robert Morris, “Never underestimate the amount of time and effort that someone will put into breaking your system”
Other principle: design security in
Applying these principles is not easy even when you start off with the intention to do so; imagine having to
retrofit a system that was not designed with them in mind
Take away points
- Designing and building secure systems is hard (for many reasons)
- A set of principles helps us do that (and evaluate existing systems)
Next time
Finding vulnerabilities