Program Security
UNIT II
Network Security
3.1. Secure Programs – Defining & Testing
 Introduction
 Judging S/w Security by Fixing Faults
 Judging S/w Security by Testing Pgm Behavior
 Judging S/w Security by Pgm Security Analysis
 Types of Pgm Flaws
3.2. Non-malicious Program Errors
 Buffer overflows
 Incomplete mediation
 Time-of-check to time-of-use errors
 Combinations of non-malicious program flaws
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code incl. Viruses
 Introduction
 Kinds of Malicious Code
 How Viruses Work
 Virus Signatures
 Preventing Virus Infections
 Seven Truths About Viruses
 Case Studies
 Virus Removal and System Recovery After Infection
3.3.2. Targeted Malicious Code
 Trapdoors
 Salami attack
 Covert channels
3.4. Controls for Security
 Introduction
 Developmental controls for security
 Operating System controls for security
 Administrative controls for security
 Conclusions
 Program security – our first step in applying security to computing
 Protecting programs is the heart of computer
security
 All kinds of programs: applications, OS, DBMS, network software
 Issues:
◦ How to keep pgms free from flaws
◦ How to protect computing resources from pgms with
flaws
 Depends on who you ask
◦ user - fit for his task
◦ programmer - passes all her tests
◦ manager - conformance to all specs
 Developmental criteria for program security
include:
◦ Correctness of security & other requirements
◦ Correctness of implementation
◦ Correctness of testing
 Error - may lead to a fault
 Fault - cause for deviation from intended
function
 Failure - system malfunction caused by fault
 Faults - seen by "insiders" (e.g., programmers)
 Failures - seen by "outsiders" (e.g., independent testers, users)
 Error/fault/failure example:
◦ Programmer’s indexing error, leads to buffer overflow
fault
◦ Buffer overflow fault causes system crash (a failure)
 Two categories of faults w.r.t. duration
◦ Permanent faults
◦ Transient faults – can be much more difficult to
diagnose
 An approach to judge s/w security: penetrate and
patch
◦ Red Team / Tiger Team tries to crack the s/w. If it withstands the attack => security is good. Is this true? Rarely.
 Too often developers try to quick-fix problems
discovered by Tiger Team
◦ Quick patches often introduce new faults due to:
 Pressure – causing narrow focus on fault, not context
 Non-obvious side effects
 System performance requirements not allowing for security overhead
 A better approach to judging s/w security: testing
pgm behaviour
◦ Compare behaviour vs. requirements
◦ Program security flaw = inappropriate behaviour caused
by a pgm fault or failure
◦ Flaw detected as a fault or a failure
 Important: If flaw detected as a failure (an
effect), look for the underlying fault (the cause)
 Recall: fault seen by insiders, failure – by
outsiders
 If possible, detect faults before they become
failures
 Any kind of fault/failure can cause a security
incident
◦ Misunderstood requirements / error in coding / typing
error
◦ In a single pgm / interaction of k pgms
◦ Intentional flaws or accidental (inadvertent) flaws
 Therefore, we must consider security
consequences for all kinds of detected
faults/failures
◦ Even inadvertent faults / failures
◦ Inadvertent faults are the biggest source of security
vulnerabilities exploited by attackers
 Limitations of testing
◦ Can’t test exhaustively
◦ Testing checks what the pgm should do - Can’t often
test what the pgm should not do
 Complexity – malicious attacker’s best friend
◦ Too complex to model / to test
◦ Exponential # of pgm states / data combinations
 Evolving technology
◦ New s/w technologies appear
◦ Security techniques catching up with s/w technologies
 Taxonomy of pgm flaws: Flaws can be Intentional
or Unintentional
 Intentional
◦ Malicious
◦ Nonmalicious
 Unintentional
◦ Validation error (incomplete or inconsistent)
◦ Domain error (variable value outside of its domain)
◦ Serialization and aliasing
 serialization – e.g., in DBMSs or OSs
 aliasing - one variable or some reference, when changed, has
an indirect (usually unexpected) effect on some other data
◦ Inadequate ID and authentication (Section 4—on OSs)
◦ Boundary condition violation
◦ Other exploitable logic errors
 Buffer overflows
 Incomplete mediation
 Time-of-check to time-of-use errors
 Combinations of non-malicious program flaws
 Many languages require buffer size declaration
◦ C language statement: char sample[10];
◦ Executed statement: sample[i] = 'A'; where i = 10
◦ Subscript out of bounds (valid: 0-9) – buffer overflow occurs
◦ Some compilers don’t check for exceeding bounds
 Similar problem caused by pointers. No
reasonable way to define limits for pointers
 Where does 'A' go? Depends on what is adjacent to sample[10]
◦ Affects user's data - overwrites user's data
◦ Affects user's code - changes user's instructions
◦ Affects OS data - overwrites OS data
◦ Affects OS code - changes OS instructions
 This is a case of aliasing
 Implications of buffer overflow: attacker can insert malicious data values / instruction codes into the "overflow space"
 Suppose buffer overflow affects OS code area:
◦ Attacker code executed as if it were OS code
 Attacker might need to experiment to see what happens when
he inserts A into OS code area
◦ Can raise attacker’s privileges (to OS privilege level)
when A is an appropriate instruction
◦ Attacker can gain full control of OS
 Other types of overflow
◦ Buffer overflow affects a call stack area
◦ Web server attack similar to buffer overflow attack:
pass very long string to web server
 Buffer overflows still common
◦ Used by attackers
 to crash systems
 to exploit systems by taking over control
◦ Large # of vulnerabilities due to buffer overflows
 Sensitive data are in exposed, uncontrolled
condition
 Example
◦ URL to be generated by client’s browser to access
server, e.g.: http://www.things.com/
order/final&custID=101&part=555A&qy=20&price=10&
ship=boat&shipcost=5&total=205
◦ Instead, user edits URL directly, changing price and
total cost as follows: http://www.things.com
/order/final&custID=101&part=555A&qy=20&price=1&
ship=boat&shipcost=5&total=25
 Unchecked data are a serious vulnerability!
 Possible solution: anticipate problems
◦ Don’t let client return a sensitive result (like total) that
can be easily recomputed by server
◦ Use drop-down boxes / choice lists for data input
◦ Prevent user from editing input directly
◦ Check validity of data values received from client
 Also known as synchronization flaw /
serialization flaw
 Synchronization - mediation with “bait and
switch” in the middle
 Non-computing example:
◦ Swindler shows buyer real Rolex watch (bait)
◦ After buyer pays, switches real Rolex to a forged one
 In computing:
◦ Change of a resource (e.g., data) between time access
checked and time access used
 Prevention of TOCTTOU errors
◦ Be aware of time lags
◦ Use digital signatures and certificates to "lock" data values after checking them
 Malicious code or rogue pgm is written to exploit
flaws in pgms
 Malicious code can do anything a pgm can
 Malicious code can change
◦ data
◦ other programs
 Malicious code was "officially" defined by Cohen in 1984, but virus behaviour has been known since at least 1970
 Kinds of malicious code:
◦ Trapdoors
◦ Trojan Horses
◦ Bacteria
◦ Logic Bombs
◦ Worms
◦ Viruses
[cf. Barbara Endicott-Popovsky and Deborah Frincke, CSSE592/492, U. Washington]
 Trojan horse - A computer program that appears
to have a useful function, but also has a hidden
and potentially malicious function that evades
security mechanisms, sometimes by exploiting
legitimate authorizations of a system entity that
invokes the program
 Virus - A hidden, self-replicating section of
computer software, usually malicious logic, that
propagates by infecting (i.e., inserting a copy of
itself into and becoming part of) another
program. A virus cannot run by itself; it requires
that its host program be run to make the virus
active. E.g., transient and resident viruses.
 Worm - A computer program that can run
independently, can propagate a complete
working version of itself onto other hosts on a
network, and may consume computer resources
destructively.
 Bacteria or Rabbit - A specialized form of virus which
does not attach to a specific file. Replicates without
boundaries to exhaust resources.
 Logic bomb - Malicious [program] logic that activates
when specified conditions are met. Usually intended to
cause denial of service or otherwise damage system
resources.
 Time bomb - activates when specified time occurs
 Trapdoor / backdoor - A hidden computer flaw
known to an intruder, or a hidden computer
mechanism (usually software) installed by an
intruder, who can activate the trap door to gain
access to the computer without being blocked by
security services or mechanisms.
 Above terms not always used consistently,
especially in popular press
 Combinations of the above kinds even more
confusing
◦ E.g., a virus can be a time bomb - spreads like a virus, "explodes" when the specified time occurs
 The term "virus" is often used for any kind of malicious code; when discussing malicious code, we'll often say "virus" for any malicious code
 Attach
 Append to program or e-mail
 Executes with program
 Surrounds program
 Executes before and after program
 Erases its tracks
 Integrates or replaces program code
 Gain control
 Virus replaces target
 Reside
 In boot sector
 Memory
 Application program
 Libraries
 Detection
 Virus signatures
 Storage patterns
 Execution patterns
 Transmission patterns
 Prevention
 Don’t share executables
 Use commercial software from reliable sources
 Test new software on isolated computers
 Open only safe attachments
 Keep recoverable system image in safe place
 Backup executable system file copies
 Use virus detectors
 Update virus detectors often
Virus effects and how they are caused:
 Attach to executable program
◦ Modify file directory
◦ Write to executable program file
 Attach to data/control file
◦ Modify directory
◦ Rewrite data
◦ Append to data
◦ Append data to self
 Remain in memory
◦ Intercept interrupt by modifying interrupt handler address table
◦ Load self in non-transient memory area
 Infect disks
◦ Intercept interrupt
◦ Intercept OS call (to format disk, for example)
◦ Modify system file
◦ Modify ordinary executable program
 Conceal self
◦ Intercept system calls that would reveal self and falsify results
◦ Classify self as "hidden" file
 Spread self
◦ Infect boot sector
◦ Infect systems program
◦ Infect ordinary program
◦ Infect data that an ordinary program reads to control its execution
 Prevent deactivation
◦ Activate before deactivating program and block deactivation
◦ Store copy to reinfect after deactivation
 Viruses can infect any platform
 Viruses can modify “hidden” / “read only” files
 Viruses can appear anywhere in system
 Viruses spread anywhere sharing occurs
 Viruses cannot remain in memory, but…
 Viruses infect only software, not hardware, but…
 Viruses can be benevolent, malevolent and benign
 Case Study: the Internet Worm (1988)
 Invaded VAX and Sun-3 computers running versions of Berkeley UNIX
 Used their resources to attack still more computers
 Spread across the U.S. within hours
 Infected hundreds to thousands of computers
 Made many computers unusable
 Trapdoors
 Program stubs during testing
 Intentionally or unintentionally left
 Forgotten
 Left for testing or maintenance
 Left for covert access
 Salami attack
 Merges inconsequential pieces to get big results
 Ex: deliberate diversion of fractional cents
 Too difficult to audit
 Covert Channels
◦ Programs that leak information
 Trojan horse
◦ Discovery
 Analyze system resources for patterns
 Flow analysis from a program’s syntax (automated)
◦ Difficult to close
 Not much documented
 Potential damage is extreme
 We have seen a lot of possible security flaws.
How to prevent (some of) them?
 Software engineering concentrates on
developing and maintaining quality s/w
◦ We’ll take a look at some techniques useful
specifically for developing/maintaining secure s/w
 Three types of controls against pgm flaws:
◦ Developmental controls
◦ OS controls
◦ Administrative controls
 Team of developers, each involved in ≥ 1 of stages:
◦ Requirement specification
 Regular req. specs: "do X"
 Security req. specs: "do X and nothing more"
◦ Design
◦ Implementation
◦ Testing
◦ Documenting at each stage
◦ Reviewing at each stage
◦ Managing system development through all stages
◦ Maintaining deployed system (updates, patches, new versions, etc.)
 Fundamental principles of s/w engineering
◦ Modularity
◦ Encapsulation
◦ Information hiding
 Modules should be:
◦ Single-purpose - logically/functionally
◦ Small - for a human to grasp
◦ Simple - for a human to grasp
◦ Independent – high cohesion, low coupling
 High cohesion – highly focused on (single) purpose
 Low coupling – free from interference from other modules
 Modularity should improve correctness
◦ Fewer flaws => better security
 Minimizing information sharing with other modules
◦ Limited interfaces reduce # of covert channels
 Well documented interfaces
 "Hiding what should be hidden and showing what should be visible."
 Module is a black box
◦ Well defined function and I/O
 Easy to know what module does but not how it
does it
 Reduces complexity, interactions, covert
channels, ...
=> better security
 Peer reviews
 Hazard analysis
 Testing
 Good design
 Risk prediction & management
 Static analysis
 Configuration management
 Additional developmental controls
 Three types of reviews
◦ Reviews
 Informal
 Team of reviewers
 Gain consensus on solutions before development
◦ Walk-throughs
 Developer walks team through code/document
 Discover flaws in a single design document
◦ Inspection
 Formalized and detailed
 Statistical measures used
 Systematic exposure of hazardous system
states
 Technique
 Hazard lists
 What-if scenarios
 System-wide view (not just code)
 Begins Day 1
 Continues throughout SDLC
 Testing phases:
◦ Module/component/unit testing of individual modules
◦ Integration testing of interacting (sub)system modules
◦ (System) function testing checking against requirement specs
◦ (System) performance testing
◦ (System) acceptance testing – with customer against customer’s
requirements — on seller’s or customer’s premises
◦ (System) installation testing after installation on customer’s
system
◦ Regression testing after updates/changes to s/w
 Types of testing
◦ Black Box testing – testers can’t examine code
◦ White Box / Clear box testing – testers can examine
design and code, can see inside modules/system
 Good design uses:
◦ Modularity / encapsulation / info hiding (as discussed
above)
◦ Fault tolerance
◦ Consistent failure handling policies
◦ Design rationale and history
◦ Design patterns
 Fault-tolerant approach:
◦ Anticipate faults
 (car: anticipate having a flat tire)
◦ Active fault detection rather than passive fault
detection
 (e.g., by use of mutual suspicion: active input data checking)
◦ Use redundancy
 (car: have a spare tire)
◦ Isolate damage
◦ Minimize disruption
 (car: replace flat tire, continue your trip)
 Fault-tolerant Example 1: Majority voting (using h/w redundancy)
◦ 3 processors running the same s/w
◦ E.g., in a spaceship
◦ Result accepted if results of ≥ 2 processors agree
 Fault-tolerant Example 2: Recovery Block (using s/w
redundancy)
[Recovery block scheme: run the Primary Code (e.g., Quick Sort – new, faster code); an Acceptance Test checks its result; on failure, fall back to the Secondary Code (e.g., Bubble Sort – well-tested code).]
 Using consistent failure handling policies
 Each failure handled in one of three ways:
◦ Retrying
 Restore previous state, redo service using a different "path"
 E.g., use secondary code instead of primary code
◦ Correcting
 Restore previous state, correct it, run service using the
same code as before
◦ Reporting
 Restore previous state, report failure to error handler, don’t
rerun service
 Using design rationale and history
◦ Knowing the design rationale and history (incl. for security mechanisms) helps developers modify or maintain the system
 Using design patterns
◦ Knowing design patterns shows what works best in which situation
 Value of Good Design
◦ Easy maintenance
◦ Understandability
◦ Reuse
◦ Correctness
◦ Better testing
=> translates into (saving) BIG bucks !
 Predict and manage risks involved in system
development and deployment
◦ Make plans to handle unwelcome events should they
occur
 Risk prediction/management are especially
important for security
◦ Because unwelcome and rare events can have
security consequences
 Risk prediction/management helps to select
proper security controls (e.g., proportional to
risk)
 Before system is up and running, examine its
design and code to locate security flaws
◦ More than peer review
 Examines
◦ Control flow structure (sequence in which
instructions are executed, incl. iterations and loops)
◦ Data flow structure (trail of data)
◦ Data structures
 Automated tools available
 CM = Process of controlling system
modifications during development and
maintenance
◦ Offers security benefits by scrutinizing
new/changed code
 Problems with system modifications
◦ One change interfering with other change
 E.g., neutralizing it
◦ Proliferation of different versions and releases
 Older and newer
 For different platforms
 For different application environments
 Learning from mistakes
◦ Avoiding such mistakes in the future enhances security
 Proofs of program correctness
◦ Formal methods to verify pgm correctness
◦ Problems with practical use of pgm correctness proofs
 Esp. for large pgms/systems
◦ Most successful for specific types of apps
 E.g. for communication protocols & security policies
 Even with all these developmental controls – still no
security guarantees!
 Developmental controls are not always used
OR:
 Even if used, not foolproof
=> Need other, complementary controls, incl.
OS controls
 Such OS controls can protect against some
pgm flaws
 Rigorously developed code is analyzed so that we can trust it does all and only what the specs say
 Trusted code establishes foundation upon
which un-trusted code runs
◦ Trusted code establishes security baseline for the
whole system
 In particular, OS can be trusted s/w
 Key characteristics determining if OS code is trusted
◦ Functional correctness
 OS code consistent with specs
◦ Enforcement of integrity
 OS keeps integrity of its data and other resources even if presented
with flawed or unauthorized commands
◦ Limited privileges
 OS minimizes access to secure data/resources
 Trusted pgms must have ”need to access” and proper access rights
to use resources protected by OS
 Untrusted pgms can’t access resources protected by OS
◦ Appropriate confidence level
 OS code examined and rated at appropriate trust level
 Similar criteria used to establish if s/w other
than OS can be trusted
 Ways of increasing security if untrusted pgms
present:
◦ Mutual suspicion
◦ Confinement
◦ Access log
 Distrust other pgms – treat them as if they were
incorrect or malicious
◦ Pgm protects its interface data
 With data checks, etc.
 OS can confine access to resources by
suspected pgm
 Example 1: strict compartmentalization
◦ Pgm can affect data and other pgms only within its
compartment
 Example 2: sandbox for untrusted pgms
 Can limit spread of viruses
 Records who/when/how (e.g., for how long)
accessed/used which objects
◦ Events logged: logins/logouts, file accesses, pgm executions, device uses, failures, repeated unsuccessful commands (e.g., many repeated failed login attempts can indicate an attack)
 Audit frequently for unusual events, suspicious patterns
 A forensic measure, not a protective measure
◦ Forensics – investigation to find who broke law,
policies, or rules
 They prohibit or demand certain human behavior via
policies, procedures, etc.
 They include:
◦ Standards of program development
◦ Security audits
◦ Separation of duties
 Capture experience and wisdom from previous
projects
 Facilitate building higher-quality s/w (incl. more secure)
 They include:
◦ Design S&G (standards & guidelines) – design tools, languages, methodologies
◦ S&G for documentation, language, and coding style
◦ Programming S&G - incl. reviews, audits
◦ Testing S&G
◦ Configuration management S&G
 Check compliance with S&G
 Deter potentially dishonest programmers from including illegitimate code (e.g., a trapdoor)
 Break sensitive tasks into ≥ 2 pieces to be
performed by different people (learned from
banks)
 Example 1: modularity
◦ Different developers for cooperating modules
 Example 2: independent testers
◦ Rather than developer testing her own code
 Developmental / OS / administrative controls
help produce/maintain higher-quality (also more
secure) s/w
 Art and science - no "silver bullet" solutions
 "A good developer who truly understands security will incorporate security into all phases of development."
Control           Purpose                                         Benefit
Developmental     Limit mistakes; make malicious code difficult   Produce better software
Operating System  Limit access to system                          Promotes safe sharing of info
Administrative    Limit actions of people                         Improves usability, reusability, and maintainability
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
The byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxThe byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptx
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 

Ns

◦ Permanent faults
◦ Transient faults – can be much more difficult to diagnose
 An approach to judging s/w security: penetrate and patch
◦ A Red Team / Tiger Team tries to crack the s/w. If it withstands the attack, is security good? Rarely.
 Too often developers try to quick-fix problems discovered by the Tiger Team
◦ Quick patches often introduce new faults due to:
 Pressure – causing narrow focus on the fault, not its context
 Non-obvious side effects
 System performance requirements not allowing for security overhead
 A better approach to judging s/w security: testing pgm behavior
◦ Compare behavior vs. requirements
◦ Program security flaw = inappropriate behavior caused by a pgm fault or failure
◦ A flaw is detected as a fault or a failure
 Important: if a flaw is detected as a failure (an effect), look for the underlying fault (the cause)
 Recall: faults are seen by insiders, failures by outsiders
 If possible, detect faults before they become failures
 Any kind of fault/failure can cause a security incident
◦ Misunderstood requirements / error in coding / typing error
◦ In a single pgm / in the interaction of k pgms
◦ Intentional flaws or accidental (inadvertent) flaws
 Therefore, we must consider security consequences for all kinds of detected faults/failures
◦ Even inadvertent faults / failures
◦ Inadvertent faults are the biggest source of security vulnerabilities exploited by attackers
 Limitations of testing
◦ Can’t test exhaustively
◦ Testing checks what the pgm should do – it often can’t test what the pgm should not do
 Complexity – a malicious attacker’s best friend
◦ Too complex to model / to test
◦ Exponential # of pgm states / data combinations
 Evolving technology
◦ New s/w technologies appear
◦ Security techniques keep catching up with s/w technologies
 Taxonomy of pgm flaws: flaws can be intentional or unintentional
 Intentional
◦ Malicious
◦ Nonmalicious
 Unintentional
◦ Validation error (incomplete or inconsistent)
◦ Domain error (variable value outside of its domain)
◦ Serialization and aliasing
 serialization – e.g., in DBMSs or OSs
 aliasing – one variable, or some reference, which when changed has an indirect (usually unexpected) effect on some other data
◦ Inadequate ID and authentication (Section 4, on OSs)
◦ Boundary condition violation
◦ Other exploitable logic errors
 Buffer overflows
 Incomplete mediation
 Time-of-check to time-of-use errors
 Combinations of nonmalicious program flaws
 Many languages require buffer size declaration
◦ C language statement: char sample[10];
◦ Execute statement: sample[i] = ‘A’; where i = 10
◦ Out-of-bounds (0-9) subscript – buffer overflow occurs
◦ Some compilers don’t check for exceeding bounds
 A similar problem is caused by pointers; there is no reasonable way to define limits for pointers
 Where does ‘A’ go? It depends on what is adjacent to sample[10]
◦ Affects user’s data – overwrites user’s data
◦ Affects user’s code – changes user’s instruction
◦ Affects OS data – overwrites OS data
◦ Affects OS code – changes OS instruction
 This is a case of aliasing
 Implications of buffer overflow: an attacker can insert malicious data values / instruction codes into the ”overflow space”
 Suppose the buffer overflow affects an OS code area:
◦ Attacker code is executed as if it were OS code
 The attacker might need to experiment to see what happens when he inserts ‘A’ into the OS code area
◦ Can raise the attacker’s privileges (to OS privilege level) when ‘A’ is an appropriate instruction
◦ The attacker can gain full control of the OS
 Other types of overflow
◦ Buffer overflow affecting a call stack area
◦ Web server attack similar to a buffer overflow attack: pass a very long string to the web server
 Buffer overflows are still common
◦ Used by attackers
 to crash systems
 to exploit systems by taking over control
◦ A large # of vulnerabilities are due to buffer overflows
 Incomplete mediation: sensitive data are in an exposed, uncontrolled condition
 Example
◦ URL to be generated by the client’s browser to access the server, e.g.:
http://www.things.com/order/final&custID=101&part=555A&qy=20&price=10&ship=boat&shipcost=5&total=205
◦ Instead, the user edits the URL directly, changing price and total cost as follows:
http://www.things.com/order/final&custID=101&part=555A&qy=20&price=1&ship=boat&shipcost=5&total=25
 Unchecked data are a serious vulnerability!
 Possible solution: anticipate problems
◦ Don’t let the client return a sensitive result (like total) that can be easily recomputed by the server
◦ Use drop-down boxes / choice lists for data input
◦ Prevent the user from editing input directly
◦ Check the validity of data values received from the client
 Also known as a synchronization flaw / serialization flaw
 TOCTTOU – mediation with a “bait and switch” in the middle
 Non-computing example:
◦ A swindler shows the buyer a real Rolex watch (bait)
◦ After the buyer pays, the swindler switches the real Rolex for a forged one
 In computing:
◦ A change of a resource (e.g., data) between the time access is checked and the time access is used
 Prevention of TOCTTOU errors
◦ Be aware of time lags
◦ Use digital signatures and certificates to ”lock” data values after checking them
 Malicious code, or a rogue pgm, is written to exploit flaws in pgms
 Malicious code can do anything a pgm can
 Malicious code can change
◦ data
◦ other programs
 Malicious code was ”officially” defined by Cohen in 1984, but virus behavior has been known since at least 1970
 Kinds of malicious code: trapdoors, Trojan horses, bacteria, logic bombs, worms, viruses
[cf. Barbara Endicott-Popovsky and Deborah Frincke, CSSE592/492, U. Washington]
 Trojan horse – a computer program that appears to have a useful function, but also has a hidden and potentially malicious function that evades security mechanisms, sometimes by exploiting legitimate authorizations of the system entity that invokes the program
 Virus – a hidden, self-replicating section of computer software, usually malicious logic, that propagates by infecting (i.e., inserting a copy of itself into, and becoming part of) another program. A virus cannot run by itself; it requires that its host program be run to make the virus active. Kinds: transient & resident
 Worm – a computer program that can run independently, can propagate a complete working version of itself onto other hosts on a network, and may consume computer resources destructively
 Bacteria, or rabbit – a specialized form of virus that does not attach to a specific file; it replicates without bounds to exhaust resources
 Logic bomb – malicious [program] logic that activates when specified conditions are met; usually intended to cause denial of service or otherwise damage system resources
 Time bomb – a logic bomb that activates when a specified time occurs
 Trapdoor / backdoor – a hidden computer flaw known to an intruder, or a hidden computer mechanism (usually software) installed by an intruder, who can activate the trapdoor to gain access to the computer without being blocked by security services or mechanisms
 The above terms are not always used consistently, especially in the popular press
 Combinations of the above kinds are even more confusing
◦ E.g., a virus can be a time bomb – it spreads like a virus and ”explodes” when the specified time occurs
 The term ”virus” is often used to refer to any kind of malicious code; when discussing malicious code, we’ll often say ”virus” for any malicious code
 How viruses work – attach
 Append to program or e-mail
 Executes with program
 Surrounds program
 Executes before and after program
 Erases its tracks
 Integrates into or replaces program code
 Gain control
 Virus replaces target
 Reside
 In boot sector
 In memory
 In application program
 In libraries
 Detection
 Virus signatures
 Storage patterns
 Execution patterns
 Transmission patterns
 Prevention
 Don’t share executables
 Use commercial software from reliable sources
 Test new software on isolated computers
 Open only safe attachments
 Keep a recoverable system image in a safe place
 Back up executable system file copies
 Use virus detectors
 Update virus detectors often
 Virus effects and how they are caused:
◦ Attach to executable – modify file directory; write to executable program file
◦ Attach to data/control file – modify directory; rewrite data; append to data; append data to self
◦ Remain in memory – intercept an interrupt by modifying the interrupt handler address table; load self in a non-transient memory area
◦ Infect disks – intercept an interrupt; intercept an OS call (to format a disk, for example); modify a system file; modify an ordinary executable program
◦ Conceal self – intercept system calls that would reveal self and falsify results; classify self as a “hidden” file
◦ Spread self – infect the boot sector; infect a systems program; infect an ordinary program; infect data an ordinary program reads to control its execution
◦ Prevent deactivation – activate before the deactivating program and block deactivation; store a copy to reinfect after deactivation
 Seven truths about viruses:
◦ Viruses can infect any platform
◦ Viruses can modify “hidden” / “read-only” files
◦ Viruses can appear anywhere in a system
◦ Viruses spread anywhere sharing occurs
◦ Viruses cannot remain in memory, but…
◦ Viruses infect only software, not hardware, but…
◦ Viruses can be benevolent, malevolent, and benign
 Case study: the 1988 Internet Worm
◦ Invaded VAX and Sun-3 computers running versions of Berkeley UNIX
◦ Used their resources to attack still more computers
◦ Within hours spread across the U.S.
◦ Infected hundreds or thousands of computers
◦ Made many computers unusable
 Trapdoors
◦ Program stubs left during testing
◦ Intentionally or unintentionally left
 Forgotten
 Left for testing or maintenance
 Left for covert access
 Salami attack
◦ Merges inconsequential pieces to get big results
◦ Ex.: deliberate diversion of fractional cents
◦ Too difficult to audit
 Covert channels
◦ Programs that leak information
 Trojan horse
◦ Discovery
 Analyze system resources for patterns
 Flow analysis from a program’s syntax (automated)
◦ Difficult to close
 Not much documented
 Potential damage is extreme
 We have seen a lot of possible security flaws. How do we prevent (some of) them?
 Software engineering concentrates on developing and maintaining quality s/w
◦ We’ll take a look at some techniques useful specifically for developing / maintaining secure s/w
 Three types of controls against pgm flaws:
◦ Developmental controls
◦ OS controls
◦ Administrative controls
 A team of developers, each involved in ≥ 1 of the stages:
◦ Requirements specification
 Regular req. specs: ”do X”
 Security req. specs: ”do X and nothing more”
◦ Design
◦ Implementation
◦ Testing
◦ Documenting at each stage
◦ Reviewing at each stage
◦ Managing system development through all stages
◦ Maintaining the deployed system (updates, patches, new versions, etc.)
 Fundamental principles of s/w engineering
◦ Modularity
◦ Encapsulation
◦ Information hiding
 Modules should be:
◦ Single-purpose – logically / functionally
◦ Small – for a human to grasp
◦ Simple – for a human to grasp
◦ Independent – high cohesion, low coupling
 High cohesion – highly focused on a (single) purpose
 Low coupling – free from interference from other modules
 Modularity should improve correctness
◦ Fewer flaws => better security
 Minimizing information sharing with other modules
◦ Limited interfaces reduce the # of covert channels
 Well-documented interfaces
 ”Hiding what should be hidden and showing what should be visible.”
 A module is a black box
◦ Well-defined function and I/O
 Easy to know what a module does, but not how it does it
 Reduces complexity, interactions, covert channels, ... => better security
 Peer reviews
 Hazard analysis
 Testing
 Good design
 Risk prediction & management
 Static analysis
 Configuration management
 Additional developmental controls
 Three types of reviews
◦ Reviews
 Informal
 Team of reviewers
 Gain consensus on solutions before development
◦ Walk-throughs
 Developer walks the team through code/document
 Discover flaws in a single design document
◦ Inspections
 Formalized and detailed
 Statistical measures used
 Hazard analysis: systematic exposure of hazardous system states
 Techniques
 Hazard lists
 What-if scenarios
 System-wide view (not just code)
 Begins on Day 1 and continues throughout the SDLC
 Testing phases:
◦ Module / component / unit testing – testing of individual modules
◦ Integration testing – testing of interacting (sub)system modules
◦ (System) function testing – checking against requirement specs
◦ (System) performance testing
◦ (System) acceptance testing – with the customer, against the customer’s requirements, on the seller’s or customer’s premises
◦ (System) installation testing – after installation on the customer’s system
◦ Regression testing – after updates/changes to s/w
 Types of testing
◦ Black-box testing – testers can’t examine code
◦ White-box / clear-box testing – testers can examine design and code; they can see inside modules/system
 Good design uses:
◦ Modularity / encapsulation / information hiding (as discussed above)
◦ Fault tolerance
◦ Consistent failure handling policies
◦ Design rationale and history
◦ Design patterns
 Fault-tolerant approach:
◦ Anticipate faults
 (car: anticipate having a flat tire)
◦ Active fault detection rather than passive fault detection
 (e.g., by use of mutual suspicion: active input data checking)
◦ Use redundancy
 (car: have a spare tire)
◦ Isolate damage
◦ Minimize disruption
 (car: replace the flat tire, continue your trip)
 Fault-tolerant example 1: majority voting (using h/w redundancy)
◦ 3 processors running the same s/w
◦ E.g., in a spaceship
◦ A result is accepted if the results of ≥ 2 processors agree
 Fault-tolerant example 2: recovery block (using s/w redundancy)
◦ Primary code (e.g., Quick Sort – new, faster code) runs first; an acceptance test checks its result; on failure, fall back to secondary code (e.g., Bubble Sort – well-tested code)
 Using consistent failure handling policies
 Each failure is handled in one of three ways:
◦ Retrying
 Restore the previous state, redo the service using a different ”path”
 E.g., use secondary code instead of primary code
◦ Correcting
 Restore the previous state, correct it, run the service using the same code as before
◦ Reporting
 Restore the previous state, report the failure to an error handler, don’t rerun the service
 Using design rationale and history
◦ Knowing them (incl. the design rationale and history for security mechanisms) helps developers modify or maintain the system
 Using design patterns
◦ Knowing them enables looking for patterns showing what works best in which situation
 Value of good design
◦ Easy maintenance
◦ Understandability
◦ Reuse
◦ Correctness
◦ Better testing
=> translates into (saving) BIG bucks!
 Predict and manage risks involved in system development and deployment
◦ Make plans to handle unwelcome events should they occur
 Risk prediction / management is especially important for security
◦ Because unwelcome and rare events can have security consequences
 Risk prediction / management helps to select proper security controls (e.g., proportional to risk)
 Static analysis: before the system is up and running, examine its design and code to locate security flaws
◦ More than peer review
 Examines
◦ Control flow structure (the sequence in which instructions are executed, incl. iterations and loops)
◦ Data flow structure (the trail of data)
◦ Data structures
 Automated tools are available
 CM = the process of controlling system modifications during development and maintenance
◦ Offers security benefits by scrutinizing new/changed code
 Problems with system modifications
◦ One change interfering with another change
 E.g., neutralizing it
◦ Proliferation of different versions and releases
 Older and newer
 For different platforms
 For different application environments
 Learning from mistakes
◦ Avoiding such mistakes in the future enhances security
 Proofs of program correctness
◦ Formal methods to verify pgm correctness
◦ Problems with practical use of pgm correctness proofs
 Esp. for large pgms/systems
◦ Most successful for specific types of apps
 E.g., for communication protocols & security policies
 Even with all these developmental controls – still no security guarantees!
 Developmental controls are not always used, OR:
 Even if used, they are not foolproof
=> Need other, complementary controls, incl. OS controls
 Such OS controls can protect against some pgm flaws
 Rigorously developed code is analyzed so we can trust that it does all and only what the specs say
 Trusted code establishes the foundation upon which untrusted code runs
◦ Trusted code establishes the security baseline for the whole system
 In particular, the OS can be trusted s/w
 Key characteristics determining if OS code is trusted
◦ Functional correctness
 OS code consistent with specs
◦ Enforcement of integrity
 The OS keeps the integrity of its data and other resources even if presented with flawed or unauthorized commands
◦ Limited privileges
 The OS minimizes access to secure data/resources
 Trusted pgms must have a ”need to access” and proper access rights to use resources protected by the OS
 Untrusted pgms can’t access resources protected by the OS
◦ Appropriate confidence level
 OS code examined and rated at an appropriate trust level
 Similar criteria are used to establish if s/w other than the OS can be trusted
 Ways of increasing security if untrusted pgms are present:
◦ Mutual suspicion
◦ Confinement
◦ Access log
 Mutual suspicion: distrust other pgms – treat them as if they were incorrect or malicious
◦ A pgm protects its interface data
 With data checks, etc.
 The OS can confine a suspected pgm’s access to resources
 Example 1: strict compartmentalization
◦ A pgm can affect data and other pgms only within its compartment
 Example 2: a sandbox for untrusted pgms
 Confinement can limit the spread of viruses
 An access log records who / when / how (e.g., for how long) accessed / used which objects
◦ Events logged: logins/logouts, file accesses, pgm executions, device uses, failures, repeated unsuccessful commands (e.g., many repeated failed login attempts can indicate an attack)
 Audit frequently for unusual events and suspicious patterns
 A forensic measure, not a protective measure
◦ Forensics – investigation to find who broke the law, policies, or rules
 Administrative controls prohibit or demand certain human behavior via policies, procedures, etc.
 They include:
◦ Standards of program development
◦ Security audits
◦ Separation of duties
 Standards & guidelines (S&G) capture experience and wisdom from previous projects
 They facilitate building higher-quality (incl. more secure) s/w
 They include:
◦ Design S&G – design tools, languages, methodologies
◦ S&G for documentation, language, and coding style
◦ Programming S&G – incl. reviews, audits
◦ Testing S&G
◦ Configuration management S&G
 Security audits check compliance with S&G
 They scare a potentially dishonest programmer away from including illegitimate code (e.g., a trapdoor)
 Separation of duties: break sensitive tasks into ≥ 2 pieces to be performed by different people (a lesson learned from banks)
 Example 1: modularity
◦ Different developers for cooperating modules
 Example 2: independent testers
◦ Rather than a developer testing her own code
 Developmental / OS / administrative controls help produce and maintain higher-quality (and more secure) s/w
 This is art as well as science – there are no ”silver bullet” solutions
 ”A good developer who truly understands security will incorporate security into all phases of development.”
 Summary of controls:
◦ Developmental – purpose: limit mistakes, make malicious code difficult; benefit: produce better software
◦ Operating system – purpose: limit access to the system; benefit: promotes safe sharing of info
◦ Administrative – purpose: limit actions of people; benefit: improves usability, reusability, and maintainability