3. Why do you need software
security requirements?
Traditionally, requirements are
about defining what something
can do or be.
However, these objects can also be
used for more than their
intended purpose.
A hammer has to be capable of
driving nails.
A hammer can be used to break a
window.
A door lock needs to keep a door
closed until it’s unlocked with a
specific key.
A door lock can be picked.
A car needs to move travelers
from point A to point B along the
roads.
A car can be used to transport
stolen goods.
Similarly, software can be abused or made vulnerable.
4. Why do you need software
security requirements?
• Security vulnerabilities allow software to be
abused in ways that the developers never
intended.
• By building robust software security
requirements, you can lock down what your
software does so that it can only be used as
intended.
5. Should security requirements be
defined by the organization, or
by IT?
• Defining business security requirements is a
collaborative effort involving the participation
of architects, business analysts, and regulatory
bodies.
• There is no black-and-white answer about
achieving the best possible security for your
software applications.
• Costs and benefits must be weighed.
6. Should security requirements be
defined by the organization, or
by IT?
• Architects must understand the business context in order to
choose the optimal security controls.
• For example, only organizational decision makers can define
which information should have limited access, and which
roles or individuals should be allowed to access it.
• Once an architect understands the kinds of restrictions the
organization wants to place on access to portions of its
information, the architect can then recommend the security
controls that will help ensure that the information remains
secure.
7. Should security requirements be
defined by the organization, or
by IT?
• Some security measures are mandated by
regulatory bodies.
• For example, to enforce the privacy aspects of
HIPAA (the Health Insurance Portability and
Accountability Act), architects need the
assistance of subject matter experts and
decision makers to understand how an
organization intends to comply with the law.
8. Should security requirements be
defined by the organization, or
by IT?
• HIPAA governs electronic healthcare systems. Examples of its privacy rules include:
• Only you or your personal representative has the
right to access your records.
• A health care provider may send copies of your
records to another provider or health plan only as
needed for treatment or payment or with your
permission.
• Your employer can ask you for a doctor’s note or
other health information if they need the
information for sick leave, workers’ compensation,
wellness programs, or health insurance.
• Your provider cannot give your employer the
information without your authorization unless other
laws require them to do so.
9. Should security requirements be
defined by the organization, or
by IT?
• Some security requirements are defined internally by IT.
• For example, many organizations have security
requirements related to sensitive software
development procedures, such as preventing fraud or
harm when code is migrated into production.
• Separation of duties would be an appropriate security
control for these requirements; it means that a person
who develops a piece of code cannot also move that
piece of code into the production environment.
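The separation-of-duties control described above can be sketched as a simple policy check. This is an illustrative sketch only; the function name and the idea of representing authors as a set are assumptions, not an actual deployment tool's API.

```python
# Sketch of a separation-of-duties check (names are illustrative assumptions).
# A promotion to production is refused when the person deploying the code
# is also one of its authors.

def may_deploy(deployer: str, authors: set[str]) -> bool:
    """Return True only if the deployer did not write the code."""
    return deployer not in authors

# The author of a change cannot promote it to production...
assert not may_deploy("alice", {"alice", "bob"})
# ...but an uninvolved release engineer can.
assert may_deploy("carol", {"alice", "bob"})
```

In a real pipeline this check would run inside the deployment tooling, backed by the version-control history rather than a hand-built set.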
10. Software Security Requirements
• Clarity about software security requirements is the
foundation of secure development.
• Not incorporating the core security services
(confidentiality, integrity, availability,
authentication, authorization and nonrepudiation)
in the requirements phase of a software
development project inevitably results in insecure
software.
• It is of vital importance that security requirements
are determined alongside the functional and
business requirements.
11. Core Security Services
• The core security services measure the system's ability to
protect data from those who are not meant to have access,
while still giving access to those who are authorized.
• The basic characteristics of security (CIA for short):
• Confidentiality: Data is protected from unauthorized access.
• Integrity: Data is not subject to unauthorized manipulation.
• Availability: The system and its data are available for legitimate use.
• Other characteristics used to support CIA:
• Authentication: Verifies identities
• Nonrepudiation: Guarantees that the sender of a message cannot
later deny sending it.
• Authorization: Grants users the privileges to perform a task (or
tasks).
12. Confidentiality
• Definition:
• Confidentiality measures are designed to prevent sensitive
information from reaching the wrong people, while making
sure that the right people can in fact get it.
• Access must be restricted to those authorized to view the
data in question.
• Data should be categorized according to the amount and type of
damage that could be done if it falls into unintended hands.
13. Difference Between Privacy and
Confidentiality
• Privacy applies to individuals: it limits
the public's access to personal
information.
• Confidentiality applies to information: it
prevents information and documents
from being accessed by unauthorized
parties.
14. Ensuring Confidentiality
• Data encryption is a common method of
ensuring confidentiality.
• User IDs and passwords constitute a standard
procedure; two-factor authentication is
becoming the norm.
• Other options include biometric verification and
key fobs.
16. Ensuring Confidentiality
• In addition, users can take precautions:
• To minimize the number of places where the
information appears
• And the number of times it is actually transmitted to
complete a required transaction.
• Extra measures might be taken in the case of extremely
sensitive documents, precautions such as:
• Storing only on disconnected storage devices
• Or in hard copy form only.
17. Integrity
• Integrity involves maintaining the consistency, accuracy, and
trustworthiness of data over its entire life cycle.
• Data must not be lost or changed in transit.
• Steps must be taken to ensure that data cannot be altered by
unauthorized people (for example, in a breach of
confidentiality).
18. Data Integrity
• To achieve data integrity, rules are consistently and
routinely applied to all data entering the system, and
any relaxation of enforcement could cause errors in the
data.
• Implementing checks on the data as close as possible to
the source of input (such as human data entry) allows
less erroneous data to enter the system.
• Strict enforcement of data integrity rules results in lower
error rates, and time saved troubleshooting and tracing
erroneous data and the errors it causes to algorithms.
19. Data Integrity
• Data integrity also includes rules defining the
relations a piece of data can have, to other pieces of
data.
• For example, a Customer record may be allowed to link to
purchased Products, but not to unrelated data such
as Corporate Assets.
• Data integrity often includes checks and correction
for invalid data, based on a fixed schema or a
predefined set of rules.
• An example is textual data entered where a date-time
value is required.
• Rules for data derivation are also applicable,
specifying how a data value is derived based on
algorithm, contributors and conditions.
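The invalid-data check mentioned above (textual data where a date-time is required) can be sketched in a few lines. The format string is an assumption for illustration; a real system would validate against its own schema.

```python
from datetime import datetime

def parse_timestamp(raw: str) -> "datetime | None":
    """Reject textual input where a date-time value is required."""
    try:
        # assumed schema: timestamps must look like "2024-05-01 09:30"
        return datetime.strptime(raw, "%Y-%m-%d %H:%M")
    except ValueError:
        return None  # invalid data is caught at the point of input

assert parse_timestamp("2024-05-01 09:30") is not None
assert parse_timestamp("next Tuesday") is None
```

Checking this as close to the input source as possible, as the slide recommends, keeps erroneous data from propagating into the system.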
20. Types of integrity constraints
• Data integrity is normally enforced in a database
system by a series of integrity constraints or
rules.
• Three types of integrity constraints are an
inherent part of the relational data model:
• Entity integrity,
• Referential integrity
• Domain integrity.
21. Types of integrity
constraints
• Entity integrity states that every table must have a primary key and
that the column or columns chosen to be the primary key should be
unique and not null.
• Referential integrity states that any foreign-key value can only be in
one of two states: either it refers to a primary-key value of some table
in the database, or it is null, meaning no relationship is recorded.
• Domain integrity specifies that all columns in a relational database
must be declared upon a defined domain. The primary unit of data
in the relational data model is the data item. Such data items are
said to be non-decomposable or atomic. A domain is a set of values
of the same type. Domains are therefore pools of values from which
actual values appearing in the columns of a table are drawn.
• User-defined integrity refers to a set of rules specified by a user,
which do not belong to the entity, domain and referential integrity
categories.
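The three inherent constraint types can be demonstrated with standard SQL, run here through Python's built-in `sqlite3` module. The table names and columns are made-up fixtures; note that SQLite enforces foreign keys only when the pragma is enabled.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Entity integrity: PRIMARY KEY (unique, not null).
# Domain integrity: typed columns, NOT NULL, CHECK.
# Referential integrity: FOREIGN KEY via REFERENCES.
conn.executescript("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE purchase (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        amount      REAL NOT NULL CHECK (amount > 0)
    );
""")
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO purchase (id, customer_id, amount) VALUES (1, 1, 9.99)")

# Referential integrity rejects a purchase for a customer that does not exist.
try:
    conn.execute("INSERT INTO purchase (id, customer_id, amount) VALUES (2, 99, 5.0)")
except sqlite3.IntegrityError:
    print("foreign-key violation rejected")
```

The database, not the application, rejects the bad row, which is exactly the centralization benefit the next slide describes.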
22. Databases
• If a database supports these features, it is the responsibility of the
database to ensure data integrity as well as the consistency
model for the data storage and retrieval.
• If a database does not support these features, it is the
responsibility of the applications to ensure data integrity while the
database supports the consistency model for the data storage and
retrieval.
• Having a single, well-controlled, and well-defined data-integrity
system increases
• stability (one centralized system performs all data integrity
operations)
• performance (all data integrity operations are performed in the
same tier as the consistency model)
• re-usability (all applications benefit from a single centralized data
integrity system)
• maintainability (one centralized system for all data integrity
administration).
23. Ensuring Integrity
• These measures include:
• File permissions and User access controls.
• Version control may be used to prevent erroneous
changes or accidental deletion by authorized users from
becoming a problem.
• In addition, some means must be in place to detect any
changes in data that might occur as a result of non-
human-caused events such as a server crash.
• Some data might include checksums,
even cryptographic checksums, for verification of
integrity.
• Backups or redundancies must be available to restore
the affected data to its correct state.
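The cryptographic-checksum bullet above can be sketched with the standard library's `hashlib`; the stored record is a made-up example.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Cryptographic checksum used to detect any change to stored data."""
    return hashlib.sha256(data).hexdigest()

record = b"balance=1500"
stored = checksum(record)  # saved alongside (or separately from) the data

# Later, integrity is verified by recomputing and comparing.
assert checksum(b"balance=1500") == stored
assert checksum(b"balance=9500") != stored  # any tampering changes the digest
```

If the recomputed checksum differs, the backups or redundancies mentioned above are used to restore the data to its correct state.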
24. Availability
• Availability is best ensured by rigorously
maintaining all hardware
• Performing hardware repairs immediately when
needed
• Maintaining a correctly functioning operating
system environment that is free of software
conflicts.
• It’s also important to keep current with all
necessary system upgrades.
25. Ensuring Availability
• Fast and adaptive disaster recovery is essential for
worst-case scenarios.
• Safeguards against data loss or interruptions in
connections must account for unpredictable events such as
natural disasters and fire. To prevent data loss from
such occurrences, a backup copy may be stored in a
geographically isolated location, perhaps even in a
fireproof, waterproof safe.
• Extra security software such as firewalls can guard
against downtime and unreachable data due to
malicious actions such as denial-of-service attacks and
network intrusions.
26. Ensuring Availability
(Firewalls)
• A firewall isolates your computer from
the Internet.
• It uses a "wall of code" that inspects each
individual "packet" of data.
• As data arrives at either side of the firewall —
inbound to or outbound from your computer —
it determines whether it should be allowed to
pass or be blocked.
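The packet-inspection idea above can be sketched as an ordered rule list with a default-deny fallback. This is a toy illustration, not a real firewall: the `Packet` fields and the sample policy are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    direction: str  # "inbound" or "outbound"

# Ordered rules: the first matching predicate decides the packet's fate.
RULES = [
    (lambda p: p.direction == "inbound" and p.dst_port == 443, "allow"),
    (lambda p: p.src_ip.startswith("10."), "allow"),  # assumed trusted internal range
]

def filter_packet(packet: Packet) -> str:
    for predicate, decision in RULES:
        if predicate(packet):
            return decision
    return "block"  # default-deny: anything unmatched is blocked

assert filter_packet(Packet("203.0.113.9", 443, "inbound")) == "allow"
assert filter_packet(Packet("203.0.113.9", 23, "inbound")) == "block"
```

Real firewalls match on far richer state (protocols, connection tracking), but the allow-or-block decision per packet is the same shape.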
28. What Firewalls Do?
• Basically, firewalls need to be able to perform the following
tasks:
• Defend resources
• Validate access
• Manage and control network traffic
• Record and report on events
• Act as an intermediary
29. What Is a Denial-of-Service Attack?
• Any type of attack where the attackers attempt to
prevent legitimate users from accessing the service.
• A typical DoS attack proceeds in steps:
1. The attacker sends excessive messages asking the network or
server to authenticate requests that have invalid return addresses.
2. The network or server cannot find the attacker's return
address when sending the authentication approval, causing the
server to wait before closing the connection.
3. When the server closes the connection, the attacker sends more
authentication messages with invalid return addresses.
4. The process of authentication and server wait begins again,
keeping the network or server busy.
31. What Is a Denial-of-Service
Attack?
• DoS attacks can cause the following problems:
• Ineffective services
• Inaccessible services
• Interruption of network traffic
• Connection interference
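One common defense against the request flood described above is per-source rate limiting, which firewalls and servers use to keep a single attacker from monopolizing capacity. The sketch below is a minimal fixed-window counter; the limit value and class design are assumptions for illustration.

```python
from collections import defaultdict

class RateLimiter:
    """Fixed-window counter: drop requests from any source that exceeds
    the allowed number of requests per time window."""

    def __init__(self, limit: int = 100):  # assumed threshold per window
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, src_ip: str) -> bool:
        self.counts[src_ip] += 1
        return self.counts[src_ip] <= self.limit

    def reset_window(self) -> None:
        # In practice this is triggered by a timer once per window.
        self.counts.clear()

limiter = RateLimiter(limit=3)
results = [limiter.allow("198.51.100.7") for _ in range(5)]
assert results == [True, True, True, False, False]
```

Legitimate sources stay under the threshold; a flooding source is cut off for the rest of the window, so the server stops wasting capacity on it.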
32. Security General Scenario
General Scenario
• Source of stimulus: Human or another system; may or may not
have been identified beforehand.
• Stimulus: Unauthorized attempt to display, change, or delete
data; access system services; change the system's behavior; or
reduce availability.
• Artifacts: System services, data within the system, a component
or resources of the system, data delivered to or from the system.
• Environment: Online or offline; with or without a firewall; fully,
partially, or not operational.
• Response: Stop unauthorized use; log the attempt; recover from
the attack.
• Response measure: Time needed to end the attack; number of
attacks detected; time to recover from an attack; amount of data
vulnerable to the attack; value of the system/data compromised.
33. Categories of Security
Requirements
(1) Functional Security Requirements:
The software’s functional security requirements specify a security
function that the software must be able to deliver. Obviously, the
functional security requirements are a subset of the overall functional
requirements.
Examples:
• The software must validate all user input to ensure it does not
exceed the size specified for that type of input.
• The server must authenticate every request accessing the
restricted Web pages.
• After authenticating the browser, the server must determine
whether that browser is authorized (i.e., has necessary privileges)
to access the requested restricted Web pages.
• The system must encrypt sensitive data transmitted over the
Internet between the server and the browser.
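The authenticate-then-authorize ordering in the examples above can be sketched in a few lines. This is a toy model: the in-memory user store, plaintext passwords, and page ACL are made-up fixtures for illustration only, not how a real server stores credentials.

```python
# Demo fixtures (assumptions): real systems store salted password hashes.
USERS = {"alice": "s3cret"}            # user -> password
PAGE_ACL = {"/admin": {"alice"}}       # restricted page -> users allowed

def serve(page: str, user: str, password: str) -> str:
    if USERS.get(user) != password:            # 1. authenticate every request
        return "401 Unauthorized"
    if user not in PAGE_ACL.get(page, set()):  # 2. then check privileges
        return "403 Forbidden"
    return "200 OK"

assert serve("/admin", "alice", "s3cret") == "200 OK"
assert serve("/admin", "alice", "wrong") == "401 Unauthorized"
assert serve("/admin", "mallory", "guess") == "401 Unauthorized"
```

Note the order: authorization is only meaningful after authentication has established who the requester is.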
34. Categories of Security
Requirements
(2) Non-Functional Security Requirements
The non-functional security requirements specify a security quality or
attribute that the software must possess. There are 3 types of non-
functional security requirements:
(a) Security Property Requirements
The security property requirements specify the properties that
software must exhibit.
Examples:
• The software must remain resilient in the face of attacks.
• The behavior of the software must be correct and predictable.
• The software must be available and behave reliably even under
DOS attacks.
• The software must ensure the integrity of the customer account
information.
35. Categories of Security
Requirements
(b) Constraint/Negative Requirements
• Constraint/Negative requirements place constraints on software
functions in order to minimize the likelihood of non-secure
software behaviors, usually in terms of things to be avoided or
prevented.
• Constraint/negative requirements exist because software’s
functionality must not be allowed to behave in a way that could
lead to the software failing in an insecure state, or otherwise
becoming vulnerable to exploitation or compromise.
Examples:
• The server must not return a restricted web page to any browser
that it cannot authenticate.
• The server must not return a restricted web page to a user who is
not authorized to access it.
• The software must not accept overlong input data.
• The application must not accept invalid URLs.
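The last two constraint examples can be sketched as guard functions using only the standard library. The 256-character limit is an assumed value for illustration, and the URL rule here (require an http/https scheme and a host) is one reasonable interpretation of "valid".

```python
from urllib.parse import urlparse

MAX_INPUT = 256  # assumed size limit for illustration

def accept_input(data: str) -> bool:
    """Constraint: the software must not accept overlong input data."""
    return len(data) <= MAX_INPUT

def accept_url(url: str) -> bool:
    """Constraint: the application must not accept invalid URLs."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

assert accept_input("hello")
assert not accept_input("A" * 10_000)   # overlong input is rejected
assert accept_url("https://example.com/page")
assert not accept_url("not a url")      # no scheme or host
```

Negative requirements like these translate directly into reject-by-default input validation at every trust boundary.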
36. Categories of Security
Requirements
(c) Security Assurance Requirements
• The security assurance requirements are rules, best practices, and
processes by which the software security functions will be built,
deployed, and operated.
• Security assurance requirements will NOT be translated into
elements of the software’s design, but into standards, guidelines,
or procedures for its development and operation processes.
Examples:
• The software must be built following SOA web service security
standards.
• The development processes must comply with SSE-CMM capability
level 3 or above.
37. Derived Requirements
• Derived requirements are inspired by the functional and
non-functional requirements.
• When a system has a user ID and PIN functional
requirement, a derived requirement may define the number
of PIN guesses before an account is locked out.
• For audit logs, a derived requirement may support the
integrity of the logs, such as log injection prevention.
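The PIN-lockout derived requirement above can be sketched as a small state machine. The guess limit and class shape are assumptions for illustration; a real system would also persist the counter and log the lockout.

```python
MAX_GUESSES = 3  # assumed value for the derived requirement

class Account:
    def __init__(self, pin: str):
        self._pin = pin
        self.failed = 0
        self.locked = False

    def try_pin(self, guess: str) -> bool:
        if self.locked:
            return False          # locked accounts refuse all attempts
        if guess == self._pin:
            self.failed = 0       # success resets the failure counter
            return True
        self.failed += 1
        if self.failed >= MAX_GUESSES:
            self.locked = True    # the derived requirement enforced here
        return False

acct = Account("4921")
for guess in ("0000", "1111", "2222"):
    acct.try_pin(guess)
assert acct.locked
assert not acct.try_pin("4921")  # even the correct PIN is refused once locked
```

This is exactly the attacker-facing requirement the functional "user ID and PIN" requirement never states on its own.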
38. Derived Requirements
• Derived requirements are tricky because they stem
from abuse cases.
• For every bit of functionality that is given to the user, that
functionality could be abused by an attacker.
• For Example:
• Login functionality can become password guessing attempts,
• Uploading files can open a system up to hosting malware,
• Accepting text can open the door to SQL injection.
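The SQL-injection example can be made concrete with the standard defense, parameterized queries, shown here via Python's built-in `sqlite3` module with a made-up `users` table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(conn: sqlite3.Connection, name: str) -> list:
    # The ? placeholder keeps attacker-supplied text as data, never as SQL,
    # so input like "' OR '1'='1" cannot change the query's meaning.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

assert find_user(conn, "alice") == [("alice",)]
assert find_user(conn, "' OR '1'='1") == []  # the injection attempt finds nothing
```

Had the query been built by string concatenation, the second call would have returned every row; the placeholder defeats the abuse case without restricting legitimate use.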
39. Making Requirements
• Not only must the requirements engineer think like a
user and a customer, but they also have to think like an
attacker.
• Abuse cases are a way to think like an attacker: A use
case is flipped on its head and designers analyze how
the functionality can be abused.
• For Example:
• If a user is allowed to generate reports with sensitive
data, how might an unauthorized user gain access to
those reports and their sensitive data?