1. Security and vulnerability assessment analysis tool -
Microsoft Baseline Security Analyzer (MBSA) for Windows OS
Locate and launch MBSA CLI
Check computer for common security misconfigurations
By default, MBSA automatically selects the Windows VM WINATCK01 to scan.
While scanning the Windows VM WINATCK01:
Security Assessment Report
2 security updates are missing. ACTION: install them immediately to protect the computer.
1 update rollup is missing. ACTION: obtain and install the latest service pack or update rollup using the download link.
Administrative Vulnerabilities
More than 2 administrators were found on the computer. ACTION: keep the number of administrators to a minimum, because administrators have complete control of the computer.
User accounts have non-expiring passwords. ACTION: passwords should be changed regularly to prevent password attacks.
Windows Firewall is disabled and has exceptions configured.
GREAT! Auto logon is disabled. (Even when auto logon is configured, it is acceptable provided the password is encrypted rather than stored as plain text.)
GREAT! The Guest account is disabled on the computer.
GREAT! Anonymous access is restricted on the computer.
ADDITIONAL SYSTEM INFORMATION: DANGER!
Logon success and logon failure auditing is not enabled. ACTION: enable auditing for specific events such as logon and logoff to watch for unauthorized access.
3 shares are present. ACTION: review the list of shares and remove any shares that are not needed.
GREAT! Internet Explorer has secure settings for all users.
The following is to be included in the SAR:
a. The Windows administrative vulnerability present is that more than 2 administrators were found on the computer. It is advised to keep the number of administrators to a minimum, because administrators have complete control of the computer.
b. Windows accounts were found to have non-expiring passwords; passwords should be changed regularly to prevent password attacks. One user account has a blank or simple password, or could not be analyzed.
c. The Windows OS has two security updates missing, which require immediate installation to protect the computer. One update rollup is missing, which requires that the latest service pack or update rollup be obtained and installed using the download link.
2. Security and vulnerability assessment analysis tool –
OpenVAS for Linux OS
Use the ifconfig command in the terminal to check the IP address assigned to the Linux VM.
eth0 (the device name for the Linux Ethernet card) has the IP address 172.21.20.185 in this example. The IP address 127.0.0.1 is the loopback address, which points to localhost, the computer that applications or commands are run from. This address will be used to access the OpenVAS application on the VM.
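The distinction between the interface address and the loopback address can be checked with a short Python sketch using only the standard library (the addresses are the ones from the lab example):

```python
import ipaddress

# 127.0.0.1 is the IPv4 loopback address: traffic sent here never
# leaves the local machine, which is why it can reach the OpenVAS
# web interface running on the VM itself.
addr = ipaddress.ip_address("127.0.0.1")
print(addr.is_loopback)   # True

# The example interface address from the ifconfig output is an
# ordinary private (RFC 1918) address, not a loopback address.
eth0 = ipaddress.ip_address("172.21.20.185")
print(eth0.is_loopback)   # False
print(eth0.is_private)    # True
```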
The OpenVAS web interface runs on port 9392 and can be opened in the Mozilla Firefox browser.
After accepting the security exception (Add Exception), scan the IP address by typing 127.0.0.1 next to the 'Start Scan' button, then click it.
Status says Requested
Status percent changes to 94%
Status percent changes to 98%
Scan completed with severity 5.0 (Medium)
Vulnerabilities:
1) The remote server is configured to allow weak encryption and MAC algorithms.
2) It is possible to detect deprecated protocols that allow attackers to eavesdrop on the connection between service and clients to gain access to sensitive data transferred across the secured connection.
3) The host has OpenSSL installed and is prone to an information disclosure vulnerability, which may allow man-in-the-middle attackers to gain access to the plaintext data stream. The application layer is impacted, and weak ciphers are offered by the service.
Scan management results
All Security information by severity level
Assets Management – host
Host Details
Configuration Scanner details
Extras – My settings
Extras – Performance
Administration – User details
Administration – Roles
Administration – CERT feed management
The following is to be included in the SAR:
a. No Linux administrative vulnerability was present, as only one administrator was found on the computer. It is advised to keep the number of administrators to a minimum, because administrators have complete control of the computer.
Vulnerabilities present were as follows:
1) The remote server is configured to allow weak encryption and MAC algorithms.
2) It is possible to detect deprecated protocols that allow attackers to eavesdrop on the connection between service and clients to gain access to sensitive data transferred across the secured connection.
3) The host has OpenSSL installed and is prone to an information disclosure vulnerability, which may allow man-in-the-middle attackers to gain access to the plaintext data stream. The application layer is impacted, and weak ciphers are offered by the service.
b. Admin was the only user on the computer. Access to the admin password should be restricted, and the password changed regularly, to prevent password attacks. No user was found to have a blank or simple password, and no account failed analysis.
c. No Linux security updates were missing.
CYB 610 PROJECT 1 – SCREENSHOTS OF PASSWORD
CRACKING EXERCISE
Results of the experiment with the password cracking tools; please find answers to the questions below. The findings are also shared in the project report.
1. Which tool, on which operating system was able to recover
passwords the quickest? Provide examples of the timing by your
experimental observations.
The Ophcrack tool was able to recover passwords the quickest, on the Windows 7 Enterprise operating system.
Cain and Abel tool using BRUTE FORCE attack with NTLM
Hashes loaded
User          NT Password    Time left        Time taken    Time start    Time stop
1. Xavier     xmen           (blank)          < 1 min       12.01am       12.01am
2. Wolverine  Not cracked    3.1e+10 years    > 1 hr
3. Shield     Not cracked    2.9e+10 years    > 1 hr
4. EarthBase  Not cracked    2.9e+10 years    > 1 hr
5. dbmsAdmin  Not cracked    1.3e+17 years    > 1 hr
6. Kirk       Not cracked    1.3e+17 years    > 1 hr
7. Mouse      Not cracked    3.0e+10 years    > 1 hr
8. Rudolph    Not cracked    1.3e+17 years    > 1 hr
9. Snoopy     Not cracked    3.0e+10 years    > 1 hr
10. Spock     Not cracked    3.0e+10 years    > 1 hr
11. Apollo    Not cracked    2.9e+10 years    > 1 hr
12. Chekov    Not cracked    2.9e+10 years    > 1 hr
13. Batman    Not cracked    2.9e+10 years    > 1 hr
Cain and Abel tool using Dictionary attack with NTLM Hashes
loaded
(common stop position 3456292)
User          NT Password                      Time spent
1. Xavier     Not cracked
2. Wolverine  Motorcycle (position 1747847)    < 1 min
3. Shield     Not cracked                      < 1 min
4. EarthBase  Not cracked                      < 1 min
5. dbmsAdmin  Not cracked                      < 1 min
6. Kirk       Not cracked                      < 1 min
7. Mouse      Not cracked                      < 1 min
8. Rudolph    Not cracked                      < 1 min
9. Snoopy     Not cracked                      < 1 min
10. Spock     Not cracked                      < 1 min
11. Apollo    Not cracked                      < 1 min
12. Chekov    Not cracked                      < 1 min
13. Batman    Not cracked                      < 1 min
Ophcrack tool used for password cracking
User          LM Password
1. Xavier     xmen
2. Wolverine  Not found
3. Shield     Not found
4. EarthBase  Not found
5. dbmsAdmin  Not found
6. Kirk       Not found
7. Mouse      Not found
8. Rudolph    Not found
9. Snoopy     Not found
10. Spock     Not found
11. Apollo    M00n
12. Chekov    Riza
13. Batman    Bats
Time elapsed was 17 secs
2. Which tool(s) provided estimates of how long it might take to crack the passwords?
The Cain and Abel tool, using the brute force attack.
Cain and Abel provided the time left in years, days, or minutes, depending on the specified password length.
What was the longest amount of time it reported? 3.1e+10 years. And for which username? Wolverine.
3. Compare the amount of time it took for three passwords that
you were able to recover.
· Cain and Abel tool: Xavier user (xmen password cracked using brute force attack) = < 1 min
· Cain and Abel tool: Wolverine user (motorcycle password cracked using dictionary attack) = < 1 min
· Ophcrack tool: Batman user (bats password cracked in under 17 secs)
4. Compare the complexity of the passwords for those
discussed in the last question. What can you say about recovery
time relevant to complexity of these specific accounts?
Xavier's password (xmen) was easier and quicker to crack using a brute force attack due to its short length, while the brute force attack estimated a very long time to crack Wolverine's password (motorcycle). The dictionary attack found Wolverine's password (motorcycle) simple to crack because it exists in the tool's wordlists.
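To illustrate why a wordlist hit is so much cheaper than brute force, here is a hedged Python sketch of a toy dictionary attack. SHA-256 and the four-word list are illustrative stand-ins; Cain and Abel actually works against LM/NTLM hashes and multi-million-entry wordlists:

```python
import hashlib

def sha256_hex(word: str) -> str:
    """Hash a candidate password the way a toy cracker would."""
    return hashlib.sha256(word.encode()).hexdigest()

def dictionary_attack(target_hash: str, wordlist: list[str], start: int = 0):
    """Try each wordlist entry in order; return (word, position) on a hit.

    The `start` parameter models Cain and Abel's recorded wordlist
    position, which lets a halted attack resume where it left off.
    """
    for pos in range(start, len(wordlist)):
        if sha256_hex(wordlist[pos]) == target_hash:
            return wordlist[pos], pos
    return None, len(wordlist)

# Hypothetical mini wordlist; real ones hold millions of entries.
words = ["password", "dragon", "motorcycle", "letmein"]
target = sha256_hex("motorcycle")        # pretend this is the stolen hash
print(dictionary_attack(target, words))  # ('motorcycle', 2)
```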
The dictionary attack in the Cain and Abel tool keeps a record of the position in the wordlist, which allows you to continue from the word where you left off after halting the attack. A brute force attack takes the specified password length (e.g. a minimum of 1 and a maximum of 16 or more) into consideration and runs through every possible combination, which makes it CPU intensive and extremely slow.
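The brute-force cost can be made concrete with quick keyspace arithmetic in Python; the guess rate below is a hypothetical round number, not a measurement from the lab:

```python
# Keyspace size is charset_size ** length, so cost grows exponentially
# with password length. Compare a 4-letter lowercase password ("xmen")
# with a 10-letter lowercase one ("motorcycle").
LOWER = 26  # a-z

xmen_space = LOWER ** 4
motorcycle_space = LOWER ** 10
print(f"{xmen_space:,}")        # 456,976 combinations
print(f"{motorcycle_space:,}")  # 141,167,095,653,376 combinations

# At a hypothetical 1 million guesses per second:
rate = 1_000_000
print(xmen_space / rate, "seconds")             # under half a second
print(motorcycle_space / rate / 86400, "days")  # over 1,600 days
```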
The Ophcrack tool proved to be quicker and more powerful than Cain and Abel, cracking passwords for four users: Xavier, Apollo, Chekov, and Batman. One would have expected Ophcrack to succeed in cracking Wolverine's password (motorcycle). This shows that the longer and/or more complex a password is, the longer it takes to crack.
It is advisable to use a passphrase (e.g. thkGodisfri), or long, complex passwords drawing on all four character sets, as these are reasonably secure.
5. What are the 4 types of character sets generally discussed
when forming strong passwords?
Lower case letters, upper case letters, numbers (0 through 9), and special characters ($, *, #, !, and so on).
How many of the 4 sets should you use, as a minimum?
At least three, for example upper case letters, lower case letters, and numbers, depending on the company password policy.
What general rules are typically stated for minimum password length?
A minimum of eight characters in length.
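The two rules above (character-set diversity and minimum length) can be sketched as a small policy check in Python; the thresholds are illustrative and should be taken from your company's actual password policy:

```python
import string

def meets_policy(password: str, min_length: int = 8, min_classes: int = 3) -> bool:
    """Check a password against an example policy: minimum length plus
    character-class diversity (lower, upper, digits, specials)."""
    classes = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(not c.isalnum() for c in password),  # special characters
    ]
    return len(password) >= min_length and sum(classes) >= min_classes

print(meets_policy("xmen"))       # False: too short, one class
print(meets_policy("M00nBase!"))  # True: 9 chars, all four classes
```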
6. How often should password policies require users to change their passwords? Every 90 days.
7. Discuss the pros and cons of using the same username accounts and passwords on multiple machines.
Pros: you only need to remember one set of username and password for multiple machines.
Cons: hackers can gain access to other machines using the same account credentials once they have been compromised.
8. What are the ethical issues of using password cracking and recovery tools?
System administrators can use them to recover lost passwords and regain access to machines.
Users who have lost their passwords, or corporate entities that need to investigate any compromise or vulnerability they may have been exposed to, should be made aware of when and why such password cracking and recovery tools are run.
Recovered or cracked passwords should not be disclosed to third parties, and there should be no way to link passwords back to their users.
9. Are there any limitations, policies or regulations in their use on local machines?
The consent of the system owner should be acquired before using password recovery tools, as password cracking software may cause damage and/or loss of data.
Home networks?
As an individual, you can build your own lab and explore password cracking.
Small business local networks? Intranets? The internet? Where might customer data be stored?
We need to coordinate with clients to clearly define the scope of password recovery or cracking efforts and abide by the related rules of engagement.
It is illegal to launch password cracking tools against an individual's or organization's websites over the internet without adequate notice and/or consent.
It does not really matter how long or strong your password may be; what matters is how your confidential or protected data (the password) is stored.
10. If you were using these tools for approved penetration testing, how might you get the sponsor to provide guidance and limitations to your test team?
The test team will request guidance and limitations from the sponsor by scheduling a kickoff meeting where the timing and duration of the testing will be discussed, so that the normal business and everyday operations of the organization are not disrupted. As much information as possible will be gathered by identifying the targeted machines, systems and networks, the operational requirements, and the staff involved.
11. Discuss any legal issues in using these tools on home networks in states that have anti-wiretap communications regulations. Who has to know about the tools being used in your household?
Using these tools on home networks in states that have anti-wiretap communications regulations would involve obtaining the relevant legal documents protecting against any legal action, should anything go wrong during tests. If it is a rented apartment, the leasing office management needs to be informed.
Please find screenshots of the step-by-step password cracking exercise:
References
Password storage and hashing
● http://lifehacker.com/5919918/how-your-passwords-are-stored-on-the-internet-and-when-your-password-strength-doesnt-matter
● http://www.darkreading.com/safely-storing-user-passwords-hashing-vs-encrypting/a/d-id/1269374
● http://www.slashroot.in/how-are-passwords-stored-linux-understanding-hashing-shadow-utils
Click on Lab broker to allocate resources
Signing into the workspace with StudentFirst/[email protected]
On the desktop of VM WINATK01, click Lab Resources, then the Applications folder; locate and launch Cain.
After launch of Cain
Maximize screen and click Cracker tab
20 user accounts on the local machine are used to populate the right window.
1 hash was cracked for the Xavier user using brute force with the NTLM hash.
Brute force attack on Wolverine user
Results after using Cain and Abel application for both types
(brute force and dictionary) of attacks
Results after using Ophcrack application to crack all user
passwords
Click on Ophcrack -> click 'Crack', then stop and take note of the time taken.
Running Head: Risk Assessment Report (RAR)
Risk Assessment Report (RAR)
CHOICE OF ORGANIZATION IS UNIVERSITY OF MARYLAND MEDICAL CENTER (UMMC) OR A FICTITIOUS ORGANIZATION (BE CREATIVE)
· Define risk, remediation and security exploitation. (Cite)
· Read the attached documents on risk assessment resource for
the process in preparing the risk assessment
START TO READ:
Organizations perform risk assessments to ensure that they are
able to identify threats (including attackers, viruses, and
malware) to their information systems. According to the
National Institute of Standards and Technology (NIST),
Risk assessments address the potential adverse impacts to
organizational operations and assets, individuals, other
organizations, and the economic and national security interests
of the United States, arising from the operation and use of
information systems and the information processed, stored, and
transmitted by those systems (NIST, 2012).
When a risk assessment is completed, organizations rate risks at
different levels so that they can prioritize them and create
appropriate mitigation plans.
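One common way to rate and prioritize risks after an assessment is a likelihood-times-impact matrix; the scales, thresholds, and findings below are illustrative assumptions rather than values from NIST SP 800-30:

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Score a risk on 1-3 likelihood and impact scales and map the
    product to a qualitative level (illustrative thresholds)."""
    score = likelihood * impact  # 1 (low/low) .. 9 (high/high)
    if score >= 6:
        return "High"
    if score >= 3:
        return "Moderate"
    return "Low"

# Hypothetical findings from a scan, rated for prioritization:
findings = {
    "Missing security updates": (3, 3),
    "Non-expiring passwords": (2, 2),
    "Unneeded file shares": (1, 2),
}
for name, (lik, imp) in sorted(findings.items(),
                               key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{risk_level(lik, imp):8s} {name}")
```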
References
U.S. Department of Commerce, National Institute of Standards
and Technology (NIST). (2012). Information security: Guide for
conducting risk assessments (Special Publication 800-30).
Retrieved August 5, 2016, from
http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-30r1.pdf
END OF READING:
· Identify threats, vulnerabilities, risks, and likelihood of exploitation, and suggest remediation.
· Include the OPM OIG Final Audit Report findings and
recommendations as a possible source for methods to remediate
vulnerabilities.
· Research through the attached doc and cite
THREAT IDENTIFICATION
Threat attacks on vulnerabilities
Potential hacking actors
Types of remediation and mitigation techniques
IP address spoofing/cache poisoning attacks
Denial of service (DoS) attacks
Packet analysis/sniffing
Session hijacking attacks
Distributed denial of service (DDoS) attacks
· prepare a Risk Assessment Report (RAR) with information on
the threats, vulnerabilities, likelihood of exploitation of security
weaknesses, impact assessments for exploitation of security
weaknesses, remediation, and cost/benefit analyses of
remediation.
· Devise and include a high-level plan of action with interim
milestones (POAM), in a system methodology, to remedy your
findings.
· Summarize and include the results you obtained from the vulnerability assessment tools (i.e., MBSA and OpenVAS, Project 2 Lab) in the RAR.
PURPOSE
The purpose of this risk assessment is to evaluate the adequacy of the Amazon corporation's security. This risk assessment report provides a structured but qualitative assessment of the operational environment for the Amazon corporation. It addresses issues of sensitivity, threat analysis, vulnerability analysis, risk analysis, and safeguards applied in the Amazon corporation. The report and the assessment recommend the use of cost-effective safeguards in order to mitigate threats, as well as the associated exploitable vulnerabilities, in the Amazon corporation.
THE ORGANISATION
The Amazon corporation's system environment is run as a distributed client and server environment consisting of a Microsoft Structured Query Language (SQL) database and its supporting application code. The system contains SQL data files, Python application code, and executable Java and JavaScript. The SQL production data files, documented as consisting of SQL stored procedures and SQL tables, reside on a cloud storage area network attached to an HP server running Windows XP and MS SQL 2000. The Python application code resides on a different IBM server running Kali Linux. Both servers are housed in the main office Building 233 Data Center at the CDC Clifton Road campus in Atlanta, GA.
The Amazon corporation's executables reside on a fileserver running Windows 2000 and Kali Linux, or occasionally on a local workstation, depending upon load and job requirements.
Users are physically distributed across multiple sales offices in Atlanta, Pittsburgh, Ft. Collins, Research Triangle Park, Cincinnati, and San Juan. Their desktop computers are physically connected to a wide area network (WAN). Some users revealed that they usually connect via secured dial-up and DSL connections through a Citrix server. Normally, a user connects to an active application server in their city that hosts the Amazon corporation's application, and to the shared database server located in Atlanta.
SCOPE
The scope of this risk assessment is to assess the system's use of resources and the controls implemented, and to report on plans to eliminate and manage vulnerabilities exploitable by the threats identified in this report, whether internal or external to Amazon. If not eliminated but exploited, these vulnerabilities could result in:
· Unauthorized disclosure of data, unauthorized modification to the system, its data, or both, and denial of service or denial of access to data for authorized users.
This Risk Assessment Report for Amazon corporation evaluates confidentiality (protection from unauthorized disclosure of system and data information), integrity (protection from improper modification of information), and availability (protection from loss of system access).
The intrusion detection tools used in the methodology are the MBSA security analyzer in the CLAB 699 Cyber Computing Lab, the OpenVAS security analyzer in the CLAB 699 Cyber Computing Lab, and the Wireshark security analyzer. In conducting the analysis, the screenshots taken with each of the tools were examined with a view to arriving at relevant conclusions. The recommended security safeguards are meant to allow management to make proper decisions about security-related initiatives at Amazon.
METHODOLOGY
This risk assessment of Amazon Corporation was conducted using the guidelines in NIST SP 800-30, Risk Management Guide for Information Technology Systems, and the OPM OIG Final Audit Report findings and recommendations. The assessment is broad in scope and evaluates Amazon Corporation's security vulnerabilities affecting confidentiality, integrity, and availability. The assessment also recommends appropriate security safeguards, allowing management to make knowledge-based decisions on security-related initiatives at Amazon Corporation.
This initial risk assessment report provides an independent review to help management at Amazon determine the appropriate level of security required for their system and to support the development of a stringent System Security Plan. The review also provides the information required by the Chief Information Security Officer (CISO) and the Designated Approving Authority (DAA), also known as the Authorizing Official, to assist in making an informed decision about authorizing the system to operate. The intrusion detection tools used in the methodology include the MBSA security analyzer, the OpenVAS security analyzer, and the Wireshark security analyzer.
DATA
The data collected using MBSA and the other tools reveals that the following internal routines were performed, as shown in the LAB 2 and LAB 3 screenshots from the CLAB 699 Cyber Computing Lab provided together with the question.
The MBSA and OpenVAS security analyzers converted the raw scan data and succeeded in turning the identified vulnerabilities into risks, based on the following methodology in the CLAB 699 Cyber Computing Lab:
· Categorizing vulnerabilities, as evident in the LAB 2 screenshots
· Comparing threat vectors, as evident in the LAB 3 screenshots
· Assessing the probability of occurrence and possible impact, as evident in the LAB 2 screenshots
· Authentication assessment, as evident in the LAB 2 screenshots
· Determining authentication threat vectors, as evident in the LAB 3 screenshots
The MBSA and OpenVAS security analyzers also had routines which communicated with the Greenbone security assessment centre, especially to provide the automated recommendations evident in the LAB 2 and LAB 3 screenshots in the CLAB 699 Cyber Computing Lab.
The Greenbone security assessment centre particularly succeeded in doing the following, as evident in the output file: it lists the risks and the associated recommended controls to mitigate those risks. Management has the option of doing the following in the corporation:
Accepting the risks and the recommended controls, or negotiating an alternative mitigation, while reserving the right to override the Greenbone security assessment centre and incorporate the proposed recommended controls into Amazon's Plan of Action and Milestones.
RESULTS
The following operational and managerial vulnerabilities were identified at Amazon while using the project methodology:
· Inadequate adherence to and advocacy for existing security controls
· Inadequate adherence to management of changes to the information systems infrastructure
· Weak authentication protocols
· Inadequate adherence to life-cycle management of the information systems
· Inadequate adherence to and advocacy for a configuration management and change management plan
· Inadequate adherence to and advocacy for maintaining a robust inventory of systems, servers, databases, and network devices
· Inadequate adherence to and advocacy for mature vulnerability scanning tools
· Inadequate adherence to and advocacy for valid authorizations for many systems
· Inadequate adherence to and advocacy for plans of action to remedy the findings of previous audits
The following attacks were identified at Amazon while using the above project methodology:
IP address spoofing/cache poisoning attacks; denial of service (DoS) attacks; packet analysis/sniffing; session hijacking attacks; and distributed denial of service (DDoS) attacks.
NIST SP 800-63 describes the classification of potential harm and impact as follows, as do the OPM OIG Final Audit Report findings and recommendations:
Inconvenience, distress, or damage to standing or reputation; financial loss or agency liability; harm to agency programs or public interests; unauthorized release of sensitive information; personal safety; and civil or criminal violations.
Potential Impact of Inconvenience, Distress, or Damage to
Standing or Reputation:
· Low— limited, short-term inconvenience, consisting of
distress or embarrassment to any party within Amazon.
· Moderate— serious short-term or limited long-term inconvenience, consisting of distress or damage to the standing or reputation of any party within Amazon.
· High— severe or serious long-term inconvenience, consisting of distress or damage to the standing or reputation of any party within Amazon.
Potential Impact of Financial Loss
· Low— insignificant or inconsequential unrecoverable
financial loss to any party consisting of an insignificant or
inconsequential agency liability within Amazon.
· Moderate— a serious unrecoverable financial loss to any
party, consisting of a serious agency liability within Amazon.
· High—severe or catastrophic unrecoverable financial loss to
any party; consisting of catastrophic agency liability within
Amazon.
Potential Impact of Harm to Agency Programs or Public
Interests
· Low, a limited adverse effect on organizational operations or
assets, or public interests within Amazon.
· Moderate— a serious adverse effect on organizational operations or assets, or public interests within Amazon.
· High— a severe or catastrophic adverse effect on organizational operations or assets, or public interests within Amazon.
Potential impact of Unauthorized Release of Sensitive
Information
· Low- a limited release of personal, U.S. government sensitive
or commercially sensitive information to unauthorized parties
resulting in a loss of confidentiality with a low impact
· Moderate— a release of personal, U.S. government sensitive
or commercially sensitive information to unauthorized parties
resulting in loss of confidentiality with a moderate impact.
· High— a release of personal, U.S. government sensitive or commercially sensitive information to unauthorized parties resulting in loss of confidentiality with a high impact.
Potential impact to Personal Safety
· Low— minor injury not requiring medical treatment within Amazon.
· Moderate— moderate risk of minor injury or limited risk of injury requiring medical treatment within Amazon.
· High— a risk of serious injury or death within Amazon.
Potential Impact of Civil or Criminal Violations
· Low— a risk of civil or criminal violations of a nature that
would not ordinarily be subject to enforcement efforts within
USA.
· Moderate— a risk of civil or criminal violations that may be
subject to enforcement efforts within USA.
· High—a risk of civil or criminal violations that are of special
importance to enforcement programs within USA.
CONCLUSIONS AND RECOMMENDATION
In the risk assessment, two striking issues emerged, and they are addressed below. First, an employee was terminated and his user ID was not removed from the system. This is a dependency-failure type of vulnerability and risk pair, and it carries an overall risk that is moderate.
The RECOMMENDED safeguard is to remove the user ID from the system upon notification of termination.
Secondly, VPN/keyfob access does not meet the certification and accreditation level stipulated in NIST SP 800-63. This is a kind of vulnerability that touches on inconvenience, standing, and reputation, and it carries an overall risk that is moderate.
The RECOMMENDED safeguard is to migrate all remote authentication roles to CDC or another approved authority.
This Risk Assessment Report for the organization identifies risks to operations, especially in those domains which fail to meet the minimum requirements and for which appropriate countermeasures have yet to be implemented. The RAR also determines the probability of occurrence and proposes countermeasures aimed at mitigating the identified risks, in an endeavor to provide an appropriate level of protection and to satisfy all the minimum requirements imposed by the organization's policy document.
The system security policy requirements are now satisfied, with the exception of those specific areas identified in this report. The countermeasures recommended in this report add the additional security controls needed to meet policies and to effectively manage the security risk to the organization and its operating environment. Finally, the Certification Official and the Authorizing Official must determine whether the totality of the protection mechanisms provides a sufficient level of security adequate for the protection of this system and its resources and information. The Risk Assessment Results supply critical information and should be carefully reviewed by the AO prior to making a final accreditation decision.
References
National Institute of Standards and Technology (NIST) (2014).
Assessing security and privacy
controls in federal information systems and organizations.
NIST Special Publication 800-53A Revision 4. Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53Ar4.pdf
National Institute of Standards and Technology (NIST) (2010).
Guide for applying the risk
management framework to federal information systems. NIST Special Publication 800-37 Revision 1. Retrieved from http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf
Running Head: Security Assessment Report (SAR)
Security Assessment Report (SAR)
CHOICE OF ORGANIZATION IS UNIVERSITY OF MARYLAND MEDICAL CENTER (UMMC) OR A FICTITIOUS ORGANIZATION (BE CREATIVE)
Introduction
· Research into OPM security breach.
· What prompts this assessment exercise in our choice of organization? "But we have a bit of an emergency. There's been a security breach at the Office of Personnel Management. We need to make sure it doesn't happen again."
· What were the hackers able to do? The OPM OIG report found that the hackers were able to gain access through compromised credentials.
· How could it have been averted? a) The security breach could have been prevented if the Office of Personnel Management, or OPM, had abided by previous auditing reports and security findings; b) access to the databases could have been prevented by implementing various encryption schemas; and c) the breach could have been identified by running regularly scheduled scans of the systems.
Organization
· Describe the background of your organization, including its purpose and organizational structure.
· Diagram the network system, including the LAN, WAN, and systems (use the OPM systems model of LAN-side networks), the intra-network, and the WAN-side networks, the internet.
· Identify the boundaries that separate the inner networks from the outside networks.
· Include a description of how these platforms are implemented in your organization: common computing platforms, cloud computing, distributed computing, centralized computing, and secure programming fundamentals (cite references).
Threats Identification
Start Reading: Impact of Threats
The main threats to information system (IS) security are
physical events such as natural disasters, employees and
consultants, suppliers and vendors, e-mail attachments and
viruses, and intruders.
Physical events such as fires, earthquakes, and hurricanes can
cause damage to IT systems. The cost of this damage is not
restricted to the costs of repairs or new hardware and software.
Even a seemingly simple incident such as a short circuit can
have a ripple effect and cost thousands of dollars in lost
earnings.
Employees and consultants: In terms of severity of impact,
employees and consultants working within the organization can
cause the worst damage. Insiders have the most detailed
knowledge of how the information systems are being used. They
know what data is valuable and how to get it without creating
tracks.
Suppliers and vendors: Organizations cannot avoid exchanging information with vendors, suppliers, business partners, and customers. However, the granting of access rights to any IS or network, if not done at the proper level, that is, at the least level of privilege, can leave the IS or network open to attack.
Someone with an authorized connection to the network can
enter and probe further into the network to sniff proprietary or
confidential data. In this case, Fluffy Toys was obliged to give
one supplier better terms and is losing another supplier
altogether.
Email attachments and viruses: Viruses can corrupt an IS and
completely disrupt the normal functioning of a business. Most
virus infections spread through e-mail attachments and
downloads. E-mail attachments can also be used to implant
malware or malicious programs in order to steal data or even
money. Trojan horses, logic bombs, and trapdoors are types of
malicious code that can corrupt an IS and destroy or
compromise information.
Intruders: Corporate espionage agents are typically hired by
competitors to obtain confidential or proprietary information,
such as patents, intellectual property, and trademarks. Hackers
are usually out to get any information they can lay their hands
on. Regardless of the identity of the intruder, the damage to the
organization can be substantial if confidential information is
lost.
Also read the Step 2 attachments.
End reading:
· From above, identify insider threats that are a risk to your
organization.
· Differentiate between the external threats to the system and the insider threats. Identify where these threats can occur in the previously created diagrams.
· Define threat intelligence, and explain what kind of threat
intelligence is known about the OPM breach.
· Relate the OPM threat intelligence to your organization. How
likely is it that a similar attack will occur at your organization?
· Mention the security communication methods, message protocols, and security data transport methods used in your organization, such as TCP/IP, SSL, and others.
· Describe the purpose and function of firewalls for the organization's network systems, and how they address the threats and vulnerabilities you have identified.
· Describe the value of using access control, database transaction, and firewall log files.
· Describe the purpose and function of encryption, as it relates
to files and databases and other information assets on the
organization's networks.
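The transport-security and encryption points above can be made concrete with a minimal, standard-library sketch of how a client-side TLS context with strict verification could be configured. This is an illustrative assumption about good practice, not the actual configuration of any organization discussed in this report.

```python
import ssl

# Build a client-side TLS context with secure defaults:
# certificate validation and hostname checking are both enabled.
context = ssl.create_default_context()

# Refuse legacy protocol versions; TLS 1.2 is a common modern floor.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# These attributes confirm the context will verify the server.
print(context.verify_mode == ssl.CERT_REQUIRED)  # certificate must validate
print(context.check_hostname)                    # hostname must match the cert
```

A socket wrapped with this context would refuse to complete a handshake with a server presenting an invalid or mismatched certificate, which is the property the encryption discussion above is asking for.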
Scope
Methodology
Data
· Share and relate the data from evaluating the strength of passwords used by employees (password-cracking tools ophcrack and Cain & Abel, test data from project lab 1); run scans using security and vulnerability assessment analysis tools such as MBSA, OpenVAS (data from project lab 2), Nmap, or NESSUS (proj lab 3 attached); and sample and scan network traffic using Wireshark (proj lab 3 attached) on either the Linux or Windows systems. Wireshark allows you to inspect traffic at all OSI layers.
READ NOTES about other network security tools and relate
their use at your organization
· Network Mapper (Nmap) is a security scanner that is used to
discover hosts and services on a computer network. Based on
network conditions, it sends out packets with specific
information to the target host device and then evaluates the
responses. Thus, it creates a graphical network map illustration.
· Nessus is a UNIX vulnerability assessment tool. It was a free
and open-source vulnerability scanner until 2005. Nessus'
features include remote and local security checks, client/server
architecture with a GTK graphical interface, and an embedded
scripting language for plugins.
· Netcat is a feature-rich network debugging and exploration
tool. It is designed to be a reliable back-end tool for other
programs. It reads and writes data across TCP and UDP network
connections.
· GFI LanGuard is an all-in-one package. It is a network
security and vulnerability scanner that helps to scan for, detect,
assess, and correct problems on the network.
· Snort is a free and open-source responsive IDS. It performs
real-time traffic analysis and packet logging on IP networks.
The program can also be used to detect probes and attacks.
Snort can be configured in three main modes—sniffer, packet
logger, and network intrusion detector. In the sniffer mode, it
reads network packets. In the packet logger mode, it logs the
packets onto a disk; and in the network intrusion detector mode,
it logs traffic. In this mode, it also analyzes the traffic against a
set of rules and acts accordingly.
· Fport is a free tool that scans the system and maintains a
record of the ports that are being opened by various programs.
System administrators can examine the Fport record to find out
if there are any alien programs that could cause damage. Any
nonstandard port, typically opened up by a virus or Trojan,
serves as an indicator of system compromise.
· PortPeeker is a freeware security tool used for capturing
network traffic using TCP, UDP, or Internet Control Message
Protocol (ICMP).
· SuperScan is a tool used to evaluate a computer's security. It
can be used to test for unauthorized open ports on the network.
Hackers also use this tool to check for insecure ports in order to
gain access to a system. SuperScan helps to identify open ports
and the services running on them. It also runs queries such as
Whois and ping.
End of reading
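The port-discovery idea behind tools such as Nmap and SuperScan can be sketched as a basic TCP connect scan. This is an illustrative, standard-library-only sketch, not how Nmap itself is implemented (Nmap also uses raw SYN probes, service detection, and timing engines), and it should only ever be pointed at hosts you are authorized to test; here it probes a listener we open ourselves on localhost.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demonstration against a listener we control on localhost.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # the OS assigns a free port
listener.listen(1)
known_open = listener.getsockname()[1]

print(tcp_connect_scan("127.0.0.1", [known_open]))  # the open port is reported
listener.close()
```

A full connect scan like this completes the TCP handshake and is therefore easy for auditing tools to log, which is exactly the visibility that the stealth-scan techniques discussed later in this report try to avoid.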
Results
Network Security Issues:
· Provide an analysis of the strength of passwords used by the employees in your organization, emphasizing weak and vulnerable passwords as a security issue for your organization (from the use of password-cracking tools to crack weak and vulnerable passwords, proj lab 1).
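The idea behind the lab 1 password-cracking exercise (ophcrack, Cain & Abel) can be illustrated with a toy dictionary attack: hash each candidate word and compare it against the captured hash. This sketch uses MD5 purely because it is a classically weak choice; the wordlist and password below are invented for illustration, not taken from any lab data.

```python
import hashlib

def md5_hex(word):
    """Hex MD5 digest of a candidate password."""
    return hashlib.md5(word.encode("utf-8")).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Return the word whose MD5 digest matches target_hash, or None."""
    for word in wordlist:
        if md5_hex(word) == target_hash:
            return word
    return None

# Hypothetical captured hash of a weak employee password.
stolen = md5_hex("password123")
wordlist = ["letmein", "qwerty", "password123", "S3cure!Phrase"]

print(dictionary_attack(stolen, wordlist))  # → password123
```

A long, random passphrase would not appear in any feasible wordlist, which is why the SAR's recommendation of strong, regularly changed passwords directly defeats this class of attack.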
Findings
Reflect any weaknesses found in the network and information
system diagrams previously created and the security issues the
following may cause:
· Some critical security practices, such as lack of diligence toward security controls and lack of management of changes to the information systems infrastructure, were cited as contributors to the massive data breach in the OPM Office of the Inspector General's (OIG) Final Audit Report, which can be found in open-source searches. (research and cite)
· weak authentication mechanisms;
· lack of a plan for life-cycle management of the information
systems;
· lack of a configuration management and change management
plan;
· lack of inventory of systems, servers, databases, and network
devices;
· lack of mature vulnerability scanning tools;
· lack of valid authorizations for many systems, and
· lack of plans of action to remedy the findings of previous audits (POA&M).
· Determine the role of firewalls, encryption, and auditing of the RDBMS that could assist in protecting information and monitoring the confidentiality, integrity, and availability of the information in the information systems.
· In the previously created Wireshark files (see proj lab 3 attached), identify whether any databases had been accessed. What are the IP addresses associated with that activity?
· Provide analyses of the scans and any recommendation for
remediation, if needed.
· Identify any suspicious activity and formulate the steps in an incident response that could have been, or should be, enacted.
· Include the responsible parties that would provide that incident response and any follow-up activity. Include this in the SAR. Please note that some scanning tools are designed to be undetectable.
· While running the scan and observing network activity with
Wireshark, attempt to determine the detection of the scan in
progress. If you cannot identify the scan as it is occurring,
indicate this in your SAR.
Product I bought starts from here
INTRODUCTION
OPERATING SYSTEMS
An operating system is an interface that sits between a user and hardware resources. Basically, it is software that comprises, among others, the following modules: file management, memory management, process management, input/output control, and peripheral device control. This description applies to all operating systems.
ROLE OF USERS.
In order to appreciate the role of users, it must be recognized that an operating system provides users the services to execute their programs in a convenient way. So the operating system interacts with users when the users ask it to perform each of the following roles:
Role of direct program execution, using threads and parallel programming routines.
Role of I/O operation requests, by writing to external devices and reading from the same.
Role of file system manipulation, by creating directories.
Role of requesting communication, by stopping some running processes or issuing interrupts and signals.
Role of requesting program verification, by getting error detection and flagging of errors, especially by the parsers, debuggers, and compilers that are part of the operating system.
Role of protection, especially by using locks for drives or documents so that unauthorized personnel may not access the resources for malicious use.
Role of resource allocation, where a programmer uses system calls to perform some routines instead of programming his own; these routines operate in critical regions to avoid deadlock and starvation issues. Furthermore, users get interactivity with operating systems, which is defined as the ability of users to interact with a computer through each of the following: providing an interface to interact with the system; managing input devices that take input from the user, such as the keyboard; and managing output to the user. Multitasking is another feature that allows users to interact with the system. Multitasking is enabled by threads and symmetric multiprocessing (SMP). The role of schedulers cannot be ignored in multitasking, and the operating system handles multitasking by handling operations within a time slice.
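The time-slice handling mentioned above can be sketched as a toy round-robin scheduler: each runnable job receives a fixed quantum in turn until its remaining work is exhausted. The job names and quantum below are invented for illustration; real schedulers add priorities, I/O blocking, and much more.

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: dict of name -> remaining work units. Returns completion order."""
    ready = deque(jobs.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= quantum                  # the job runs for one time slice
        if remaining > 0:
            ready.append((name, remaining))   # preempted: back of the queue
        else:
            finished.append(name)             # the job is done
    return finished

print(round_robin({"editor": 3, "compiler": 6, "backup": 2}, quantum=2))
# → ['backup', 'editor', 'compiler']
```

Note how the short job finishes first even though it was submitted last in the queue order; bounding each turn by the quantum is what gives every user the impression of simultaneous access.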
KERNEL AND OPERATING SYSTEM APPLICATIONS
The kernel is the core of the operating system and executes privileged instructions for the management of the following resources:
· Memory management: Which algorithms avoid memory leaks and buffer overflow attacks, and allow shadow paging and the efficient use of virtual pages and pointers?
· Processor management: Which strategies, when applied, reduce idle time and avoid starvation and deadlock over processor time for competing resources?
· Device management: Which techniques reduce disk access times, disk seek times, and disk writes and reads for internal head disk drives and external storage? How do routines respond to beeps, alarms, keypresses, Enter, and other keyboard commands?
· File management: Which are the most efficient structures for the file system to minimize search times and also increase performance, even in distributed environments?
· Security regards and concerns: How can passwords and other techniques be integrated to reduce overall system vulnerability and protect data, information, and resources?
· Control of system performance: How does the OS regulate delays between a request for a service and the response from the requested entity?
· Job and process accounting: How do we know the times and resources each user wanted, so as to form models that can be used for learning and estimating resource requirements?
· Coordination among software as well as among users: How does communication take place between compilers, assemblers, debuggers, linkers, loaders, and interpreters about which component is using which resource, and among system users?
TYPES OF OPERATING SYSTEMS
· Batch operating system: There is a lack of direct interaction between the user and the computer, so the user prepares his job on punch cards and gives it to a computer operator, much like calling a customer care centre nowadays. To increase processing, batches of jobs are prepared, meaning they have similar processing cycles and run at one time. It was the initial generation of computing systems.
· Time-sharing operating systems: This second-generation OS, mostly in Unix/Linux, allows many people located at various terminals to use a particular computer at the same time. The processor's time is shared among multiple users simultaneously, hence the term time-sharing. In such distributed computing environments, processors are connected and use message-passing systems to communicate, and because of conditions such as global starvation and global deadlocks, an additional layer of software called middleware is used and the use of cohort and election algorithms is justified.
· A real-time system: A real-time system is defined as a data processing system in which the time interval required to process and respond to a particular input is so small that it controls the rest of the environment. These systems are called real-time because an input is processed against the clock and a response is given within a restricted time. Systems such as temperature-regulating systems and air traffic control are real-time, as work goes as per the clock.
VULNERABILITIES.
A vulnerability is a weakness in the design or operation of a system that can be exploited by a threat or a threat agent, with the consequence of causing an undesirable impact, especially in the areas of computer information confidentiality, data and information integrity, or service availability. It can also be a phenomenon created by configuration errors which leave your current network, or hosts on your computer network, exposed to malicious attacks from either local or remote users.
Firewalls in a VLAN or WAN. See the DMZ diagram included.
Application servers in networked system architectures. See the DMZ diagram included.
Web servers in networked system architectures. See the DMZ diagram included.
Operating systems, discussed in chapter one.
Fire suppression systems. In the diagram below, which includes the architecture I have discussed and investigated, the internet is supplied from an ISP (Internet Service Provider), shown with a cloud icon. This creates the WAN (Wide Area Network) component mentioned previously.
The firewall separates the back end of the WAN architecture from the end-user side and is shown as the orange books-on-a-shelf icon depicting a library arrangement. Beneath the firewall is the switch, appearing as a box with slots, for determining which subnetwork segment a packet belongs to. Similar symbols will indicate such switching and forwarding services.
The devices with network addresses appended are the routers, which determine the shortest route a packet should be placed on so as to reach its destination. DMZ is an acronym for the demilitarized zone in a network.
The computers which individuals use are the bottom nodes of the structure. This structure is a non-binary tree, and at each level we can identify the services being performed as per the TCP/IP protocols: securing connections and sessions, routing, switching and forwarding, and encapsulation as well as encoding and decoding of packet data and packet formats.
Level 1 of the tree: Connection and connectionless services, integrated with encoding and encapsulation; sessions and connections are created by incorporated transaction monitors.
Level 2 of the tree: Routing services; use of routing algorithms like OSPF.
Level 3 of the tree: Switching and forwarding services; use of (MAC) address tables.
Level 4 of the tree: De-encapsulation and decoding services; the presentation layer.
The physical layer and data link layer are covered by the transport media (cables) in the diagram.
WINDOWS VULNERABILITY.
A threat is an adversarial force that can directly cause harm to the availability, integrity, or confidentiality of a computer information system, including all its subsystems. A threat agent is an element that provides the delivery mechanism for a threat, while an entity that initiates the launch of a threat is referred to as a threat actor.
Threat actors are normally motivated by excessive curiosity, large monetary gain without work, significant political leverage, some form of social activism, or, lastly, revenge.
Windows vulnerabilities summarized:
Audit compromise: Reported in Windows 8; internal procedures were not followed in generating audited results, so a compromise was programmed at Bell Labs. For source, see references.
Logic bomb: Planted at the National Institute for Carnegie Research on Windows 1998 and discovered by programmers. For source, see references.
Communication failure: Occurred on a hacked distributed air system for the FAA (Federal Aviation Administration) and reported to Microsoft databases; also happened with NASA. For source, see references.
Cyber brute force: Reported in Windows 2000 in Stanford laboratories, with a brute-force attack on an encrypted message, that is, trying all possible combinations of passwords. For source, see references.
Cyber disclosure: Reported in Windows 8, in which Microsoft databases reported complaints about a hacked Department of State for Immigration Services. For source, see references.
INTRUSION METHODS IN WINDOWS:
Stealth port scans: This is an advanced intrusion technique in which port scanning cannot be detected by auditing tools. Normally, detecting intrusion is easy by observing frequent attempts to connect in which no data is exchanged. In stealth port scans, the scans are performed at a very low rate, such that it is hard for auditing tools to identify the connection requests or the malicious attempt to intrude into the computer system. Occurred on a hacked distributed military control system for the DoD and reported to Microsoft databases; also happened with NASA.
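Why low-and-slow scanning evades detection can be shown with a toy rate-based detector of the kind an auditing tool might use: flag a source only if its connection attempts within a sliding time window reach a threshold. The IPs, timestamps, and thresholds below are invented for illustration; real IDSs such as Snort use far richer heuristics.

```python
def flag_scanners(attempts, window, threshold):
    """attempts: list of (timestamp, source_ip) pairs. Flag any IP whose
    attempt count within a `window`-second span reaches `threshold`."""
    flagged = set()
    by_ip = {}
    for ts, ip in sorted(attempts):
        times = by_ip.setdefault(ip, [])
        times.append(ts)
        # Keep only the timestamps still inside the current window.
        times[:] = [t for t in times if ts - t <= window]
        if len(times) >= threshold:
            flagged.add(ip)
    return flagged

fast = [(t, "10.0.0.5") for t in range(0, 10)]       # one probe per second
slow = [(t, "10.0.0.9") for t in range(0, 600, 60)]  # one probe per minute
print(flag_scanners(fast + slow, window=30, threshold=5))  # → {'10.0.0.5'}
```

The fast scanner trips the threshold almost immediately, while the stealthy one never accumulates enough attempts inside any window, which is exactly the evasion described above.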
CGI attacks: The Common Gateway Interface is an interface between client-side computing and server-side computing. Cybercriminals who are good programmers can break into computer systems even without the usual login capabilities; they just bypass them. Reported in Windows XP, in which Microsoft databases reported complaints about a hacked NSA department for security services.
SMB probes: A Server Message Block works as an application-layer protocol that functions by providing permissions to files, ports, processes, and so on. A probe into SMB can check for shared entities that are available on the system. If a cybercriminal uses an SMB probe, he can detect which files or ports are shared on the system. Cyber brute force: Reported in Windows 10 in Carnegie research laboratories, with a brute-force attack on an encrypted message, that is, trying all possible combinations of passwords.
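The brute-force idea just described, trying all possible combinations, can be sketched with the standard library: enumerate every candidate over an alphabet until one hashes to the captured digest. The 3-digit PIN and its SHA-256 digest are invented for illustration; real attacks target far larger keyspaces with dedicated tooling.

```python
import hashlib
from itertools import product

def brute_force(target_digest, alphabet, length):
    """Try every string of `length` characters over `alphabet` until its
    SHA-256 hex digest matches; return the recovered string or None."""
    for combo in product(alphabet, repeat=length):
        candidate = "".join(combo)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_digest:
            return candidate
    return None

# Hypothetical stolen digest of a 3-digit PIN.
stolen = hashlib.sha256(b"427").hexdigest()
print(brute_force(stolen, "0123456789", 3))  # → 427
```

The search space grows exponentially with length and alphabet size, which is why the SAR's recommendations on password length and account lockout directly raise the cost of this attack.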
LINUX VULNERABILITY ATTACKS
A threat actor might purposefully launch a threat using an agent. A threat actor could, for instance, be a trusted employee who commits an unintentional human error, such as clicking on an email designed as a phishing email, which then downloads malware.
Scavenging: Reported in a variant of UNIX, in which company databases reported complaints about Oracle security files. For source, see references.
Software tampering: Reported in Ubuntu; internal command controls were overtasked to work differently in a publishing firm in the UK. For source, see references.
Theft: Planned at the National Institute for Carnegie Research, where hard disks were stolen by programmers. For source, see references.
Time and state: Occurred on a hacked distributed medical dosage injection system in the Maldives, where a race condition occurred. For source, see references.
Transportation accidents: Reported in Kali Linux at MIT, when some computers in transportation were damaged. For source, see references.
LINUX INTRUSION DETECTION:
OS fingerprinting attempts: In OS fingerprinting, the OS details of a target computer are sought, and the attacker goes after the same. Information sought includes the vendor name, underlying OS, OS generation, device type, and such. Occurred on a hacked distributed airport clearance system in Panama. For source, see references.
Buffer overflows: In buffer overflow attacks, the input provided to a program overruns the buffer's capacity and spills over to overwrite data stored at neighbouring memory locations. Reported in Ubuntu; program routines overflowed buffer memory by design in a cargo clearance firm in France. For source, see references.
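Because Python itself bounds-checks memory, the overflow mechanics are easiest to show with a toy fixed-size field: an unchecked copy clobbers the adjacent byte, while a length check (the classic mitigation) rejects oversized input. Everything here is an invented illustration, not code from any reported incident.

```python
def unchecked_copy(buffer, data):
    """Write `data` into `buffer` with no bounds check, like an
    unchecked strcpy: extra bytes spill into neighbouring storage."""
    for i, b in enumerate(data):
        buffer[i] = b
    return buffer

def checked_copy(buffer, data, field_size=8):
    """The mitigation: reject input longer than the intended field."""
    if len(data) > field_size:
        raise ValueError("input exceeds field size")
    for i, b in enumerate(data):
        buffer[i] = b
    return buffer

# An 8-byte input field followed by an adjacent "is_admin" flag byte.
memory = bytearray(8) + bytearray([0])
unchecked_copy(memory, b"AAAAAAAA\x01")  # 9 bytes: overwrites the flag
print(memory[8])                         # → 1 (privilege flag clobbered)
```

In C the same spill can overwrite return addresses rather than just a flag, which is what turns the bug into code execution; the length check is the moral of the sketch.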
Covert port scans: As with the stealth port scans described above, the scans are performed at a very low rate, such that it is hard for auditing tools to identify the connection requests or the malicious attempt to intrude into the computer system. Occurred on a hacked distributed guest control system for the US Embassy in Russia. For source, see references.
CGI attacks: As described above, cybercriminals who are good programmers can bypass the usual login capabilities through the Common Gateway Interface. Reported in a variant of Ubuntu, in which databases reported complaints about hacked credit card systems for payment services. For source, see references.
MAC VULNERABILITY ATTACKS
Equipment failure: Reported in Mac OS, in which Mac's databases reported complaints about iPhone malfunctioning. For source, see references.
Hardware tampering: Reported in Mac tablets; internal design procedures were not followed in manufacturing the Apple devices. For source, see references.
Malicious software: Discovered in the payroll system using a Mac system by programmers in the Department of Labor. For source, see references.
Phishing attacks: Occurred on a hacked distributed national data services system and reported to the company; done by rogue employees.
Resource exhaustion: Reported in the Apple iPhone 7, in which databases reported complaints about a hacked Department of Taxation Services. For source, see references.
MOBILE DEVICES VULNERABILITY ATTACKS
Data entry error: Reported in Windows 7 devices, in which Microsoft mobile databases reported complaints about illegal logins for the Department of Social Welfare. For source, see references.
Denial of service: Reported in Windows 8 phones; internal routines were overloaded in MIT's mobile lab. For source, see references.
Earthquake: Hurricanes and earthquakes in China and Japan destroyed tablets at home and in the office. For source, see references.
Espionage: Occurred on a hacked facial recognition system for the FBI and reported to Android databases. For source, see references.
Floods: Reported in parts of South America and Central Asia, flooding homes and destroying mobile devices.
CONTROL ANALYSIS
The objective of risk control analysis is to assess the status of management's operational, managerial, and technical capabilities in the security system, which may be either preventive or detective in nature. An assessment of this form is required in order to gauge the likelihood that the identified vulnerabilities could be exploited and the impact that such exploitation could have on the organization's security strength.
The impact of a threat event can be described in terms of the mission impact that occurs because of loss or compromise of data. Mission impact is degraded if data integrity, data availability, and data confidentiality are compromised.
RISK LEVEL DETERMINATION
The guidelines in NIST SP 800-30 stipulate risk ratings as a mathematical function of likelihood and impact. These values are calculated in the table drawn below, using a probability scale with decimal values.
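The table's calculation can be sketched as a small function: risk is the product of a likelihood value on a 0-1.0 decimal scale and an impact score, binned into Low, Moderate, or High. The cut-off values below are illustrative assumptions in the spirit of SP 800-30's example matrix, not numbers lifted from this report's table.

```python
def risk_level(likelihood, impact):
    """likelihood: 0.0-1.0 decimal scale; impact: 0-100 score.
    Returns (risk_score, rating) in the SP 800-30 style."""
    score = likelihood * impact      # risk as a function of both factors
    if score > 50:                   # illustrative cut-offs
        rating = "High"
    elif score > 10:
        rating = "Moderate"
    else:
        rating = "Low"
    return score, rating

# Example: a likely threat (0.5) against a high-impact asset (100).
print(risk_level(0.5, 100))  # → (50.0, 'Moderate')
```

Tabulating this function over each (likelihood, impact) pair reproduces the kind of risk matrix the text refers to.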
RISK MITIGATION.
When the risks have all been identified and the risk levels determined, recommendations or countermeasures are drawn up to mitigate or eliminate the risks. The goal is to reduce the risk to a level considered acceptable by management before system accreditation can be granted. The countermeasures draw their arguments from the following considerations:
The effectiveness of the recommended options, such as system compatibility.
The legislation and regulations in place.
The strength of organizational policy.
The overall operational impact.
The safety and reliability of the system in consideration.
A sample table to be filled in for the mitigation plan is outlined below.
ACCEPTABLE RISKS.
According to the observations listed in the discussion section of this research paper, 11 vulnerabilities were regarded as having low risk ratings, 15 as having moderate risk ratings, and 7 as having high risk ratings. These observations lead us to conclude that the overall level of risk for the organization is moderate.
Among the 33 vulnerabilities identified, 49% are considered unacceptable because serious harm could result, with the consequence of affecting the operations of the organization. Therefore, immediate mandatory countermeasures need to be implemented to mitigate the risk brought about by these threats, and resources should be made available to reduce the risk to an acceptable level.
Of the identified vulnerabilities, 51% are considered acceptable to the system because only minor problems may result from these risks; recommended countermeasures have also been provided to reduce or eliminate the risks.
PURPOSE
The purpose of this security assessment report is to evaluate the adequacy of the Amazon Corporation's security. This security assessment report provides a structured but qualitative assessment of the operational environment for the Amazon Corporation. It addresses issues of vulnerabilities, threat analysis, firewalls and encryption, risk remediation, and safeguards applied in the Amazon Corporation. The report and the assessment recommend the use of cost-effective safeguards in order to mitigate threats as well as the associated exploitable vulnerabilities in the Amazon Corporation.
ORGANISATION
The Amazon Corporation's system environment is run as a distributed client and server environment consisting of a Microsoft Structured Query Language (SQL) database built with powerful programming code. The Amazon Corporation environment contains SQL data files, Python application code, and executable Java and JavaScript. The SQL production data files, documented as consisting of SQL stored procedures and SQL tables, reside on a cloud storage area network attached to an HP server running the Windows XP operating system and MS SQL 2000. The Python application code resides on a different IBM server running Kali Linux. Both servers are housed in the main office Building 233 Data Center at the CDC Clifton Road campus in Atlanta, GA.
The Amazon Corporation's executables reside on a file server running Windows 2000 and Kali Linux, or occasionally a local workstation is installed, depending upon the loads and job requirements.
SCOPE
The scope of this project includes the carrying out of the nine steps outlined in the project, namely:
· Conducting independent research on the organization's structure and identifying the boundaries that separate the inner network from the outside network in the Amazon Corporation.
· Describing threats to the Amazon Corporation, whose background is detailed above.
· A review of the information on TCP/IP, especially identifying security in communications and data transport in the Amazon Corporation.
· An analysis of the strength of passwords used by employees in the Amazon Corporation.
· Determining the role of firewalls and encryption, as well as the auditing that affects integrity, confidentiality, and availability in the Amazon Corporation.
· Determining the various threats known to Amazon's network architecture and IT assets.
· Providing an analysis of the scans and the recommendations for remediation, if needed, in Amazon, as well as designing and formulating the steps in an incident response that should be in place in the Amazon Corporation.
METHODOLOGY
In order to successfully carry out the security assessment, I used the resources provided in the CLAB 699 Cyber Computing Lab workspace, which is a collection of automated security scan tools and network analyzers. The use of an automated methodology proved efficient for the project, since dialog boxes communicate with the experimenter and navigation tools aid in browsing the various categories of services that can be automated.
These tools run various scans designed to scan and report on the appropriateness of the following security concerns, especially in Amazon: security of software and hardware components, security of firmware components, security of governance policies, and the effectiveness of the implementation of security controls. The automated monitoring and assessment of the computer infrastructure and its components, its policies, and its processes should also account for changes made to the security system.
In using the automated tools in the CLAB 699 Cyber Computing Lab, the following security vulnerabilities were looked for using each of the MBSA, OpenVAS, Nmap, and NESSUS tools in the CLAB 699 Cyber Computing Lab.
SECURITY AREAS LEADING TO DISCOVERY OF VULNERABILITIES BEING ADDRESSED WHILE SCANNING WINDOWS USING MBSA AND LINUX USING OpenVAS.
Whether Amazon has ever been compromised internally or externally, as in CLAB 699 Cyber Computing Lab screenshot 3.
A listing of all IP address blocks and domain names registered to Amazon, as in CLAB 699 Cyber Computing Lab screenshot 3.
Whether Amazon uses a local firewall, a local intrusion detection system, and a local intrusion prevention system, as in CLAB 699 Cyber Computing Lab screenshots 2 and 3.
Whether Amazon uses a host-based or network-based IDS, or a combination of both, as in CLAB 699 Cyber Computing Lab screenshot 2.
Whether Amazon uses DMZ networks, and whether it uses remote access services and of what type, as in CLAB 699 Cyber Computing Lab screenshot 3.
Whether Amazon uses site-to-site virtual private network tunnels and modems for connections.
What antivirus application is used, and whether it is standalone or a managed client-server architecture, as in CLAB 699 Cyber Computing Lab screenshot 2.
SECURITY AREAS LEADING TO DISCOVERY OF
VULNERABILITIES BEING ADDRESSED WHILE
SCANNING NETWORK USING WIRESHARK.
What services does Amazon expose to the internet, such as web, database, FTP, or SSH?
What type of authentication is used for web services, e.g., pubcookies, Windows integrated, or HTTP access?
DATA
The following are the results generated using Wireshark, as in the CLAB 699 Cyber Computing Lab. The above vulnerabilities are investigated, and any suspicious activity in the network is flagged accordingly, as highlighted in screenshot 3.
The vulnerabilities being investigated in this project include the following: remotely exploitable vulnerabilities, patch-level vulnerabilities, unnecessary services, weakness of encryption, and weakness of authentication.
The screenshots obtained in screenshot 3 enable the analyst to answer whether patching is up to date, whether unnecessary services are running, and whether antivirus and anti-malware protection are up to date.
A review of the Amazon data clearly indicates that the vulnerabilities include, but are not limited to, the following: remotely exploitable vulnerabilities; patch-level vulnerabilities; unnecessary services; weakness of encryption; weakness of authentication; access-rights vulnerabilities; and vulnerabilities arising from failure to use best practices.
Consequently, the following kinds of threats were identified as likely to have been carried out against Amazon Corporation's network infrastructure:
IP address spoofing/cache poisoning attacks, related to patch-level vulnerabilities
Denial-of-service (DoS) attacks, related to remotely exploitable vulnerabilities
Packet analysis/sniffing, related to vulnerabilities from failure to use best practices
Session hijacking attacks, related to weakness of authentication
Distributed denial-of-service (DDoS) attacks, related to weakness of encryption
In using an RDBMS, Amazon Corporation has had instances of:
· SQL injection affecting database transaction mechanisms in the RDBMS.
· Cross-site scripting affecting database transaction mechanisms in the RDBMS.
· Cross-site request forgery affecting access control mechanisms in the RDBMS.
· Improper data sanitization affecting access control mechanisms in the RDBMS.
· Buffer overflow attacks affecting firewall log file mechanisms.
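The standard defense against the SQL injection and data sanitization issues listed above is parameterized queries, which keep attacker input out of the SQL text. A minimal sketch using an in-memory SQLite database; the table and rows are invented for illustration and are not Amazon's schema:

```python
import sqlite3

# Toy database: table and rows are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    # The ? placeholder makes the driver treat the value as data, never as SQL;
    # building the query by string concatenation would permit injection.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))         # → [('admin',)]
print(find_user("x' OR '1'='1"))  # classic injection payload matches nothing → []
```

With concatenation the second call would have returned every row; with the placeholder the payload is just an unmatched literal string.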
RESULTS
When the results of Amazon's security assessment report were provided to the Remediation Committee, the committee's initial activity was to review and prioritize each weakness identified for Amazon Corporation. The committee then identified and disclosed the vulnerabilities that apply to Amazon Corporation as it is configured and hosted on the Cisco infrastructure. The justification for disclosing these findings is documented in the system's remediation spreadsheet.
With the support of Amazon's system stakeholders, database and system administrators, and application support staff, a Corrective Action Plan (CAP) was developed to identify findings that could be corrected immediately and to schedule the remaining findings in an appropriate Plan of Action and Milestones (POA&M) designed for Amazon Corporation. There are three stages of remediation.
Stage 1: Comprehensive analysis of the reported weaknesses
Stage 2: Generation of a comprehensive Corrective Action Plan to address each weakness
Stage 3: Disclosure of findings and creation of a POA&M for remaining items of risk
High-risk findings were identified for immediate correction. Immediate correction of moderate-risk findings is preferred by stakeholders, but near-term scheduling was accepted where funding or complex implementation issues arose. Because of the minimal impact of low-risk vulnerabilities, the remediation team prioritized each of these in coordination with the appropriate stakeholders and scheduled remediation activities as part of routine system maintenance. For those findings in this Amazon report that required a more complex corrective action, the stakeholders and system administrators collaborated in the development of a CAP and POA&M.
The Amazon system's Risk Assessment Report (RAR) and System Security Plan (SSP) were updated to reflect the weaknesses that were remediated prior to the presentation of the final Certification and Accreditation (C&A) package to the accrediting authority. The remediation team continues to track and report the status of correction activities for the POA&M items on a periodic basis after presentation of the final C&A package to the Authorizing Official (AO).
FINDINGS
The following is the action plan adopted by Amazon Corporation, because nonconforming controls were identified during the project. As technology continues to change, the specific nonconforming controls in Amazon require reevaluation, as detailed in the conclusions section of this report.
False positives must be identified within the SAR, along with the supporting clean scan report; these do not have to be identified in the POA&M.
Each item on the proposed POA&M should have a unique identifier matched with a finding in the SAR.
All high- and critical-risk findings must be remediated prior to receiving a provisional authorization.
Moderate findings shall have a mitigation date within 90 days of the provisional authorization date.
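The scheduling rules above can be sketched as a small helper that assigns each finding a mitigation due date from its severity. The severity labels and sample date are illustrative assumptions:

```python
from datetime import date, timedelta

def mitigation_due(severity, authorization_date):
    """Apply the POA&M rule: high/critical before authorization, moderate within
    90 days of it, low deferred to routine maintenance (returned as None)."""
    if severity in ("critical", "high"):
        return authorization_date        # must be remediated prior to authorization
    if severity == "moderate":
        return authorization_date + timedelta(days=90)
    return None                          # low: scheduled with routine maintenance

auth = date(2015, 1, 15)                 # hypothetical provisional authorization date
print(mitigation_due("moderate", auth))  # → 2015-04-15
```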
CONCLUSIONS AND RECOMMENDATIONS
This Risk Assessment Report for the organization identifies operational risks, especially in those domains that fail to meet the minimum requirements and for which appropriate countermeasures have yet to be implemented. The RAR also determines the probability of occurrence and recommends countermeasures aimed at mitigating the identified risks, in an endeavor to provide an appropriate level of protection and to satisfy all the minimum requirements imposed by the organization's policy document.
The system security policy requirements are now satisfied, with the exception of those specific areas identified in this report. The countermeasures recommended in this report add to the security controls needed to meet policy and to effectively manage the security risk to the organization and its operating environment. Finally, the Certification Official and the Authorizing Official must determine whether the totality of the protection mechanisms approximates a sufficient level of security and is adequate for the protection of this system, its resources, and its information. The risk assessment results supply critical information and should be carefully reviewed by the AO prior to making a final accreditation decision.
References
National Institute of Standards and Technology (NIST). (2014). Assessing security and privacy controls in federal information systems and organizations (NIST Special Publication 800-53A, Revision 4). Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53Ar4.pdf
National Institute of Standards and Technology (NIST). (2010). Guide for applying the risk management framework to federal information systems (NIST Special Publication 800-37, Revision 1). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf
Vetting the Security of Mobile Applications by Steve Quirolgico, Jeffrey Voas, Tom Karygiannis, Christoph Michael, and Karen Scarfone comprises public domain material from the National Institute of Standards and Technology, U.S. Department of Commerce.
Vetting the Security of Mobile Applications
Steve Quirolgico, Jeffrey Voas, Tom Karygiannis
Computer Security Division
Information Technology Laboratory
Christoph Michael
Leidos
Reston, VA
Karen Scarfone
Scarfone Cybersecurity
Clifton, VA
This publication is available free of charge from: http://dx.doi.org/10.6028/NIST.SP.800-163
COMPUTER SECURITY
January 2015
U.S. Department of Commerce
Penny Pritzker, Secretary
National Institute of Standards and Technology
Willie May, Acting Under Secretary of Commerce for Standards and Technology and Acting Director
http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-163.pdf
Authority
This publication has been developed by NIST in accordance
with its statutory responsibilities under the
Federal Information Security Management Act of 2002
(FISMA), 44 U.S.C. § 3541 et seq., Public Law
107-347. NIST is responsible for developing information
security standards and guidelines, including
minimum requirements for federal information systems, but
such standards and guidelines shall not apply
to national security systems without the express approval of
appropriate federal officials exercising policy
authority over such systems. This guideline is consistent
with the requirements of the Office of
Management and Budget (OMB) Circular A-130, Section 8b(3),
Securing Agency Information Systems, as
analyzed in Circular A-130, Appendix IV: Analysis of Key
Sections. Supplemental information is
provided in Circular A-130, Appendix III, Security of Federal
Automated Information Resources.
Nothing in this publication should be taken to contradict the
standards and guidelines made mandatory
and binding on federal agencies by the Secretary of Commerce
under statutory authority. Nor should
these guidelines be interpreted as altering or superseding the
existing authorities of the Secretary of
Commerce, Director of the OMB, or any other federal official.
This publication may be used by
nongovernmental organizations on a voluntary basis and is not
subject to copyright in the United States.
Attribution would, however, be appreciated by NIST.
National Institute of Standards and Technology Special
Publication 800-163
Natl. Inst. Stand. Technol. Spec. Publ. 800-163, 44 pages
(January 2015)
CODEN: NSPUE2
This publication is available free of charge from:
http://dx.doi.org/10.6028/NIST.SP.800-163
Certain commercial entities, equipment, or materials may be
identified in this document in order to describe an
experimental procedure or concept adequately. Such
identification is not intended to imply recommendation or
endorsement by NIST, nor is it intended to imply that the
entities, materials, or equipment are necessarily the best
available for the purpose.
There may be references in this publication to other
publications currently under development by NIST in
accordance with its assigned statutory responsibilities. The
information in this publication, including concepts and
methodologies, may be used by federal agencies even before the
completion of such companion publications. Thus,
until each publication is completed, current requirements,
guidelines, and procedures, where they exist, remain
operative. For planning and transition purposes, federal
agencies may wish to closely follow the development of
these new publications by NIST.
Organizations are encouraged to review all draft publications
during public comment periods and provide feedback
to NIST. All NIST Computer Security Division publications,
other than the ones noted above, are available at
http://csrc.nist.gov/publications.
Comments on this publication may be submitted to:
National Institute of Standards and Technology
Attn: Computer Security Division, Information Technology
Laboratory
100 Bureau Drive (Mail Stop 8930) Gaithersburg, MD 20899-
8930
[email protected]
Reports on Computer Systems Technology
The Information Technology Laboratory (ITL) at the National
Institute of Standards and Technology
(NIST) promotes the U.S. economy and public welfare by
providing technical leadership for the Nation’s
measurement and standards infrastructure. ITL develops tests,
test methods, reference data, proof of
concept implementations, and technical analyses to advance the
development and productive use of
information technology. ITL’s responsibilities include the
development of management, administrative,
technical, and physical standards and guidelines for the cost-
effective security and privacy of other than
national security-related information in federal information
systems. The Special Publication 800-series
reports on ITL’s research, guidelines, and outreach efforts in
information system security, and its
collaborative activities with industry, government, and
academic organizations.
Abstract
The purpose of this document is to help organizations (1)
understand the process for vetting the security
of mobile applications, (2) plan for the implementation of an
app vetting process, (3) develop app security
requirements, (4) understand the types of app vulnerabilities
and the testing methods used to detect those
vulnerabilities, and (5) determine if an app is acceptable for
deployment on the organization's mobile
devices.
Keywords
app vetting; apps; malware; mobile devices; security
requirements; security vetting; smartphones;
software assurance; software security; software testing;
software vetting
Acknowledgments
Special thanks to the National Security Agency’s Center for
Assured Software for providing Appendices
B and C which describe Android and iOS app vulnerabilities.
Also, thanks to all the organizations and
individuals who made contributions to the publication including
Robert Martin of The MITRE
Corporation, Paul Black and Irena Bojanova of NIST, the
Department of Homeland Security (DHS), and
the Department of Justice (DOJ). Finally, special thanks to the
Defense Advanced Research Projects
Agency (DARPA) Transformative Applications (TransApps)
program for funding NIST research in
mobile app security vetting.
Trademarks
All registered trademarks belong to their respective
organizations.
Table of Contents
1 Introduction
  1.1 Traditional vs. Mobile Application Security Issues
  1.2 App Vetting Basics
  1.3 Purpose and Scope
  1.4 Audience
  1.5 Document Structure
2 App Vetting Process
  2.1 App Testing
  2.2 App Approval/Rejection
  2.3 Planning
    2.3.1 Developing Security Requirements
    2.3.2 Understanding Vetting Limitations
    2.3.3 Budget and Staffing
3 App Testing
  3.1 General Requirements
    3.1.1 Enabling Authorized Functionality
    3.1.2 Preventing Unauthorized Functionality
    3.1.3 Limiting Permissions
    3.1.4 Protecting Sensitive Data
    3.1.5 Securing App Code Dependencies
    3.1.6 Testing App Updates
  3.2 Testing Approaches
    3.2.1 Correctness Testing
    3.2.2 Source Code Versus Binary Code
    3.2.3 Static Versus Dynamic Analysis
    3.2.4 Automated Tools
  3.3 Sharing Results
4 App Approval/Rejection
  4.1 Report and Risk Auditing
  4.2 Organization-Specific Vetting Criteria
  4.3 Final Approval/Rejection
Appendix A—Recommendations
  A.1 Planning
  A.2 App Testing
  A.3 App Approval/Rejection
Appendix B—Android App Vulnerability Types
Appendix C—iOS App Vulnerability Types
Appendix D—Glossary
Appendix E—Acronyms and Abbreviations
Appendix F—References
List of Figures and Tables
Figure 1. An app vetting process and its related actors
Figure 2. Organizational app security requirements
Table 1. Android Vulnerabilities, A Level
Table 2. Android Vulnerabilities by Level
Table 3. iOS Vulnerability Descriptions, A Level
Table 4. iOS Vulnerabilities by Level
Executive Summary
Recently, organizations have begun to deploy mobile
applications (or apps) to facilitate their business
processes. Such apps have increased productivity by providing
an unprecedented level of connectivity
between employees, vendors, and customers, real-time
information sharing, unrestricted mobility, and
improved functionality. Despite the benefits of mobile apps,
however, the use of apps can potentially lead
to serious security issues. This is so because, like traditional
enterprise applications, apps may contain
software vulnerabilities that are susceptible to attack. Such
vulnerabilities may be exploited by an attacker
to gain unauthorized access to an organization’s information
technology resources or the user’s personal
data.
To help mitigate the risks associated with app vulnerabilities,
organizations should develop security
requirements that specify, for example, how data used by an app
should be secured, the environment in
which an app will be deployed, and the acceptable level of risk
for an app. To help ensure that an app
conforms to such requirements, a process for evaluating the
security of apps should be performed. We
refer to this process as an app vetting process. An app vetting
process is a sequence of activities that aims
to determine if an app conforms to an organization’s security
requirements. An app vetting process
comprises two main activities: app testing and app
approval/rejection. The app testing activity involves
the testing of an app for software vulnerabilities by services,
tools, and humans to derive vulnerability
reports and risk assessments. The app approval/rejection
activity involves the evaluation of these reports
and risk assessments, along with additional criteria, to
determine the app's conformance with
organizational security requirements and ultimately, the
approval or rejection of the app for deployment
on the organization's mobile devices.
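The two-step process described above (testing produces per-tool risk assessments; approval/rejection compares them against organizational requirements) can be sketched as a toy decision function. The three-level risk scale, tool names, and threshold are illustrative assumptions, not part of SP 800-163:

```python
# Hypothetical risk scale and reports, standing in for the analyzers' output.
RISK_LEVELS = {"low": 1, "moderate": 2, "high": 3}

def vet(app_reports, acceptable="moderate"):
    """Reject the app if any analyzer reports risk above the acceptable level;
    return the decision together with the worst reported risk."""
    worst = max(app_reports.values(), key=RISK_LEVELS.get)
    decision = "approve" if RISK_LEVELS[worst] <= RISK_LEVELS[acceptable] else "reject"
    return decision, worst

reports = {"static-analyzer": "low", "dynamic-analyzer": "high"}
print(vet(reports))  # → ('reject', 'high')
```

In practice the approval step also weighs organization-specific criteria (deployment environment, data sensitivity), so a real vetting decision is rarely this mechanical.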
Before using an app vetting process, an organization must first
plan for its implementation. To facilitate
the planning of an app vetting process implementation, this
document:
· Describes the context-sensitive requirements that make up an organization's app security requirements.
· Provides a questionnaire for identifying the security needs and expectations of an organization that are necessary to develop the organization's app security requirements.
With respect to the app testing activity of an app vetting process, this document describes:
· … files during testing.
· General requirements, including enabling authorized functionality, preventing unauthorized functionality, protecting sensitive data, and securing app code dependencies.
· Testing approaches, including source code vs. binary code, static vs. dynamic analysis, and automated tools for testing apps.
· The sharing of test results.
With respect to the app approval/rejection activity of an app vetting process, this document describes:
· Organization-specific vetting criteria that may be used to help determine an app's conformance to context-sensitive security requirements.
NIST SP 800-163 Vetting the Security of Mobile Applications
1 Introduction
When deploying a new technology, an organization should be
aware of the potential security impact it
may have on the organization’s IT resources, data, and users.
While new technologies may offer the
promise of productivity gains and new capabilities, they may
also present new risks. Thus, it is important
for an organization’s IT professionals and users to be fully
aware of these risks and either develop plans
to mitigate them or accept their consequences.
Recently, there has been a paradigm shift where organizations
have begun to deploy new mobile
technologies to facilitate their business processes. Such
technologies have increased productivity by
providing (1) an unprecedented level of connectivity between
employees, vendors, and customers; (2)
real-time information sharing; (3) unrestricted mobility; and (4)
improved functionality. These mobile
technologies comprise mobile devices (e.g., smartphones and
tablets) and related mobile applications (or
apps) that provide mission-specific capabilities needed by users
to perform their duties within the
organization (e.g., sales, distribution, and marketing). Despite
the benefits of mobile apps, however, the
use of apps can potentially lead to serious security risks. This is
so because, like traditional enterprise
applications, apps may contain software vulnerabilities that are
susceptible to attack. Such vulnerabilities
may be exploited by an attacker to steal information or control a
user's device.
To help mitigate the risks associated with software
vulnerabilities, organizations should employ software
assurance processes. Software assurance refers to “the level of
confidence that software is free from
vulnerabilities, either intentionally designed into the software
or accidentally inserted at any time during
its life cycle, and that the software functions in the intended
manner” [1]. The software assurance process
includes “the planned and systematic set of activities that
ensures that software processes and products
conform to requirements, standards, and procedures” [2]. A
number of government and industry legacy
software assurance standards exist that are primarily directed at
the process for developing applications
that require a high level of assurance (e.g., space flight,
automotive systems, and critical defense
systems).1 Although considerable progress has been made in the
past decades in the area of software
assurance, and research and development efforts have resulted
in a growing market of software assurance
tools and services, the state of practice for many today still
includes manual activities that are time-
consuming, costly, and difficult to quantify and make
repeatable. The advent of mobile computing adds
new challenges because it does not necessarily support
traditional software assurance techniques.
1.1 Traditional vs. Mobile Application Security Issues
The economic model of the rapidly evolving app marketplace
challenges the traditional software
development process. App developers are attracted by the
opportunities to reach a market of millions of
users very quickly. However, such developers may have little
experience building quality software that is
secure and do not have the budgetary resources or motivation to
conduct extensive testing. Rather than
performing comprehensive software tests on their code
before making it available to the public,
developers often release apps that contain functionality flaws
and/or security-relevant weaknesses. That
can leave an app, the user’s device, and the user’s network
vulnerable to exploitation by attackers.
Developers and users of these apps often tolerate buggy,
unreliable, and insecure code in exchange for the
low cost. In addition, app developers typically update their apps
much more frequently than traditional
applications.
1 Examples of these software assurance standards include DO-
178B, Software Considerations in Airborne Systems and
Equipment Certification [3], IEC 61508 Functional Safety of
Electrical/Electronic/Programmable Electronic Safety-related
System [4], and ISO 26262 Road vehicles -- Functional safety
[5].
Organizations that once spent considerable resources to
develop in-house applications are taking
advantage of inexpensive third-party apps and web services to
improve their organization’s productivity.
Increasingly, business processes are conducted on mobile
devices. This contrasts with the traditional
information infrastructure where the average employee uses
only a handful of applications and web-based
enterprise databases. Mobile devices provide access to
potentially millions of apps for a user to choose
from. This trend challenges the traditional mechanisms of
enterprise IT security software where software
exists within a tightly controlled environment and is uniform
throughout the organization.
Another major difference between apps and enterprise
applications is that unlike a desktop computing
system, far more precise and continuous device location
information, physical sensor data, personal health
metrics, and pictures and audio about a user can be exposed
through apps. Apps can sense and store
information including user personally identifiable information
(PII). Although many apps are advertised
as being free to consumers, the hidden cost of these apps may
be selling the user’s profile to marketing
companies or online advertising agencies.
There are also differences between the network capabilities of
traditional computers that enterprise
applications run on and mobile devices that apps run on. Unlike
traditional computers that connect to the
network via Ethernet, mobile devices have access to a
wide variety of network services including
Wireless Fidelity (Wi-Fi), 2G/3G, and 4G/Long Term Evolution
(LTE). This is in addition to short-range
data connectivity provided by services such as Bluetooth and
Near Field Communications (NFC). All of
these mechanisms of data transmission are typically available to
apps that run on a mobile device and can
be vectors for remote exploits. Further, mobile devices are not
physically protected to the same extent as
desktop or laptop computers and can therefore allow an attacker
to more easily (physically) acquire a lost
or stolen device.
1.2 App Vetting Basics
To provide software assurance for apps, organizations should
develop security requirements that specify,
for example, how data used by an app should be secured, the
environment in which an app will be
deployed, and the acceptable level of risk for an app (see [6] for
more information). To help ensure that
an app conforms to such requirements, a process for evaluating
the security of apps should be performed.
We refer to this process as an app vetting process. An app
vetting process is a sequence of activities that
aims to determine if an app conforms to the organization’s
security requirements.2 This process is
performed on an app after the app has been developed and
released for distribution but prior to its
deployment on an organization's mobile device. Thus, an app
vetting process is distinguished from
software assurance processes that may occur during the software
development life cycle of an app. Note
that an app vetting process typically involves analysis of an
app’s compiled, binary representation but can
also involve analysis of the app’s source code if it is available.
The software distribution model that has arisen with mobile
computing presents a number of software
assurance challenges, but it also creates opportunities. An app
vetting process acknowledges the concept
that someone other than the software vendor is entitled to
evaluate the software’s behavior, allowing
organizations to evaluate software in the context of their own
security policies, planned use, and risk
tolerance. However, distancing a developer from an
organization’s app vetting process can also make
those activities less effective with respect to improving the
secure behavior of the app.
2 The app vetting process can also be used to assess other app
issues including reliability, performance, and accessibility but
is
primarily intended to assess security-related issues.
App stores may perform app vetting processes to verify
compliance with their own requirements.
However, because each app store has its own unique, and not
always transparent, requirements and
vetting processes, it is necessary to consult current agreements
and documentation for a particular app
store to assess its practices. Organizations should not assume
that an app has been fully vetted and
conforms to their security requirements simply because it is
available through an official app store. Third-
party assessments that carry a moniker of “approved by” or
“certified by” without providing details of
which tests are performed, what the findings were, or how apps
are scored or rated, do not provide a
reliable indication of software assurance. These assessments
are also unlikely to take organization-
specific requirements and recommendations into account, such
as federal-specific cryptography
requirements.
Although a “one size fits all” approach to app vetting is not
plausible, the basic findings from one app
vetting effort may be reusable by others. Leveraging another
organization’s findings for an app should be
contemplated to avoid duplicating work and wasting scarce app
vetting resources. With appropriate
standards for scoping, managing, licensing, and recording the
findings from software assurance activities
in consistent ways, app vetting results can be looked at
collectively, and common problems and solutions
may be applicable across the industry or with similar
organizations.
An app vetting process should be included as part of the
organization's overall security strategy. If the
organization has a formal process for establishing and
documenting security requirements, an app vetting
process should be able to import those requirements created by
the process. Bug and incident reports
created by the organization could also be imported by an app
vetting process, as could public advisories
concerning vulnerabilities in commercial and open source apps
and libraries, and conversely, results from
app vetting processes may get stored in the organization’s bug
tracking system.
1.3 Purpose and Scope
The purpose of this document is to help organizations (1)
understand the app vetting process, (2) plan for
the implementation of an app vetting process, (3) develop app
security requirements, (4) understand the
types of app vulnerabilities and the testing methods used to
detect those vulnerabilities, and (5) determine
if an app is acceptable for deployment on the organization's
mobile devices. Ultimately, the acceptance of
an app depends on the organization’s security requirements
which, in turn, depend on the environment in
which the app is deployed, the context in which it will be used,
and the underlying mobile technologies
used to run the app. This document highlights those elements
that are particularly important to be
considered before apps are approved as “fit-for-use.”
Note that this document does not address the security of the
underlying mobile platform and operating
system, which are addressed in other publications [6, 7, 8].
While these are important characteristics for
organizations to understand and consider in selecting mobile
devices, this document is focused on how to
vet apps after the choice of platform has been made. Similarly,
discussion surrounding the security of web
services used to support back-end processing for apps is out of
scope for this document.
1.4 Audience
This document is intended for organizations that plan to
implement an app vetting process or leverage
existing app vetting results from other organizations. It is also
intended for developers that are interested
in understanding the types of software vulnerabilities that may
arise in their apps during the app’s
software development life cycle.
NIST SP 800-163 Vetting the Security of Mobile Applications
1.5 Document Structure
The remainder of this document is organized into the following
sections and appendices:
• Section 2 provides an overview of the app vetting process, including recommendations for planning the implementation of an app vetting process.
• Section 3 describes the app testing activity of an app vetting process. Organizations interested in leveraging only existing security reports and risk assessments from other organizations can skip this section.
• Section 4 describes the app approval/rejection activity of an app vetting process.
• Appendix A provides recommendations pertaining to the app testing and app approval/rejection activities of an app vetting process as well as the planning of an app vetting process implementation.
• Appendices B and C describe platform-specific vulnerabilities for apps running on the Android and iOS operating systems, respectively.
• Appendix D defines selected terms used in this document.
• Appendix E lists selected acronyms and abbreviations used in this document.
2 App Vetting Process
An app vetting process comprises a sequence of two main
activities: app testing and app
approval/rejection. In this section, we provide an overview of
these two activities as well as provide
recommendations for planning the implementation of an app
vetting process.
2.1 App Testing
An app vetting process begins when an app is submitted by an
administrator to one or more analyzers for
testing. An administrator is a member of the organization who is
responsible for deploying, maintaining,
and securing the organization's mobile devices as well as
ensuring that deployed devices and their
installed apps conform to the organization's security
requirements. Apps that are submitted by an
administrator for testing will typically be acquired from an app
store or an app developer, each of which
may be internal or external to the organization. Note that the
activities surrounding an administrator's
acquisition of an app are not part of an app vetting process.
An analyzer is a service, tool, or human that tests an app for
specific software vulnerabilities and may be
internal or external to the organization. When an app is received
by an analyzer, the analyzer may
perform some preprocessing of the app prior to testing the app.
Preprocessing of an app may be used to
determine the app's suitability for testing and may involve
ensuring that the app decompiles correctly,
extracting meta-data from the app (e.g., app name and version
number) and storing the app in a local
database.
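The preprocessing steps just described (suitability check, metadata extraction, local storage) can be sketched as follows. This is illustrative only: the "name-version" filename convention and the SQLite schema are assumptions made for the sketch, not part of any vetting standard.

```python
# Sketch of an analyzer's preprocessing step: confirm the submitted
# app file is readable, extract basic metadata, and record the
# submission in a local database.
import sqlite3
import zipfile
from pathlib import Path

def preprocess(app_path: str, db: sqlite3.Connection) -> dict:
    path = Path(app_path)
    # Suitability check: an Android app package is a ZIP archive, so a
    # file that is not a valid ZIP cannot be decompiled for testing.
    if not zipfile.is_zipfile(path):
        raise ValueError(f"{path.name} is not a valid app package")
    # Metadata extraction (hypothetical "name-version.apk" convention).
    name, _, version = path.stem.rpartition("-")
    meta = {"name": name or path.stem, "version": version, "file": path.name}
    # Store the submission locally for later reference by the analyzer.
    db.execute(
        "CREATE TABLE IF NOT EXISTS apps (name TEXT, version TEXT, file TEXT)"
    )
    db.execute("INSERT INTO apps VALUES (:name, :version, :file)", meta)
    db.commit()
    return meta
```

A real analyzer would instead parse the package manifest for name and version; the point here is only the shape of the preprocessing pipeline.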
After an app has been received and preprocessed by an analyzer,
the analyzer then tests the app for the
presence of software vulnerabilities. Such testing may include a
wide variety of tests including static and
dynamic analyses and may be performed in an automated
or manual fashion. Note that the tests
performed by an analyzer are not organization-specific but
instead are aimed at identifying software
vulnerabilities that may be common across different apps. After
testing an app, an analyzer generates a
report that identifies detected software vulnerabilities. In
addition, the analyzer generates a risk
assessment that estimates the likelihood that a detected
vulnerability will be exploited and the impact that
the detected vulnerability may have on the app or its related
device or network. Risk assessments are
typically represented as ordinal values indicating the severity of
the risk (e.g., low-, moderate-, and high-
risk).
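A minimal sketch of such an ordinal risk assessment, assuming a simple worst-of-two scoring rule (the actual scoring scheme is left to each analyzer and is not prescribed by this document):

```python
# Each detected vulnerability is scored by estimated likelihood of
# exploitation and potential impact, then mapped to an ordinal
# severity level. The worst-of-two rule is an illustrative assumption.
LEVELS = ("low", "moderate", "high")

def assess(likelihood: str, impact: str) -> str:
    l, i = LEVELS.index(likelihood), LEVELS.index(impact)
    # Take the worse of the two ordinal scores as the overall severity.
    return LEVELS[max(l, i)]
```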
2.2 App Approval/Rejection
After the report and risk assessment are generated by an
analyzer, they are made available to one or more
auditors of the organization. An auditor is a member of the
organization who inspects reports and risk
assessments from one or more analyzers to ensure that an app
meets the security requirements of the
organization. The auditor also evaluates additional criteria to
determine if the app violates any
organization-specific security requirements that could not
be ascertained by the analyzers. After
evaluating all reports, risk assessments, and additional criteria,
the auditor then collates this information
into a single report and risk assessment and derives a
recommendation for approving or rejecting the app
based on the overall security posture of the app. This
recommendation is then made available to an
approver. An approver is a high-level member of the
organization responsible for determining which apps
will be deployed on the organization's mobile devices. An
approver uses the recommendations provided
by one or more auditors that describe the security posture of the
app as well as other non-security-related
criteria to determine the organization's official approval or
rejection of an app. If an app is approved by
the approver, the administrator is then permitted to deploy the
app on the organization's mobile devices.
If, however, the app is rejected, the organization will follow
specified procedures for identifying a
suitable alternative app or rectifying issues with the problematic
app. Figure 1 shows an app vetting
process and its related actors.
Figure 1. An app vetting process and its related actors.
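The auditor's collation step can be illustrated with a short sketch. The policy shown (compare the most severe analyzer assessment against an organizational risk tolerance) is an assumption for illustration; real organizations will apply their own decision criteria.

```python
# Collate ordinal risk assessments from several analyzers into an
# overall posture and derive an approve/reject recommendation.
ORDER = {"low": 0, "moderate": 1, "high": 2}

def recommend(assessments: list[str], tolerance: str = "moderate") -> str:
    # Overall posture is the most severe assessment received.
    overall = max(assessments, key=ORDER.__getitem__)
    return "approve" if ORDER[overall] <= ORDER[tolerance] else "reject"
```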
Note that an app vetting process can be performed manually or
through systems that provide semi-
automated management of the app testing and app
approval/rejection activities [9]. Further note that
although app vetting processes may vary among organizations,
organizations should strive to implement a
process that is repeatable, efficient, consistent, and that limits
errors (e.g., false positive and false negative
results).
2.3 Planning
Before an organization can implement an app vetting process, it
is necessary for the organization to first
(1) develop app security requirements, (2) understand the
limitations of app vetting, and (3) procure a
budget and staff for supporting the app vetting
process. Recommendations for planning the
implementation of an app vetting process are described in
Appendix A.1.
2.3.1 Developing Security Requirements
App security requirements state an organization’s expectations
for app security and drive the evaluation
process. When possible, tailoring app security assessments to
the organization’s unique mission
requirements, policies, risk tolerance, and available security
countermeasures is more cost-effective in
time and resources. Such assessments could minimize the risk of
attackers exploiting behaviors not taken
into account during vetting, since foreseeing all future
insecure behaviors during deployment is
unrealistic. Unfortunately, it is not always possible to get
security requirements from app end users; the
user base may be too broad, may not be able to state their
security requirements concretely enough for
testing, or may not have articulated any security requirements
or expectations of secure behavior.
An organization's app security requirements comprise two types
of requirements: general and context-
sensitive. A general requirement is an app security requirement
that specifies a software characteristic or
behavior that an app should exhibit in order to be considered
secure. For an app, the satisfaction or
violation of a general requirement is determined by analyzers
that test the app for software vulnerabilities
during the app testing activity of an app vetting process. If an
analyzer detects a software vulnerability in
an app, the app is considered to be in violation of a
general requirement. Examples of general requirements include "Apps must prevent unauthorized functionality" and "Apps must protect sensitive data."
We describe general requirements in Section 3.1.
A context-sensitive requirement is an app security requirement
that specifies how apps should be used by
the organization to ensure the organization's security posture.
For an app, the satisfaction or violation of a
context-sensitive requirement is not based on the presence or
absence of a software vulnerability and thus
cannot be determined by analyzers, but instead must be
determined by an auditor who uses organization-
specific vetting criteria for the app during the app
approval/rejection activity of an app vetting process.
Such criteria may include the app's intended set of users or
intended deployment environment. If an
auditor determines that the organization-specific vetting criteria
for an app conflicts with a context-
sensitive requirement, the app is considered to be in violation of
the context-sensitive requirement.
Examples of context-sensitive requirements include "Apps that access a network must not be used in a sensitive compartmented information facility (SCIF)" and "Apps that record audio or video must only be used by classified personnel." We describe organization-
specific vetting criteria in Section 4.2. The
relationship between the organization's app security
requirements and general and context-sensitive
requirements is shown in Figure 2.
Here, an organization's app security requirements comprise a
subset of general requirements and a subset
of context-sensitive requirements. The shaded areas represent
an organization's app security requirements
that are applied to one or more apps. During the app
approval/rejection activity of an app vetting process,
the auditor reviews reports and risk assessments generated by
analyzers to determine an app's satisfaction
or violation of general requirements as well as evaluates
organization-specific vetting criteria to
determine the app's satisfaction or violation of context-sensitive
requirements.
Developing app security requirements involves identifying the
security needs and expectations of the
organization and identifying both general and context-sensitive
requirements that address those needs and
expectations. For example, if an organization has a need to
ensure that apps do not leak PII, the general
requirement "Apps must not leak PII" should be defined. If the organization has an additional need to ensure that apps that record audio or video must not be used in a SCIF, then the context-sensitive requirement "Apps that record audio or video must not be used in a SCIF" should be used. After deriving general
requirements, an organization should explore available
analyzers that test for the satisfaction or violation
of those requirements. In this case, for example, an analyzer
that tests an app for the leakage of PII should
be used. Similarly, for context-sensitive requirements,
an auditor should identify appropriate
organization-specific vetting criteria (e.g., the features or
purpose of the app and the intended deployment
environment) to determine if the app records audio or video and
if the app will be used in a SCIF.
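The SCIF example amounts to a simple predicate over organization-specific vetting criteria. A hedged sketch, assuming those criteria are captured as an app's feature set and intended deployment environment:

```python
# Illustrative check of the context-sensitive requirement from the
# text: "Apps that record audio or video must not be used in a SCIF".
# The feature labels and environment string are assumptions.
def violates_scif_rule(app_features: set[str], environment: str) -> bool:
    return environment == "SCIF" and bool({"audio", "video"} & app_features)
```

In practice an auditor, not a tool, makes this determination; the sketch only shows how such criteria could be recorded consistently.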
The following questionnaire may be used by organizations to
help identify their specific security needs in
order to develop their app security requirements.
Figure 2. Organizational app security requirements.
• What are the organization's security needs, secure behavior expectations, and/or risk management needs? For example, what assets in the organization must be protected, and what events must be avoided? What is the impact if the assets are compromised or the undesired events occur?
• Under what circumstances should an app not be used?
• What is the provenance of the app, e.g., its developer or app store, if known?
• What assets can the organization's mobile devices access, and is there a concern that mobile devices will be used as a springboard to attack those assets?
• What kinds of threats is the organization concerned about? For example:
o What information about personnel would harm the organization as a whole if it were disclosed?
o Is there a danger of espionage?
o Is there a danger of malware designed to disrupt operations at critical moments?
o What kinds of entities might profit from attacking the organization?
• What is the communications environment? Do devices carried by personnel connect to public providers, a communication infrastructure owned by the organization, or both at different times? How secure is the organization's wireless infrastructure if it has one?
• Is the app vetting process expected to evaluate other aspects of mobile device security such as the operating system (OS), firmware, hardware, and communications? Is the app vetting process meant only to perform evaluations or does it play a broader role in the organization's approach to mobile security? Is the vetting process part of a larger security infrastructure, and if so, what are the other components? Note that only software-related vetting is in the scope of this document, but if the organization expects more than this, the vetting process designers should be aware of it.
• What volume of apps is the vetting process expected to handle, and how much variation is permissible in the time needed to process individual apps? High volume combined with low funding might necessitate compromises in the quality of the evaluation. Some organizations may believe that a fully automated assessment pipeline provides more risk reduction than it actually does and therefore expect unrealistically low day-to-day expenses.
Organizations should perform a risk analysis [10] to understand
and document the potential security risks
in order to help identify appropriate security requirements.
Some types of vulnerabilities or weaknesses
discovered in apps may be mitigated by other security controls
included in the enterprise mobile device
architecture. With respect to these other controls, organizations
should review and document the mobile
device hardware and operating system security controls (e.g.,
encrypted file systems), to identify which
security and privacy requirements can be addressed by the
mobile device itself. In addition, mobile
enterprise security technologies such as Mobile Device
Management (MDM) solutions should also be
reviewed to identify which security requirements can be
addressed by these technologies.
2.3.2 Understanding Vetting Limitations
As with any software assurance process, there is no guarantee
that even the most thorough vetting process
will uncover all potential vulnerabilities. Organizations should
be made aware that although app security
assessments should generally improve the security posture of
the organization, the degree to which they
do so may not be easily or immediately ascertained.
Organizations should also be made aware of what the
vetting process does and does not provide in terms of security.
Organizations should also be educated on the value of humans
in security assessment processes and
ensure that their app vetting does not rely solely on automated
tests. Security analysis is primarily a
human-driven process [11, 12]; automated tools by themselves
cannot address many of the contextual and
nuanced interdependencies that underlie software security. The
most obvious reason for this is that fully
understanding software behavior is one of the classic impossible
problems of computer science [13], and
in fact current technology has not even reached the limits of
what is theoretically possible. Complex,
multifaceted software architectures cannot be fully analyzed by
automated means.
A further problem is that current software analysis tools do not
inherently understand what software has
to do to behave in a secure manner in a particular context. For
example, failure to encrypt data transmitted
to the cloud may not be a security issue if the transmission is
tunneled through a virtual private network
(VPN). Even if the security requirements for an app have been
correctly predicted and are completely
understood, there is no current technology for unambiguously
translating human-readable requirements
into a form that can be understood by machines.
For these reasons, security analysis requires humans (e.g.,
auditors) in the loop, and by extension the
quality of the outcome depends, among other things, on the
level of human effort and level of expertise
available for an evaluation. Auditors should be familiar with
standard processes and best practices for
software security assessment (e.g., see [11, 14, 15, 16]). A
robust app vetting process should use multiple
assessment tools and processes, as well as human interaction, to
be successful; reliance on only a single
tool, even with human interaction, is a significant risk because
of the inherent limitations of each tool.
2.3.3 Budget and Staffing
App software assurance activity costs should be included in
project budgets and should not be an
afterthought. Such costs may include licensing costs for
analyzers and salaries for auditors, approvers,
and administrators. Organizations that hire contractors to
develop apps should specify that app assessment
costs be included as part of the app development process. Note,
however, that for apps developed in-
house, attempting to implement app assessments solely at the
end of the development effort will lead to
increased costs and lengthened project timelines. It is strongly
recommended to identify potential
vulnerabilities or weaknesses during the development process
when they can still be addressed by the
original developers.
To provide an optimal app vetting process implementation, it is
critical for the organization to hire
personnel with appropriate expertise. For example,
organizations should hire auditors experienced in
software security and information assurance as well as
administrators experienced in mobile security.
3 App Testing
In the app testing activity, an app is sent by an administrator to
one or more analyzers. After receiving an
app, an analyzer assumes control of the app in order to
preprocess it prior to testing. This initial part of the
app testing activity introduces a number of security-related issues, but such issues pertain more to the security of the app file itself than to the vulnerabilities that might exist in the app. Regardless, such
issues should be considered by the organization for
ensuring the integrity of the app, protecting
intellectual property, and ensuring compliance with licensing
and other agreements. Most of these issues
are related to unauthorized access to the app.
If an app is submitted electronically by an administrator to an
analyzer over a network, issues may arise if
the transmission is made over unencrypted channels, potentially
allowing unauthorized access to the app
by an attacker. After an app is received, an analyzer will often
store an app on a local file system,
database, or repository. If the app is stored on an unsecured
machine, unauthorized users could access the
app, leading to potential licensing violations (e.g., if an
unauthorized user copies a licensed app for their
personal use) and integrity issues (e.g., if the app is modified or
replaced with another app by an attacker).
In addition, violations of intellectual property may also arise if
unauthorized users access the source code
of the app. Even if source code for an app is not provided by an
administrator, an analyzer will often be
required to decompile the (binary) app prior to testing. In both
cases, the source or decompiled code will
often be stored on a machine by the analyzer. If that machine is
unsecured, the app's code may be
accessible to unauthorized users, potentially allowing a
violation of intellectual property. Note that
organizations should review an app’s End User License
Agreement (EULA) regarding the sending of an
app to a third-party for analysis as well as the reverse
engineering of the app’s code by the organization or
a third-party to ensure that the organization is not in violation
of the EULA.
If an analyzer is external to the organization, then the
responsibility of ensuring the security of the app file
falls on the analyzer. Thus, the best that an organization can do
to ensure the integrity of apps, the
protection of intellectual property, and the compliance with
licensing agreements is to understand the
security implications surrounding the transmission, storage, and
processing of an app by the specific
analyzer.
After an app has been preprocessed by an analyzer, the analyzer
will then test the app for software
vulnerabilities that violate one or more general app security
requirements. In this section, we describe
these general requirements and the testing approaches used to
detect vulnerabilities that violate these
requirements. Descriptions of specific Android and iOS app
vulnerabilities are provided in Appendices B
and C. In addition, recommendations related to the app testing
activity are described in Appendix A.2. For
more information on security testing and assessment tools and
techniques that may be helpful for
performing app testing, see [17].
3.1 General Requirements
A general requirement is an app security requirement that
specifies a software characteristic or behavior
that an app should exhibit in order to be considered secure.
General app security requirements include:
described; all buttons, menu items, and
other interfaces must work. Error conditions must be handled
gracefully, such as when a service or
function is unavailable (e.g., disabled, unreachable, etc.).
functionality, such as data exfiltration
performed by malware, must not be supported.
• Limiting permissions: Apps should have only the minimum permissions necessary and should only grant other applications the necessary permissions.
• Protecting sensitive data: Apps that collect, store, or transmit sensitive data should protect the confidentiality and integrity of this data. This category includes preserving privacy, such as asking permission to use personal information and using it only for authorized purposes.
• Securing app code dependencies: Apps must use dependencies, such as on libraries, in a reasonable manner and not for malicious reasons.
• Testing app updates: New versions of an app should be tested to identify any new weaknesses.
Well-written security requirements are most useful to auditors
when they can easily be translated into
specific evaluation activities. The various processes concerning
the elicitation, management, and tracking
of requirements are collectively known as requirements
engineering [18, 19], and this is a relatively
mature activity with tools support. Presently, there is no
methodology for driving all requirements down
to the level where they could be checked completely3 by
automatic software analysis, while ensuring that
the full scope of the original requirements is preserved. Thus,
the best we can do may be to document the
process in such a way that human reviews can find gaps and
concentrate on the types of vulnerabilities, or
weaknesses, that fall into those gaps.
3.1.1 Enabling Authorized Functionality
An important part of confirming authorized functionality is
testing the mobile user interface (UI) display,
which can vary greatly for different device screen sizes and
resolutions. The rendering of images or
position of buttons may not be correct due to these differences.
If applicable, the UI should be viewed in
both portrait and landscape mode. If the page allows for data
entry, the virtual keyboard should be
invoked to confirm it displays and works as expected. Analyzers
should also test any device physical
sensors used by the app such as Global Positioning System
(GPS), front/back cameras, video, microphone
for voice recognition, accelerometers (or gyroscopes) for
motion and orientation-sensing, and
communication between devices (e.g., phone “bumping”).
Telephony functionality encompasses a wide variety of method calls that an app can use if given the proper permissions, including making phone calls, transmitting Short Message Service (SMS) messages, and retrieving unique phone identifier information. Most, if not all, of these telephony events are sensitive, and some of them provide access to PII. Many apps make use of the
some of them provide access to PII. Many apps make use of the
unique phone identifier information in
order to keep track of users instead of using a
username/password scheme. Another set of sensitive calls
can give an app access to the phone number. Many legitimate
apps use the phone number of the device as
a unique identifier, but this is a bad coding practice and can
lead to loss of PII. To make matters worse,
many of the carrier companies do not take any caution in
protecting this information. Any app that makes
use of the phone call or SMS messages that is not clearly stated
in the EULA or app description as
intending to use them should be immediate cause for suspicion.
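As one illustration, an analyzer could flag decompiled source that invokes identifier- or SMS-related telephony calls. The method names below are real Android `TelephonyManager`/`SmsManager` methods, but the naive string matching is only a sketch; real analyzers inspect call graphs and permissions, not raw text.

```python
# Naive static scan of decompiled app source for telephony calls that
# access unique identifiers, the phone number, or send SMS messages.
import re

SENSITIVE_CALLS = ("getDeviceId", "getLine1Number", "sendTextMessage")

def scan_telephony(source: str) -> set[str]:
    # Return the subset of sensitive calls that appear as invocations.
    return {c for c in SENSITIVE_CALLS if re.search(rf"\b{c}\s*\(", source)}
```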
3.1.2 Preventing Unauthorized Functionality
Some apps (as well as their libraries) are malicious and
intentionally perform functionality that is not
disclosed to the user and violates most expectations of secure
behavior. This undocumented functionality
may include exfiltrating confidential information or PII to a
third party, defrauding the user by sending
premium SMS messages (premium SMS messages are meant to
be used to pay for products or services),
3 "Completely" is meant here in the mathematical sense: no violations of the requirement will be overlooked.
or tracking users’ locations without their knowledge. Other
forms of malicious functionality include
injection of fake websites into the victim’s browser in order to
collect sensitive information, acting as a
starting point for attacks on other devices, and generally
disrupting or denying operation. Another
example is the use of banner ads that may be presented in a
manner which causes the user to
unintentionally select ads that may attempt to deceive the user.
These types of behaviors support phishing
attacks and may not be detected by mobile antivirus or software
vulnerability scanners as they are served
dynamically and not available for inspection prior to the
installation of the app.
Open source and commercially available malware detection and
analysis tools can identify both known
and new forms of malware. These tools can be incorporated as
part of an organization’s enterprise mobile
device management (MDM) solution, organization’s app store,
or app vetting process. Note that even if
these tools have not detected known malware in an app,
one should not assume that malicious
functionality is not present. It is essentially impossible for a
vetting process to guarantee that a piece of
software is free from malicious functionality. Mobile devices
may have some built-in protections against
malware, for example app sandboxing and user approval in
order to access subsystems such as the GPS or
text messaging. However, sandboxing only makes it more
difficult for apps to interfere with one another
or with the operating system; it does not prevent many types of
malicious functionality.
A final category of unauthorized functionality to consider is
communication with disreputable websites,
domains, servers, etc. If the app is observed to communicate
with sites known to harbor malicious
advertising, spam, phishing attacks, or other malware, this is a
strong indication of unauthorized
functionality.
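A check of this kind can be sketched as a comparison of the endpoints an app was observed contacting during dynamic analysis against a blocklist of known-bad domains. The blocklist entries here are placeholders, not real threat intelligence.

```python
# Flag observed network endpoints that match a blocklist of domains
# known to harbor malicious advertising, spam, phishing, or malware.
BLOCKLIST = {"bad-ads.example", "phish.example"}

def flag_endpoints(observed: set[str]) -> set[str]:
    # Match exact blocklisted domains and any of their subdomains.
    return {d for d in observed
            if d in BLOCKLIST or any(d.endswith("." + b) for b in BLOCKLIST)}
```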
3.1.3 Limiting Permissions
Some apps have permissions that are not consistent with
EULAs, app permissions, app descriptions, in-
program notifications, or other expected behaviors and
would not be considered to exhibit secure
behavior. An example is a wallpaper app that collects and stores
sensitive information, such as passwords
or PII, or accesses the camera and microphone. Although these
apps might not have malicious intent—
they may just be poorly designed—their excessive permissions
still expose the user to threats that stem
from the mismanagement of sensitive data and the loss or theft
of a device. Similarly, some apps have
permissions assigned that they do not actually use. Moreover, users routinely and reflexively grant whatever access permissions a newly installed app requests, when their mobile device OS presents such requests. It is important to note that identifying apps that have
excessive permissions is a manual process.
It involves a subjective decision from an auditor that certain
permissions are not appropriate and that the
behavior of the app is insecure.
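One automatable input to that auditor judgment is a diff between the permissions an app declares and the set its description justifies. The permission names below are real Android permissions; the per-app expected set is an illustrative assumption an auditor would derive from the app's description and EULA.

```python
# Compare declared permissions against a per-app allowlist; anything
# left over is a candidate "excessive permission" for auditor review.
def excessive_permissions(declared: set[str], expected: set[str]) -> set[str]:
    return declared - expected

# Wallpaper-app example from the text: audio recording is not
# justified by the app's stated purpose.
wallpaper_expected = {"android.permission.SET_WALLPAPER"}
declared = {"android.permission.SET_WALLPAPER",
            "android.permission.RECORD_AUDIO"}
```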
Excessive permissions can manifest in other ways including:
• File input/output (I/O) and removable storage: File I/O can be a security risk, especially when the I/O happens on a removable or unencrypted portion of the file system. Regarding the app's own behavior, file scanning or access to files that are not part of an app's own directory could be an indicator of malicious activity or bad coding practice. Files written to external storage, such as a removable Secure Digital (SD) card, may be readable and writeable by other apps that may have been granted different permissions, thus placing data written to unprotected storage at risk.
• Privileged commands: Some apps invoke lower-level command line programs, which may allow access to low-level structures, such as the root directory, or may allow access to sensitive commands. These programs potentially allow a malicious app access to various system resources and information (e.g., finding out the running processes on a device). Although the mobile operating system typically offers protection against directly accessing resources beyond what is
available to the user account that the app is running under, this
opens up the potential for privilege
elevation attacks.
• API conformance: The analyzer should verify that the app only uses designated APIs from the vendor-provided software development kit (SDK) and uses them properly; no other API calls are permitted.4 For the permitted APIs, the analyzer should note where the APIs are transferring data to and from the app.
3.1.4 Protecting Sensitive Data
Many apps collect, store, and/or transmit sensitive data, such as
financial data (e.g., credit card numbers),
personal data (e.g., social security numbers), and login
credentials (e.g., passwords). When implemented
and used properly, cryptography can help maintain the
confidentiality and integrity of this data, even if a
protected mobile device is lost or stolen and falls into an
attacker’s hands. However, only certain
cryptographic implementations have been shown to be
sufficiently strong, and developers who create
their own implementations typically expose their apps to
exploitation through man-in-the-middle attacks
and other forms.
Federal Information Processing Standard (FIPS) 140-2
precludes the use of unvalidated cryptography for
the cryptographic protection of sensitive data within federal
systems. Unvalidated cryptography is viewed
by NIST as providing no protection to the data. A list of FIPS
140-2-validated cryptographic modules and
an explanation of the applicability of the Cryptographic Module
Validation Program (CMVP) can be
found on the CMVP website [20]. Analyzers should reference
the CMVP-provided information to
determine if each app is using properly validated cryptographic
modules and approved implementations
of those modules.
It is not sufficient to use appropriate cryptographic
algorithms and modules; cryptography
implementations must also be maintained in accordance with
approved practices. Cryptographic key
management is often performed improperly, leading to
exploitable weaknesses. For example, the presence
of hard-coded cryptographic keying material such as keys,
initialization vectors, etc. is an indication that
cryptography has not been properly implemented by an app and
might be exploited by an attacker to
compromise data or network resources. Guidelines for proper
key management techniques can be found
in [21]. Another example of a common cryptographic
implementation problem is the failure to properly
validate digital certificates, leaving communications protected
by these certificates subject to man-in-the-
middle attacks.
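The hard-coded keying material problem mentioned above lends itself to a simple static heuristic: flag long hex constants in source text. The pattern and length threshold below are assumptions; real analyzers combine such heuristics with data-flow analysis to reduce false positives.

```python
import re

# Heuristic sketch: flag string literals that look like hard-coded keys
# or IVs (long hexadecimal constants). Threshold of 32 hex digits
# (128 bits) is an illustrative assumption.

HEX_CONST = re.compile(r'"([0-9a-fA-F]{32,})"')

def find_suspect_constants(source_text):
    """Return hex-literal strings long enough to be key material."""
    return HEX_CONST.findall(source_text)

sample = 'byte[] iv = hex("00112233445566778899aabbccddeeff");'
hits = find_suspect_constants(sample)
```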
Privacy considerations need to be taken into account for apps
that handle PII, including mobile-specific
personal information like location data and pictures taken by
onboard cameras, as well as the broadcast
ID of the device. This needs to be dealt with in the UI as well as
in the portions of the apps that
manipulate this data. For example, an analyzer can verify that
the app complies with privacy standards by
masking characters of any sensitive data within the page
display, but audit logs should also be reviewed
when possible for appropriate handling of this type of
information. Another important privacy
consideration is that sensitive data should not be disclosed
without prior notification to the user by a
prompt or a license agreement.
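The character-masking check mentioned above can be illustrated with a small helper; the show-last-four policy is an assumption, not a mandated format.

```python
# Minimal sketch of display masking: reveal only the last few characters
# of a sensitive value, as an auditor might expect a compliant UI to do.

def mask(value, visible=4, fill="*"):
    """Mask all but the trailing `visible` characters of `value`."""
    if len(value) <= visible:
        return fill * len(value)
    return fill * (len(value) - visible) + value[-visible:]

masked = mask("4111111111111111")   # "************1111"
```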
Another concern with protecting sensitive data is data leakage. For example, data is often leaked through unauthorized network connections. Apps can use network connections for legitimate reasons, such as retrieving content, including configuration files, from a variety of external systems. Apps can receive and process information from external sources and also send information out, potentially exfiltrating data from the mobile device without the user’s knowledge. Analyzers should consider not only cellular and Wi-Fi usage, but also Bluetooth, NFC, and other forms of networking. Another common source of data leakage is shared system-level logs, where multiple apps log their security events to a single log file. Some of these log entries may contain sensitive information, recorded either inadvertently or maliciously, so that other apps can retrieve it. Analyzers should examine app logs for signs of sensitive data leakage.

4 The existence of an API raises the possibility of malicious use. Even if the APIs are used properly, they may pose risks because of covert channels, unintended access to other APIs, access to data exceeding original design, and the execution of actions outside of the app’s normal operating parameters.
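The log examination described above can be sketched as a pattern scan over shared log lines. The two patterns below (card-like digit runs and password fragments) are illustrative assumptions; production scanners use much broader rule sets.

```python
import re

# Illustrative log scan: flag lines that appear to contain sensitive
# values. The rule set here is a minimal assumption for demonstration.

PATTERNS = [
    re.compile(r"\b\d{16}\b"),              # card-number-like digit runs
    re.compile(r"password\s*=\s*\S+", re.I),  # credential fragments
]

def scan_log(lines):
    """Return log lines matching any sensitive-data pattern."""
    return [ln for ln in lines if any(p.search(ln) for p in PATTERNS)]

log = [
    "INFO app started",
    "DEBUG password=hunter2",
    "INFO charged card 4111111111111111",
]
leaks = scan_log(log)   # two of the three lines are flagged
```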
3.1.5 Securing App Code Dependencies
In general, an app must not use any unsafe coding or building practices. Specifically, an app should properly use other bodies of code, such as libraries, only when needed, and should not attempt to obfuscate malicious activity.5 Examples include:
• Native Methods: Native method calls are typically calls to a library function that has already been loaded into memory. These methods provide a way for an app to reuse code that was written in a different language. These calls, however, can provide a level of obfuscation that impacts the ability to perform analysis of the app.
• Loaded Libraries and Classes: Apps may load third-party libraries and classes at run time. Third-party libraries and classes can be closed source, can have self-modifying code, or can execute unknown server-side code.
Legitimate uses for loaded libraries and
classes might be for the use of a cryptographic library or a
graphics API. Malicious apps can use
library and class loading as a method to avoid detection. From a
vetting perspective, libraries and
classes are also a concern because they introduce outside code
to an app without the direct control of
the developer. Libraries and classes can have their own known
vulnerabilities, so an app using such
libraries or classes could expose a known vulnerability on an
external interface. Tools are available
that can list libraries, and these libraries can then be searched
for in vulnerability databases to
determine if they have known weaknesses.
• Dynamic Behavior: When apps execute, they exhibit a variety
of dynamic behaviors. Not all of these
operating behaviors are a result of user input. Executing apps
may also receive inputs from data
stored on the device. The key point here is the need to know
where data used by an app originates
from and knowing whether and how it gets sanitized. It is
critical to recognize that data downloaded
from an external source is particularly dangerous as a potential
exploit vector unless it is clear how
the app prevents insecure behaviors resulting from data from a
source not trusted by the organization
using the app. Data downloaded from external sources should be
treated as potential exploits unless it
can be shown that the app is able to thwart any negative
consequences from externally supplied data,
especially data from untrusted sources. This is nearly
impossible, thus requiring some level of risk-
tolerance mitigation.
• Inter-Application Communications: Apps that communicate
with each other can provide useful
capabilities and productivity improvements, but these inter-
application communications can also
present a security risk. For example, in Android platforms,
inter-application communications are
allowed but regulated by what Android calls “intents.” An
intent can be used to start an app
component in a different app or the same app that is sending the
intent. Intents can also be used to interact with and request resources from the operating system. In general, analyzers should be aware of situations where an app is using nonhuman entities to make API calls to other devices, to communicate with third-party services, or to otherwise interact with other systems.

5 There are both benign and malicious reasons for obfuscating code (e.g., protecting intellectual property and concealing malware, respectively). Code obfuscation makes static analysis difficult, but dynamic analysis can still be effective.
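The text above notes that tools can list an app's bundled libraries and search vulnerability databases for known weaknesses. A minimal sketch of that lookup follows; `KNOWN_VULNS` is a stand-in dictionary for a real feed such as the NVD, and the CVE entry shown is illustrative rather than a verified mapping.

```python
# Sketch: look up each bundled (library, version) pair in a
# vulnerability feed. KNOWN_VULNS stands in for a real database.

KNOWN_VULNS = {
    ("libpng", "1.6.19"): ["CVE-2015-8540"],   # illustrative entry
}

def vulnerable_libraries(app_libs):
    """Return {(name, version): [CVE ids]} for libraries with known flaws."""
    findings = {}
    for name, version in app_libs:
        cves = KNOWN_VULNS.get((name, version))
        if cves:
            findings[(name, version)] = cves
    return findings

report = vulnerable_libraries([("libpng", "1.6.19"), ("zlib", "1.2.13")])
```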
3.1.6 Testing App Updates
To provide long-term assurance of the software throughout its
life cycle, all apps, as well as their updates,
should go through a software assurance vetting process,
because each new version of an app can
introduce new unintentional weaknesses or unreliable code.
Apps should be vetted before being released
to the community of users within the organization. The purpose
is to validate that the app adheres to a
predefined acceptable level of security risk and identify whether
the developers have introduced any
latent weaknesses that could make the IT infrastructure
vulnerable.
It is increasingly common for mobile devices to be capable of
and be configured to automatically
download app updates. Mobile devices are also generally
capable of downloading apps of the user’s
choice from app stores. Ideally, each app and app update should
be vetted before it is downloaded onto
any of the organization’s mobile devices. If unrestricted app
and app update downloads are permitted, this
can bypass the vetting process. There are several options to
work around this, depending on the mobile
device platform and the degree to which the mobile devices are
managed by the organization. Possible
options include disabling all automatic updates, only permitting
use of an organization-provided app
store, preventing the installation of updates through the use of
application whitelisting software, and
enabling MDM mobile application management features that
make app vetting a precondition for
allowing users to download apps and app updates. Further
discussion of this is out of scope because it is
largely addressed operationally as part of OS security practices.
3.2 Testing Approaches
To detect the satisfaction or violation of a general requirement,
an analyzer tests an app for the presence
of software vulnerabilities. Such testing may involve (1)
correctness testing, (2) analysis of the app's
source code or binary code, (3) the use of static or dynamic
analysis, and (4) manual or automatic testing
of the app.
3.2.1 Correctness Testing
One approach for testing an app is software correctness testing
[22]. Software correctness testing is the
process of executing a program with the intent of finding errors.
Although software correctness testing is
aimed primarily at improving quality assurance, verifying and
validating described functionality, or
estimating reliability, it can also help to reveal potential
security vulnerabilities since such vulnerabilities
often have a negative effect on the quality, functionality, and
reliability of the software. For example,
software that crashes or exhibits unexpected behavior is often
indicative of a security flaw. One of the
advantages of software correctness testing is that it is
traditionally based on specifications of the specific
software to be tested. These specifications can then be
transformed into requirements that specify how the
software is expected to behave under test. This is distinguished
from security assessment approaches that
often require the tester to derive requirements themselves; often
such requirements are largely based on
security requirements that are considered to be common across
many different software artifacts and may
not test for vulnerabilities that are unique to the software under
test. Nonetheless, because of the tight
coupling between security and the quality, functionality, and
reliability of software, it is recommended
that software correctness testing be performed when possible.
3.2.2 Source Code Versus Binary Code
A major factor in performing app testing is whether source code
is available. Typically, apps downloaded
from an app store do not provide access to source code. When
source code is available, such as in the case
of an open source app, a variety of tools can be used to analyze
it. The goals of performing a source code
review are to find vulnerabilities in the source code and to
verify the results of analyzers. The current
practice is that these tasks are performed manually by a secure
code reviewer who reads through the
contents of source code files. Even with automated aids, the
analysis is labor-intensive. Benefits to using
automated static analysis tools include introducing consistency
between different reviews and making
review of large codebases possible. Reviewers should
generally use automated static analysis tools
whether they are conducting an automated or a manual review,
and they should express their findings in
terms of Common Weakness Enumeration (CWE)
identifiers or some other widely accepted
nomenclature. Performing a secure code review requires
software development and domain-specific
knowledge in the area of application security. Organizations
should ensure that the individuals performing
source code reviews have the necessary skills and expertise.
Note that organizations that intend to develop
apps in-house should also refer to guidance on secure
programming techniques and software quality
assurance processes to appropriately address the entire software
development life cycle [11, 23].
When source code is not available, binary code can be vetted
instead. In the context of apps, the term
“binary code” can refer to either byte-code or machine code.
For example, Android apps are compiled to
byte-code that is executed on a virtual machine, similar to the
Java Virtual Machine (JVM), but they can
also come with custom libraries that are provided in the form of
machine code, that is, code executed
directly on the mobile device's CPU. Android binary apps
include byte-code that can be analyzed without
hardware support using emulated and virtual environments.
3.2.3 Static Versus Dynamic Analysis
Analysis tools are often characterized as being either static or
dynamic.6 Static analysis examines the app
source code and binary code, and attempts to reason over all
possible behaviors that might arise at
runtime. It provides a level of assurance that analysis results are
an accurate description of the program’s
behavior regardless of the input or execution environment.
Dynamic analysis operates by executing a
program using a set of input use cases and analyzing the
program’s runtime behavior. In some cases, the
enumeration of input test cases is large, resulting in lengthy
processing times. However, methods such as
combinatorial testing can reduce the number of dynamic input
test case combinations, reducing the
amount of time needed to derive analysis results [25]. Still,
dynamic analysis is unlikely to provide 100
percent code coverage [26]. Organizations should consider the
technical trade-off differences between
what static and dynamic tools offer and balance their usage
given the organization’s software assurance
goals.
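The combinatorial-testing reduction cited above [25] can be illustrated with a small greedy pairwise generator: instead of running the full cross-product of input options, pick cases until every pair of option values has appeared together at least once. The factors shown are invented for the example.

```python
from itertools import combinations, product

# Sketch of pairwise (2-way combinatorial) test reduction via a greedy
# covering heuristic. Factors and values are illustrative.

def pairwise_suite(factors):
    """Greedily choose test cases covering every pair of factor values."""
    all_pairs = set()
    for (i, a), (j, b) in combinations(enumerate(factors), 2):
        for x in a:
            for y in b:
                all_pairs.add((i, x, j, y))
    suite, uncovered = [], set(all_pairs)
    candidates = list(product(*factors))
    while uncovered:
        # pick the candidate covering the most still-uncovered pairs
        best = max(candidates, key=lambda t: sum(
            (i, t[i], j, t[j]) in uncovered
            for i, j in combinations(range(len(factors)), 2)))
        suite.append(best)
        for i, j in combinations(range(len(factors)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return suite

factors = [["wifi", "cell"], ["en", "fr", "de"], ["phone", "tablet"]]
suite = pairwise_suite(factors)   # far fewer than the 2*3*2 = 12 full cases
```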
Static analysis requires that binary code be reverse engineered when source is not available, which is relatively easy for byte-code,7 but can be difficult for machine code. Many commercial static analysis tools already support byte-code, as do a number of open source and academic tools.8 For machine code, it is especially hard to track the flow of control across many functions and to track data flow through variables, since most variables are stored in anonymous memory locations that can be accessed in different ways. The most common way to reverse engineer machine code is to use a disassembler or a decompiler that tries to recover the original source code. These techniques are especially useful if the purpose of reverse engineering is to allow humans to examine the code, since the outputs are in a form that can be understood by humans with appropriate skills. But even the best disassemblers make mistakes [32], and some of those mistakes can be corrected with formal static analysis. If the code is being reverse engineered for static analysis, then it is often preferable to disassemble the machine code directly to a form that the static analyzer understands rather than creating human-readable code as an intermediate byproduct. A static analysis tool that is aimed at machine code is likely to automate this process.

6 For mobile devices, there are analysis tools that label themselves as performing behavioral testing. Behavioral testing (also known as behavioral analysis) is a form of static and dynamic testing that attempts to detect malicious or risky behavior, such as the oft-cited example of a flashlight app that accesses a contact list [24]. This publication assumes that any mention of static or dynamic testing also includes behavioral testing as a subset of its capabilities.
7 The ASM framework [27] is a commonly used framework for byte-code analysis.
8 Such as [28, 29, 30, 31].
For dynamic testing (as opposed to static analysis), the most
important requirement is to be able to see the
workings of the code as it is being executed. There are two
primary ways to obtain this information. First,
an executing app can be connected to a remote debugger, and
second, the code can be run on an emulator
that has debugging capabilities built into it. Running the code
on a physical device allows the analyzer to
select the exact characteristics of the device on which the app is
intended to be used and can provide a
more accurate view of how the app will behave. On the
other hand, an emulator provides more
control, especially when the emulator is open source and can be
modified by the evaluator to capture
whatever information is needed. Although emulators can
simulate different devices, they do not simulate
all of them, and the simulation may not be completely accurate.
Note that malware increasingly detects
the use of emulators as a testing platform and changes its
behavior accordingly to avoid detection.
Therefore, it is recommended that analyzers use a combination
of emulated and physical mobile devices
so as to avoid false negatives from malware that employs anti-
detection techniques.
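The anti-detection problem above can be made concrete with a heuristic of the kind malware actually uses, so analyzers know what to neutralize on their test platforms. The property names loosely mimic Android build fields but are assumptions here, and the IMEI value is invented.

```python
# Illustrative emulator-detection heuristic (the kind malware employs).
# Property names and values are assumptions for demonstration only.

def looks_like_emulator(props):
    """Guess whether device properties describe an emulator."""
    markers = ("generic", "goldfish", "ranchu", "sdk")
    fp = props.get("fingerprint", "").lower()
    return any(m in fp for m in markers) \
        or props.get("imei") in (None, "000000000000000")

emulated = looks_like_emulator({"fingerprint": "generic/sdk_gphone_x86"})
physical = looks_like_emulator({"fingerprint": "google/raven",
                                "imei": "356789104983529"})  # hypothetical
```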
Useful information can be gleaned by observing an app’s
behavior even without knowing the purposes of
individual functions. For example, an analyzer can observe how
the app interacts with its external
resources, recording the services it requests from the operating
system and the permissions it exercises.
Note that although many of the device capabilities used by an
app may be inferred by an analyzer (e.g.,
access to a device's camera will be required of a camera app),
an app may be permitted access to
additional device capabilities that are beyond the scope of its
described functionality (e.g., a camera app
accessing the device's network). Moreover, if the behavior of
the app is observed for specific inputs, the
evaluator can ask whether the capabilities being exercised make
sense in the context of those particular
inputs. For example, a calendar app may legitimately have
permission to send calendar data across the
network in order to sync across multiple devices, but if the user
has merely asked for a list of the day’s
appointments and the app sends data that is not simply part of
the handshaking process needed to retrieve
data, the analyzer might investigate what data is being sent and
for what purpose.
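The reasoning above, checking whether exercised capabilities make sense for the given input, can be sketched as a lookup against an expectation table. The capability names and the table are illustrative assumptions, not observed app behavior.

```python
# Sketch: flag capabilities exercised by an app that are not expected
# for the user's request. Names and expectations are illustrative.

EXPECTED = {
    "list_today": {"read_calendar"},
    "sync": {"read_calendar", "network_send"},
}

def unexpected_capabilities(request, observed):
    """Return observed capabilities not expected for this request."""
    return sorted(set(observed) - EXPECTED.get(request, set()))

# A sync legitimately sends data over the network; a simple listing
# of the day's appointments should not.
suspicious = unexpected_capabilities("list_today",
                                     ["read_calendar", "network_send"])
```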
3.2.4 Automated Tools
In most cases, analyzers used by an app vetting process will
consist of automated tools that test an app for
software vulnerabilities and generate a report and risk
assessment. Classes of automated tools include:
• Emulators: An emulator allows the use of a computer to view how the app will display on a specific device without the use of an actual device. Because the tool provides access to the UI, the app features can also be tested. However, interaction with actual device hardware features, such as a camera or accelerometer, cannot be simulated and requires an actual device.
• Remote Device Access: These services allow an analyzer to view and access an actual device from a computer. This allows the testing of most device features that do not require physical movement, such as using a video camera or making a phone call. Remote debuggers are a common way of examining and understanding apps.
• Automated Testing: Basic mobile functionality testing lends itself well to automated testing. There are several tools available that allow creation of automated scripts to remotely run regression test cases on specific devices and operating systems.
o User Interface-Driven Testing: If the expected results being
verified are UI-specific, pixel
verification can be used. This method takes a “screen shot” of a
specified number of pixels from a
page of the app and verifies that it displays as expected (pixel
by pixel).
o Data-Driven Testing: Data-driven verification uses labels or
text to identify the section of the
app page to verify. For example, it will verify the presence of a
“Back” button, regardless of
where on the page it displays. Data-driven automated test cases
are less brittle and allow for some
page design change without rewriting the script.
o Fuzzing: Fuzzing normally refers to the automated generation
of test inputs, either randomly or
based on configuration information describing data format.
Fault injection tools may also be
referred to as “fuzzers.” This category of test tools is not
necessarily orthogonal to the others, but
its emphasis is on fast and automatic generation of many test
scenarios.
o Network-Level Testing: Fuzzers, penetration test tools, and
human-driven network simulations
can help determine how the app interacts with the outside
world. It may be useful to run the app
in a simulated environment during network-level testing so that
its response to network events
can be observed more closely.
Note that automated testing here refers to tools that automate the repetition of tests after they have been engineered by humans. This makes it useful for tests that need to be repeated often (e.g., tests that will be used for many apps or executed many times, with variations, for a single app).
o Static Analysis Tools: These tools generally analyze the
behavior of software (either source or
binary code) to find software vulnerabilities, though—if
appropriate specifications are available—
static analysis can also be used to evaluate correctness. Some
static analysis tools operate by
looking for syntax embodying a known class of potential
vulnerabilities and then analyzing the
behavior of the program to determine whether the weaknesses
can be exploited. Static tools
should encompass the app itself, but also the data it uses to the
extent that this is possible; keep in
mind that the true behavior of the app may depend critically on
external resources.9
o Metrics Tools: These tools measure aspects of software not
directly related to software behavior
but useful in estimating auxiliary information such as the effort
of code evaluation. Metrics can
also give indirect indications about the quality of the design and
development process.
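Of the tool classes above, fuzzing is the simplest to sketch: generate many malformed inputs quickly and record any that make the target misbehave. The `parse_record` target below is an invented stand-in, not part of any real app or tool.

```python
import random

# Minimal random fuzzer in the spirit of the description above.
# parse_record is a hypothetical parsing target for demonstration.

def parse_record(data: bytes):
    if len(data) < 4:
        raise ValueError("truncated header")
    length = data[0]
    return data[1:1 + length]

def fuzz(target, runs=200, seed=1):
    """Feed random byte blobs to `target`; collect inputs that raise."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(blob)
        except Exception as exc:
            failures.append((blob, repr(exc)))
    return failures

crashes = fuzz(parse_record)   # inputs shorter than 4 bytes are recorded
```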
Commercial automated app testing tools have overlapping or
complementary capabilities. For example,
one tool may be based on techniques that find integer overflows
[34] reliably while another tool may be
better at finding weaknesses related to command injection
attacks [35]. Finding the right set of tools to evaluate a requirement can be challenging because of the varied capabilities of diverse commercial and open source tools, and because it can be difficult to determine the capabilities of a given tool. Constraints on time and money may also prevent the evaluator from assembling the best possible evaluation process, especially when the best process would involve extensive human analysis. Tools that provide a high-level risk rating should provide transparency on how the score was derived and which tests were performed. Tools that provide low-level code analysis reports should help analyzers understand how the tool’s findings may impact their security. Therefore, it is important to understand and quantify what each tool and process can do, and to use that knowledge to characterize what the complete evaluation process does or does not provide.

9 The Software Assurance Metrics and Tool Evaluation (SAMATE) test suite [33] provides a baseline for evaluating static analysis tools. Tool vendors may evaluate their own tools with SAMATE, so it is to be expected that, over time, the tools will eventually perform well on precisely the SAMATE-specified tests. Still, the SAMATE test suite can help determine if a tool does what analyzers expect, and the test suite is continually evolving.
In most cases, organizations will need to use multiple
automated tools to meet their testing requirements.
Using multiple tools can provide improved coverage of software
vulnerability detection as well as provide
validation of test results for those tools that provide
overlapping tests (i.e., if two or more tools test for the
same vulnerability).
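The cross-validation idea above can be sketched by merging findings from two tools keyed by (CWE identifier, location): findings reported by both tools corroborate each other, while the remainder need follow-up. The tool names and findings are illustrative.

```python
# Sketch: corroborate overlapping findings from two analysis tools.
# Findings are (CWE id, location) pairs; all values are illustrative.

tool_a = {("CWE-89", "LoginActivity.java:42"),   # SQL injection
          ("CWE-319", "Net.java:10")}            # cleartext transmission
tool_b = {("CWE-89", "LoginActivity.java:42"),
          ("CWE-798", "Keys.java:7")}            # hard-coded credentials

corroborated = tool_a & tool_b                   # reported by both tools
needs_review = (tool_a | tool_b) - corroborated  # single-tool findings
```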
3.3 Sharing Results
The sharing of an organization's findings for an app can greatly
reduce the duplication and cost of app
vetting efforts for other organizations. Information sharing
within the software assurance community is
vital and can help analyzers benefit from the collective efforts
of security professionals around the world.
The National Vulnerability Database (NVD) [36] is the U.S.
government repository of standards-based
vulnerability management data represented using the Security
Content Automation Protocol (SCAP) [37].
This data enables automation of vulnerability management,
security measurement, and compliance. The
NVD includes databases of security checklists, security-
related software flaws, misconfigurations,
product names, and impact metrics. SCAP is a suite of
specifications that standardize the format and
nomenclature by which security software products communicate
software flaw and security configuration
information. SCAP is a multipurpose protocol that supports
automated vulnerability checking, technical
control compliance activities, and security measurement. Goals
for the development of SCAP include
standardizing system security management, promoting
interoperability of security products, and fostering
the use of standard expressions of security content. The CWE
[38] and Common Attack Pattern
Enumeration and Classification (CAPEC) [39] collections can
provide a useful list of weaknesses and
attack approaches to drive a binary or live system penetration
test. Classifying and expressing software
vulnerabilities is an ongoing and developing effort in the
software assurance community, as is how to
prioritize among the various weaknesses that can be in an app [40]. Such prioritization lets an organization confirm that the weaknesses posing the most danger to the app, given its intended use/mission, are addressed by the vetting activity, despite differences in the effectiveness and coverage of the available tools and techniques.
4 App Approval/Rejection
After an app has been tested by an analyzer, the analyzer
generates a report that identifies detected
software vulnerabilities and a risk assessment that expresses the
estimated level of risk associated with
using the app. The report and risk assessment are then made
available to one or more auditors of the
organization. In this section, we describe issues surrounding the
auditing of report and risk assessments
by an auditor, the organization-specific vetting criteria used by
auditors to determine if an app meets the
organization’s context-sensitive requirements, and issues
surrounding the final decision by an approver to
approve or reject an app. Recommendations related to the app
approval/rejection activity are described in
Appendix A.3.
4.1 Report and Risk Auditing
An auditor inspects reports and risk assessments from one or
more analyzers to ensure that the app meets
the organization's general security requirements. To accomplish
this, the auditor must be intimately
familiar with the organization's general security requirements as
well as with the reports and risk
assessments from analyzers. After inspecting analyzers' reports
and risk assessments against general app
security requirements, the auditor also assesses the app with
respect to context-sensitive requirements
using organization-specific vetting criteria. After assessing the
app's satisfaction or violation of both
general and context-specific security requirements, the auditor
then generates a recommendation for
approving or rejecting the app for deployment based on the
app's security posture and makes this
available to the organization's approver.
One of the main issues related to report and risk auditing stems
from the difficulty in collating and
interpreting different reports and risk assessments from multiple
analyzers due to the wide variety of
security-related definitions, semantics, nomenclature, and
metrics. For example, an analyzer may classify
the estimated risk for using an app as low, moderate, high, or
severe risk while another analyzer classifies
estimated risk as pass, warning, or fail. Note that it is in the
best interest of analyzers to provide
assessments that are relatively intuitive and easy to interpret by
auditors. Otherwise, organizations will
likely select other analyzers that generate reports and
assessments that their auditors can understand.
While some standards exist for expressing risk assessment (e.g.,
the Common Vulnerability Scoring
System [41]) and vulnerability reports (e.g., Common
Vulnerability Reporting Framework [42]), the
current adoption of these standards by analyzers is low. To
address this issue, it is recommended that an
organization identify analyzers that leverage vulnerability
reporting and risk assessment standards. If this
is not possible, it is recommended that the organization provide
sufficient training to auditors on both the
security requirements of the organization as well as the
interpretation of reports and risk assessments
generated by analyzers.
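The collation problem above, one analyzer reporting low/moderate/high/severe while another reports pass/warning/fail, can be eased by mapping each analyzer's scale onto a common ordinal scale. The mappings below are assumptions an organization would tune to its own risk threshold.

```python
# Sketch: normalize analyzer-specific risk ratings onto one ordinal
# scale so an auditor can compare them. Mappings are illustrative.

NORMALIZE = {
    "analyzer1": {"low": 1, "moderate": 2, "high": 3, "severe": 4},
    "analyzer2": {"pass": 1, "warning": 2, "fail": 4},
}

def normalized_risk(assessments):
    """assessments: (analyzer, rating) pairs -> worst normalized score."""
    return max(NORMALIZE[name][rating.lower()]
               for name, rating in assessments)

worst = normalized_risk([("analyzer1", "moderate"), ("analyzer2", "fail")])
```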
The methods used by an auditor to derive a recommendation for
approving or rejecting an app are based on
a number of factors including the auditor's confidence in the
analyzers' assessments and the perceived
level of risk from one or more risk assessments as well as the
auditor's understanding of the organization's
risk threshold, the organization's general and context-sensitive
security requirements, and the reports and
risk assessments from analyzers. In some cases, an organization
will not have any context-sensitive
requirements and thus, auditors will evaluate the security
posture of the app based solely on reports and
risk assessments from analyzers. In such cases, if risk
assessments from all analyzers indicate a low
security risk associated with using an app, an auditor may
adequately justify the approval of the app based
on those risk assessments (assuming that the auditor has
confidence in the analyzers' assessments).
Conversely, if one or more analyzers indicate a high security
risk with using an app, an auditor may
adequately justify the rejection of the app based on those risk
assessments. Cases where moderate, but no
high, levels of risk are assessed for an app by one or more
analyzers will require additional evaluation by
the auditor in order to adequately justify a recommendation for
approving or rejecting the app. To
increase the likelihood of an appropriate recommendation, it is
recommended that the organization use
multiple auditors.
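The decision pattern described above, all-low justifies approval, any high justifies rejection, and intermediate cases require additional evaluation, can be expressed as a simple rule. The thresholds are illustrative policy choices, not fixed requirements.

```python
# Sketch of an auditor's recommendation rule over analyzer risk levels.
# The policy encoded here is an illustrative assumption.

def recommend(risk_levels):
    """Map a set of analyzer risk ratings to a recommendation."""
    levels = {r.lower() for r in risk_levels}
    if "high" in levels or "severe" in levels:
        return "reject"
    if levels <= {"low"}:
        return "approve"
    return "manual-review"       # moderate findings need human judgment

decision = recommend(["low", "moderate"])   # -> "manual-review"
```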
4.2 Organization-Specific Vetting Criteria
As described in Section 2.3.1, a context-sensitive
requirement is an app security requirement that
specifies how apps should be used by the organization to maintain the organization's security posture. For
an app, the satisfaction or violation of context-sensitive
requirements cannot be determined by analyzers
but instead must be determined by an auditor using
organization-specific vetting criteria. Such criteria
include:
• Requirements: Organizational and government requirements, security policies, privacy policies, acceptable use policies, and social media guidelines that are applicable to the organization.
• Provenance: The identity of the developer, the developer's organization, the developer's reputation, date received, marketplace/app store consumer reviews, etc.
• Data Sensitivity: The sensitivity of data collected, stored, and/or transmitted by the app.
• App Criticality: How critical the app is to the organization's business processes.
• Target Hardware: The devices and configuration on which the app will be deployed.
• Operational Environment: The intended operational environment of the app (e.g., general public use vs. sensitive military environment).
• Digital Signatures: Signatures applied to the app binaries or packages.10
o User Guide: When available, the app’s user guide assists
testing by specifying the expected
functionality and expected behaviors. This is simply a statement
from the developer describing
what they claim their app does and how it does it.
o Test Plans: Reviewing the developer’s test plans may help
focus app vetting by identifying any
areas that have not been tested or were tested inadequately. A
developer could opt to submit a test
oracle in certain situations to demonstrate its internal test
effort.
o Testing Results: Code review results and other testing
results will indicate which security
standards were followed. For example, if an application threat
model was created, this should be
submitted. This will list weaknesses that were identified and
should have been addressed during
design and coding of the app.
10 The level of assurance provided by digital signatures varies
widely. For example, one organization might have stringent
digital signature requirements that provide a high degree of
trust, while another organization might allow self-signed
certificates to be used, which do not provide any level of trust.
o Service-Level Agreement: If an app was developed for an
organization by a third party, a
Service-Level Agreement (SLA) may have been included as part
of the vendor contract. This
contract should require the app to be compatible with the
organization’s security policy.
Some information can be gleaned from app documentation in
some cases, but even if documentation does
exist, it might lack technical clarity and/or use jargon specific
to the circle of users who would normally
purchase the app. Since the documentation for different apps
will be structured in different ways, it may
also be time-consuming to find the information for evaluation.
Therefore, a standardized questionnaire
might be appropriate for determining the software’s purpose and
assessing an app developer’s efforts to
address security weaknesses. Such questionnaires aim to
identify software quality issues and security
weaknesses by helping developers address questions from end
users/adopters about their software
development processes. For example, developers can use the
Department of Homeland Security (DHS)
Custom Software Questionnaire [43] to answer questions such
as "Does your software validate inputs from
untrusted resources?" and "What threat assumptions were made
when designing protections for your
software?" Another useful question (not included in the DHS
questionnaire) is "Does your app access a
network application programming interface (API)?" Note that
such questionnaires are feasible to use only
in certain circumstances (e.g., when source code is available
and when developers can answer questions).
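A standardized questionnaire of this kind can be represented as structured data so that answers are collected and evaluated uniformly across apps. The sketch below is a minimal illustration: the question texts are taken or paraphrased from this section, while the scoring rule (flag any missing or "no" answer for auditor follow-up) is a hypothetical policy choice, not part of any standard.

```python
# Minimal sketch of a standardized developer questionnaire.
# The flagging rule here is an illustrative assumption, not a
# requirement of the DHS questionnaire or SP 800-163.

QUESTIONS = [
    "Does your software validate inputs from untrusted resources?",
    "What threat assumptions were documented for your protections?",
    "Does your app access a network API?",
]

def evaluate(answers: dict[str, bool]) -> list[str]:
    """Return the questions whose answers warrant auditor follow-up."""
    flagged = []
    for question in QUESTIONS:
        # Unanswered questions are treated as needing follow-up.
        if not answers.get(question, False):
            flagged.append(question)
    return flagged
```

A uniform structure like this lets auditors compare responses across developers rather than hunting through differently organized documentation.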
Known flaws in app design and coding may be reported in
publicly accessible vulnerability databases,
such as NVD.11 Before conducting the full vetting process for
a publicly available app, auditors should
check one or more vulnerability databases to determine if there
are known flaws in the corresponding
version of the app. If one or more serious flaws have already
been discovered, this alone might be
sufficient grounds to reject the version of the app for
organizational use, thus allowing the rest of the
vetting process to be skipped. However, in most cases such
flaws will not be known, and the full vetting
process will be needed, because there are many forms
of vulnerabilities other than known flaws
in app design and coding. Identifying these weaknesses
necessitates first defining the app requirements,
so that deviations from these requirements can be flagged as
weaknesses.
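The known-flaw pre-check described above amounts to looking up the exact app and version against records drawn from a vulnerability database. The sketch below uses a hard-coded table as stand-in data; a real implementation would query NVD (or a local mirror) for CVE records rather than maintain such a table by hand, and the rejection rule is an illustrative assumption.

```python
# Sketch of a pre-vetting check against known flaws.
# KNOWN_FLAWS is illustrative, hard-coded data; real checks would
# consult a vulnerability database such as NVD, keyed by CVE IDs.

KNOWN_FLAWS = {
    ("exampleapp", "1.2"): ["CVE-0000-0001 (serious)"],
    ("exampleapp", "1.3"): [],
}

def precheck(app: str, version: str) -> str:
    flaws = KNOWN_FLAWS.get((app.lower(), version), [])
    if flaws:
        # One or more known serious flaws can justify rejecting this
        # version outright, skipping the rest of the vetting process.
        return "reject: " + ", ".join(flaws)
    # No known flaws: full vetting is still required, since most
    # weaknesses are never recorded in public databases.
    return "proceed to full vetting"
```

Note that the "proceed" outcome is the common case; the pre-check only short-circuits the process when public records already condemn the version.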
4.3 Final Approval/Rejection
Ultimately, the decision to approve or reject an
app for deployment rests with the organization's approver.
After receiving recommendations from one or more auditors that
describe the security posture of the app, the approver weighs
this information along with additional
non-security criteria, including budgetary, policy, and
mission-specific considerations, to render the
organization's official decision to approve or reject the app.
After this decision is made, the organization
should then follow procedures for handling the approval or
rejection of the app. These procedures must be
specified by the organization prior to implementing an app
vetting process. If an app is approved,
procedures must be defined that specify how the approved app
should be processed and ultimately
deployed onto the organization's devices. For example, a
procedure may specify the steps needed to
digitally sign an app before being submitted to the administrator
for deployment onto the organization's
devices. Similarly, if an app is rejected, a procedure may
specify the steps needed to identify an
alternative app or to resolve detected vulnerabilities with the
app developer. Procedures that define the
steps associated with approving or rejecting an app should be
included in the organization's app security
policies.
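The post-decision procedures described above can be sketched as a small workflow in which the approver's decision selects the organization-defined next steps. The states and procedure strings below are hypothetical; they simply mirror the examples given in the text (signing an approved app for deployment, or pursuing alternatives for a rejected one).

```python
# Hypothetical sketch of post-decision procedures in an app vetting
# process. The specific steps mirror the examples in the text and are
# not prescribed by SP 800-163.

def handle_decision(app: str, approved: bool) -> list[str]:
    """Return the organization-defined steps that follow a decision."""
    if approved:
        return [
            f"digitally sign {app}",
            f"submit {app} to administrator for deployment",
        ]
    return [
        f"record rejection of {app} in app security policy workflow",
        "identify alternative app or resolve vulnerabilities with developer",
    ]
```

Encoding the procedures this way makes the point in the text concrete: the steps must exist before the vetting process runs, so a decision always has a defined follow-up.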
11 Vulnerability databases generally reference vulnerabilities
by their Common Vulnerabilities and Exposures (CVE)
identifier. For more information on CVE, see [44].
Appendix A—Recommendations
This appendix describes recommendations for vetting the
security of mobile applications. These
recommendations are described within the context of the
planning of an app vetting process
implementation as well as the app testing and app
approval/rejection activities.
A.1 Planning
o Develop security requirements that are specific to mobile apps
(e.g., identify the potential security impact of mobile apps
on the organization's computing, networking, and data
resources).
o Understand the mobile device hardware and
operating system security controls, for
example an encrypted file system, and identify which security
and privacy requirements can be
addressed by the mobile device itself.
o Understand mobile security technologies,
such as Mobile Device Management
(MDM) solutions, and identify which security and privacy
requirements can be addressed by these
technologies.
o Perform a risk analysis to understand what threats are mitigated
through the technical and operational controls. Identify
potential security and privacy risks that are
not mitigated through these technical and operational controls.
o Develop app security requirements by identifying general and context-sensitive
requirements.
o Understand the limitations of automated analysis tools and the value of human involvement in
an app vetting process.
o Budget appropriately for an app vetting process.
o Staff personnel, particularly auditors, with appropriate
expertise.
A.2 App Testing
o Understand the security implications
surrounding the integrity, intellectual property, and licensing
issues when submitting an app to an
analyzer.
o Ensure that apps are transmitted to an analyzer over an
encrypted channel (e.g., SSL) and that apps are
stored on a secure machine at the analyzer's location. In
addition, ensure that only authorized users
have access to that machine.
o Select analyzers that satisfy the security requirements of the
organization.
o Use analyzers capable of determining the satisfaction or violation of
general app security requirements.
o Ensure that app updates are tested.
A.3 App Approval/Rejection
o Use analyzers that employ a transparent
risk assessment methodology, or that
provide intuitive and easy-to-interpret reports and risk
assessments.
o Base audit recommendations on the
organization's security requirements and
interpretation of analyzer reports and risk assessments. Use
multiple auditors to increase the likelihood of
appropriate recommendations.
o Identify the organization-specific vetting criteria necessary for
vetting context-sensitive app security
requirements.
o Monitor vulnerability databases, mailing lists, and other publicly
available security vulnerability reporting
repositories to keep abreast of new developments that may
impact the security of mobile devices and
mobile apps.
Appendix B—Android App Vulnerability Types
This appendix identifies vulnerabilities specific to apps running
on Android mobile devices. The scope of
this appendix includes app vulnerabilities for Android-based
mobile devices running apps written in Java.
The scope does not include vulnerabilities in the mobile
platform hardware and communications
networks. Although some of the vulnerabilities described below
are common across mobile device
environments, this appendix focuses only on Android-specific
vulnerabilities.
The vulnerabilities in this appendix are broken into three
hierarchical levels, A, B, and C. The A level is
referred to as the vulnerability class and is the broadest
description for the vulnerabilities specified under
that level. The B level is referred to as the sub-class and
attempts to narrow down the scope of the
vulnerability class into a smaller, common group of
vulnerabilities. The C level specifies the individual
vulnerabilities that have been identified. The purpose of this
hierarchy is to guide the reader to the
type of vulnerability they are looking for as quickly as possible.
The A level general categories of Android app vulnerabilities
are listed below:
Table 1. Android Vulnerabilities, A Level
Type: Incorrect Permissions
Description: Permissions allow accessing controlled functionality,
such as the camera or GPS, and are requested in the program.
Permissions can be implicitly granted to an app without the
user's consent.
Negative Consequence: An app with too many permissions may perform
unintended functions outside the scope of the app's intended
functionality. Additionally, the permissions are vulnerable to
hijacking by another app. If too few permissions are granted,
the app will not be able to perform the functions required.

Type: Exposed Communications
Description: Internal communications protocols are the means by
which an app passes messages internally within the device, either
to itself or to other apps. External communications allow
information to leave the device.
Negative Consequence: Exposed internal communications allow apps
to gather unintended information and inject new information.
Exposed external communications (data network, Wi-Fi, Bluetooth,
NFC, etc.) leave information open to disclosure or
man-in-the-middle attacks.

Type: Potentially Dangerous Functionality
Description: Controlled functionality that accesses system-critical
resources or the user's personal information. This functionality
can be invoked through API calls or hard coded into an app.
Negative Consequence: Unintended functions could be performed
outside the scope of the app's functionality.

Type: App Collusion
Description: Two or more apps passing information to each other in
order to increase the capabilities of one or both apps beyond
their declared scope.
Negative Consequence: Collusion can allow apps to obtain data that
was unintended, such as a gaming app obtaining access to the
user's contact list.

Type: Obfuscation
Description: Functionality or control flows that are hidden or
obscured from the user. For the purposes of this appendix,
obfuscation was defined by three criteria: external library
calls, reflection, and native code usage.
Negative Consequence: 1. External libraries can contain unexpected
and/or malicious functionality. 2. Reflective calls can obscure
the control flow of an app and/or subvert permissions within an
app. 3. Native code (code written in languages other than Java in
Android) can perform unexpected and/or malicious functionality.

Type: Excessive Power Consumption
Description: Excessive functions or unintended apps running on a
device which intentionally or unintentionally drain the battery.
Negative Consequence: Shortened battery life could affect the
ability to perform mission-critical functions.

Type: Traditional Software Vulnerabilities
Description: All vulnerabilities associated with traditional Java
code, including: Authentication and Access Control, Buffer
Handling, Control Flow Management, Encryption and Randomness,
Error Handling, File Handling, Information Leaks, Initialization
and Shutdown, Injection, Malicious Logic, Number Handling, and
Pointer and Reference Handling.
Negative Consequence: Common consequences include unexpected
outputs, resource exhaustion, denial of service, etc.
The table below shows the hierarchy of Android app
vulnerabilities from A level to C level.
Table 2. Android Vulnerabilities by Level
Permissions
  o Over Granting: Over Granting in Code; Over Granting in API
  o Under Granting: Under Granting in Code; Under Granting in API
  o Developer Created Permissions: Developer Created in Code;
    Developer Created in API
  o Implicit Permission: Granted through API; Granted through Other
    Permissions; Granted through Grandfathering
Exposed Communications
  o External Communications: Bluetooth; GPS; Network/Data
    Communications; NFC Access
  o Internal Communications: Unprotected Intents; Unprotected
    Activities; Unprotected Services; Unprotected Content Providers;
    Unprotected Broadcast Receivers; Debug Flag
Potentially Dangerous Functionality
  o Direct Addressing: Memory Access; Internet Access
  o Potentially Dangerous API: Cost Sensitive APIs; Personal
    Information APIs; Device Management APIs
  o Privilege Escalation: Altering File Privileges; Accessing Super
    User/Root
App Collusion
  o Content Provider/Intents: Unprotected Content Providers;
    Permission Protected Content Providers; Pending Intents
  o Broadcast Receiver: Broadcast Receiver for Critical Messages
  o Data Creation/Changes/Deletion: Creation/Changes/Deletion to
    File Resources; Creation/Changes/Deletion to Database Resources
  o Number of Services: Excessive Checks for Service State
Obfuscation
  o Library Calls: Use of Potentially Dangerous Libraries;
    Potentially Malicious Libraries Packaged but Not Used
  o Native Code Detection
  o Reflection
  o Packed Code
Excessive Power Consumption
  o CPU Usage
  o I/O
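Over granting, the first sub-class in Table 2, can be checked mechanically by comparing the permissions an app declares against the set an organization considers justified for that app. The sketch below parses a decoded AndroidManifest.xml with Python's standard library; the manifest content and the allowed set are illustrative assumptions (on-device manifests are binary and must be decoded before text parsing like this).

```python
import xml.etree.ElementTree as ET

# Namespace used for Android attributes in manifest files.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Illustrative decoded manifest declaring two real Android permissions.
MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_CONTACTS"/>
</manifest>"""

def declared_permissions(manifest_xml: str) -> set[str]:
    """Collect every permission the manifest requests."""
    root = ET.fromstring(manifest_xml)
    return {
        elem.get(ANDROID_NS + "name")
        for elem in root.findall("uses-permission")
    }

def over_granted(manifest_xml: str, allowed: set[str]) -> set[str]:
    """Permissions declared by the app but not justified by its function."""
    return declared_permissions(manifest_xml) - allowed
```

For an app whose function only requires network access, the contacts permission above would be flagged for auditor review as a potential over grant.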
Appendix C—iOS App Vulnerability Types
This appendix identifies and defines the various types of
vulnerabilities that are specific to apps running
on mobile devices utilizing the Apple iOS operating system.
The scope does not include vulnerabilities in
the mobile platform hardware and communications networks.
Although some of the vulnerabilities
described below are common across mobile device
environments, this appendix focuses on iOS-specific
vulnerabilities.
The vulnerabilities in this appendix are broken into three
hierarchical levels, A, B, and C. The A level is
referred to as the vulnerability class and is the broadest
description for the vulnerabilities specified under
that level. The B level is referred to as the sub-class and
attempts to narrow down the scope of the
vulnerability class into a smaller, common group of
vulnerabilities. The C level specifies the individual
vulnerabilities that have been identified. The purpose of this
hierarchy is to guide the reader to the
type of vulnerability they are looking for as quickly as possible.
The A level general categories of iOS app vulnerabilities are
listed below:
Table 3. iOS Vulnerability Descriptions, A Level
Type: Privacy
Description: Similar to Android permissions, iOS privacy settings
allow for user-controlled app access to sensitive information.
This includes: contacts, calendar information, tasks, reminders,
photos, and Bluetooth access.
Negative Consequence: iOS lacks the ability to create shared
information and protect it. All paths of information sharing are
controlled by the iOS app framework and may not be extended.
Unlike Android, these permissions may be modified later for
individual permissions and apps.

Type: Exposed Communications (Internal and External)
Description: Internal communications protocols allow apps to
process information and communicate with other apps. External
communications allow information to leave the device.
Negative Consequence: Exposed internal communications allow apps
to gather unintended information and inject new information.
Exposed external communications (data network, Wi-Fi, Bluetooth,
etc.) leave information open to disclosure or man-in-the-middle
attacks.

Type: Potentially Dangerous Functionality
Description: Controlled functionality that accesses system-critical
resources or the user's personal information. This functionality
can be invoked through API calls or hard coded into an app.
Negative Consequence: Unintended functions could be performed
outside the scope of the app's functionality.

Type: App Collusion
Description: Two or more apps passing information to each other in
order to increase the capabilities of one or both apps beyond
their declared scope.
Negative Consequence: Collusion can allow apps to obtain data that
was unintended, such as a gaming app obtaining access to the
user's contact list.

Type: Obfuscation
Description: Functionality or control flow that is hidden or
obscured from the user. For the purposes of this appendix,
obfuscation was defined by three criteria: external library
calls, reflection, and packed code.
Negative Consequence: 1. External libraries can contain unexpected
and/or malicious functionality. 2. Reflective calls can obscure
the control flow of an app and/or subvert permissions within an
app. 3. Packed code prevents code reverse engineering and can be
used to hide malware.

Type: Excessive Power Consumption
Description: Excessive functions or unintended apps running on a
device which intentionally or unintentionally drain the battery.
Negative Consequence: Shortened battery life could affect the
ability to perform mission-critical functions.

Type: Traditional Software Vulnerabilities
Description: All vulnerabilities associated with Objective-C and
other languages used. This includes: Authentication and Access
Control, Buffer Handling, Control Flow Management, Encryption and
Randomness, Error Handling, File Handling, Information Leaks,
Initialization and Shutdown, Injection, Malicious Logic, Number
Handling, and Pointer and Reference Handling.
Negative Consequence: Common consequences include unexpected
outputs, resource exhaustion, denial of service, etc.
The table below shows the hierarchy of iOS app vulnerabilities
from A level to C level.
Table 4. iOS Vulnerabilities by Level
Privacy
  o Sensitive Information: Contacts; Calendar Information; Tasks;
    Reminders; Photos; Bluetooth Access
Exposed Communications
  o External Communications: Telephony; Bluetooth; GPS; SMS/MMS;
    Network/Data Communications
  o Internal Communications: Abusing Protocol Handlers
Potentially Dangerous Functionality
  o Direct Memory Mapping: Memory Access; File System Access
  o Potentially Dangerous API: Cost Sensitive APIs; Device
    Management APIs; Personal Information APIs
App Collusion
  o Data Change: Changes to Shared File Resources; Changes to
    Shared Database Resources; Changes to Shared Content Providers
  o Data Creation/Deletion: Creation/Deletion to Shared File
    Resources
Obfuscation
  o Number of Services: Excessive Checks for Service State
  o Native Code: Potentially Malicious Libraries Packaged but not
    Used; Use of Potentially Dangerous Libraries
  o Reflection: Identification; Class Introspection
  o Library Calls: Constructor Introspection; Field Introspection;
    Method Introspection
  o Packed Code
Excessive Power Consumption
  o CPU Usage
  o I/O
Appendix D—Glossary
Selected terms used in this publication are defined below.
Administrator A member of the organization who is responsible
for deploying, maintaining, and
securing the organization's mobile devices as well as ensuring
that deployed devices
and their installed apps conform to the organization's security
requirements.
Analyzer A service, tool, or human that tests an app for specific
software vulnerabilities.
App Security
Requirement
A requirement that ensures the security of an app. A general
requirement is an app
security requirement that defines software characteristics or
behavior that an app must
exhibit to be considered secure. A context-sensitive requirement
is an app security
requirement that is specific to the organization. An
organization's app security
requirements comprise a subset of general and context-sensitive
requirements.
App Vetting Process The process of verifying that an app meets
an organization's security requirements. An
app vetting process comprises app testing and app
approval/rejection activities.
Approver A member of the organization who determines the
organization's official approval or
rejection of an app.
Auditor A member of the organization who inspects reports and
risk assessments from one or
more analyzers as well as organization-specific criteria to
ensure that an app meets the
security requirements of the organization.
Dynamic Analysis Detecting software vulnerabilities by
executing an app using a set of input use cases
and analyzing the app’s runtime behavior.
Fault Injection Testing Attempting to artificially cause an error
with an app during execution by forcing it to
experience corrupt data or corrupt internal states to see how
robust it is against these
simulated failures.
Functionality Testing Verifying that an app’s user interface,
content, and features perform and display as
designed.
Personally Identifiable
Information
Any information about an individual that can be used to
distinguish or trace an
individual's identity and any other information that is linked or
linkable to an
individual [45].
Risk Assessment A value that defines an analyzer's estimated
level of security risk for using an app.
Risk assessments are typically based on the likelihood that a
detected vulnerability
will be exploited and the impact that the detected vulnerability
may have on the app or
its related device or network. Risk assessments are typically
represented as categories
(e.g., low-, moderate-, and high-risk).
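The likelihood-and-impact combination this definition describes is often reduced to a single category with a simple lookup rule. The sketch below is a hypothetical illustration of that reduction using the low/moderate/high categories mentioned above; the "take the higher level" rule is a common convention, not a scale defined by this publication.

```python
# Hypothetical reduction of likelihood x impact to a risk category.
# The combination rule is an illustrative assumption.

LEVELS = ("low", "moderate", "high")

def risk_category(likelihood: str, impact: str) -> str:
    """Map ordinal likelihood and impact levels to a single category."""
    # Take the higher of the two ordinal levels: a conservative rule
    # that never under-reports either component of the assessment.
    rank = max(LEVELS.index(likelihood), LEVELS.index(impact))
    return LEVELS[rank]
```

An analyzer might instead weight the two inputs differently; the point is only that the categorical result is derived from both likelihood and impact.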
Static Analysis Detecting software vulnerabilities by examining
the app source code and binary and
attempting to reason over all possible behaviors that might arise
at runtime.
Software Assurance The level of confidence that software is
free from vulnerabilities, either intentionally
designed into the software or accidentally inserted at any time
during its life cycle and
that the software functions in the intended manner.
Software Correctness
Testing
The process of executing a program with the intent of finding
errors, aimed primarily at improving quality assurance,
verifying and validating described
functionality, or estimating reliability.
Software Vulnerability A security flaw, glitch, or weakness
found in software that can be exploited by an
attacker.
Appendix E—Acronyms and Abbreviations
Selected acronyms and abbreviations used in this publication
are defined below.
3G 3rd Generation
API Application Programming Interface
CAPEC Common Attack Pattern Enumeration and Classification
CMVP Cryptographic Module Validation Program
CPU Central Processing Unit
CVE Common Vulnerabilities and Exposures
CWE Common Weakness Enumeration
DHS Department of Homeland Security
DoD Department of Defense
DOJ Department of Justice
EULA End User License Agreement
FIPS Federal Information Processing Standard
FISMA Federal Information Security Management Act
GPS Global Positioning System
I/O Input/Output
IT Information Technology
ITL Information Technology Laboratory
JVM Java Virtual Machine
LTE Long-Term Evolution
MDM Mobile Device Management
NFC Near Field Communications
NIST National Institute of Standards and Technology
NVD National Vulnerability Database
OMB Office of Management and Budget
OS Operating System
PII Personally Identifiable Information
QoS Quality of Service
SAMATE Software Assurance Metrics And Tool Evaluation
SCAP Security Content Automation Protocol
SD Secure Digital
SDK Software Development Kit
SLA Service-Level Agreement
SMS Short Message Service
SP Special Publication
UI User Interface
VPN Virtual Private Network
Wi-Fi Wireless Fidelity
Appendix F—References
References for this publication are listed below.
[1] Committee on National Security Systems (CNSS)
Instruction 4009, National Information
Assurance Glossary, April 2010.
https://www.cnss.gov/CNSS/issuances/Instructions.cfm
(accessed 12/4/14).
[2] National Aeronautics and Space Administration (NASA)
NASA-STD-8739.8 w/Change 1,
Standard for Software Assurance, July 28, 2004 (revised May 5,
2005).
http://www.hq.nasa.gov/office/codeq/doctree/87398.htm
(accessed 12/4/14).
[3] Radio Technical Commission for Aeronautics (RTCA),
RTCA/DO-178B, Software
Considerations in Airborne Systems and Equipment
Certification, December 1, 1992 (errata
issued March 26, 1999).
[4] International Electrotechnical Commission (IEC), IEC
61508, Functional Safety of
Electrical/Electronic/Programmable Electronic Safety-related
Systems, edition 2.0 (7 parts),
April 2010.
[5] International Organization for Standardization (ISO), ISO
26262, Road vehicles -- Functional
safety (10 parts), 2011.
[6] M. Souppaya and K. Scarfone, Guidelines for Managing the
Security of Mobile Devices in the
Enterprise, NIST Special Publication (SP) 800-124 Revision 1,
June 2013.
http://dx.doi.org/10.6028/NIST.SP.800-124r1.
[7] National Information Assurance Partnership, Protection
Profile for Mobile Device Fundamentals
Version 2.0, September 17, 2014. https://www.niap-
ccevs.org/pp/PP_MD_v2.0/ (accessed
12/4/14).
[8] Joint Task Force Transformation Initiative, Security and
Privacy Controls for Federal
Information Systems and Organizations, NIST Special
Publication (SP) 800-53 Revision 4, April
2013 (updated January 15, 2014).
http://dx.doi.org/10.6028/NIST.SP.800-53r4.
[9] NIST, AppVet Mobile App Vetting System [Web site],
http://csrc.nist.gov/projects/appvet/
(accessed 12/4/14).
[10] Joint Task Force Transformation Initiative, Guide for
Conducting Risk Assessments, NIST
Special Publication (SP) 800-30 Revision 1, September 2012.
http://csrc.nist.gov/publications/nistpubs/800-30-
rev1/sp800_30_r1.pdf (accessed 12/4/14).
[11] G. McGraw, Software Security: Building Security In,
Upper Saddle River, New Jersey: Addison-
Wesley Professional, 2006.
[12] M. Dowd, J. McDonald and J. Schuh, The Art of Software
Security Assessment - Identifying and
Preventing Software Vulnerabilities, Upper Saddle River, New
Jersey: Addison-Wesley
Professional, 2006.
[13] H.G. Rice, “Classes of Recursively Enumerable Sets and
Their Decision Problems,” Transactions
of the American Mathematical Society 74, pp. 358-366, 1953.
http://dx.doi.org/10.1090/S0002-
9947-1953-0053041-6.
[14] J.H. Allen, S. Barnum, R.J. Ellison, G. McGraw, and N.R.
Mead, Software Security Engineering:
a Guide for Project Managers, Upper Saddle River, New Jersey:
Addison-Wesley, 2008.
[15] Microsoft Corporation, The STRIDE Threat Model [Web
site], http://msdn.microsoft.com/en-
us/library/ee823878%28v=cs.20%29.aspx, 2005 (accessed
12/4/14).
[16] Trike - open source threat modeling methodology and tool
[Web site], http://www.octotrike.org
(accessed 12/4/14).
[17] K. Scarfone, M. Souppaya, A. Cody and A. Orebaugh,
Technical Guide to Information Security
Testing and Assessment, NIST Special Publication (SP) 800-
115, September 2008.
http://csrc.nist.gov/publications/nistpubs/800-115/SP800-
115.pdf (accessed 12/4/14).
[18] D. C. Gause and G. M. Weinberg, Exploring Requirements:
Quality Before Design, New York:
Dorset House Publishing, 1989.
[19] B. Nuseibeh and S. Easterbrook, “Requirements
engineering: a roadmap,” Proceedings of the
Conference on The Future of Software Engineering (ICSE '00),
Limerick, Ireland, June 7-9, 2000,
New York: ACM, pp. 35-46.
http://dx.doi.org/10.1145/336512.336523.
[20] NIST, Cryptographic Module Validation Program (CMVP)
[Web site],
http://csrc.nist.gov/groups/STM/cmvp/ (accessed 12/4/14).
[21] E. Barker and Q. Dang, Recommendation for Key
Management, Part 3: Application-Specific Key
Management Guidance, NIST Special Publication (SP) 800-57
Part 3 Revision 1 (DRAFT), May
2014. http://csrc.nist.gov/publications/drafts/800-
57pt3_r1/sp800_57_pt3_r1_draft.pdf (accessed
12/4/14).
[22] M. Pezzè and M. Young, Software Testing and Analysis:
Process, Principles and Techniques,
Hoboken, New Jersey: John Wiley & Sons, Inc., 2008.
[23] G.G. Schulmeyer, ed., Handbook of Software Quality
Assurance, 4th Edition, Norwood,
Massachusetts: Artech House, Inc., 2008.
[24] B.B. Agarwal, S.P. Tayal and M. Gupta, Software
Engineering and Testing, Sudbury,
Massachusetts: Jones and Bartlett Publishers, 2010.
[25] J.R. Maximoff, M.D. Trela, D.R. Kuhn and R. Kacker, A
Method for Analyzing System State-
space Coverage within a t-Wise Testing Framework, 4th Annual
IEEE International Systems
Conference, April 5-8, 2010, San Diego, California, pp. 598-
603.
http://dx.doi.org/10.1109/SYSTEMS.2010.5482481.
[26] G.J. Myers, The Art of Software Testing, second edition,
Hoboken, New Jersey: John Wiley &
Sons, Inc., 2004.
[27] OW2 Consortium, ASM - Java bytecode manipulation and
analysis framework [Web site],
http://asm.ow2.org/ (accessed 12/4/14).
[28] H. Chen, T. Zou and D. Wang, “Data-flow based
vulnerability analysis and java bytecode,” 7th
WSEAS International Conference on Applied Computer Science
(ACS’07), Venice, Italy,
November 21-23, 2007, pp. 201-207. http://www.wseas.us/e-
library/conferences/2007venice/papers/570-602.pdf (accessed
12/4/14).
[29] University of Maryland, FindBugs: Find Bugs in Java
Programs - Static analysis to look for bugs
in Java code [Web site], http://findbugs.sourceforge.net/
(accessed 12/4/14).
[30] R.A. Shah, Vulnerability Assessment of Java Bytecode,
Master’s Thesis, Auburn University,
December 16, 2005. http://hdl.handle.net/10415/203.
[31] G. Zhao, H. Chen and D. Wang, “Data-Flow Based
Analysis of Java Bytecode Vulnerability,” In
Proceedings of the Ninth International Conference on Web-Age
Information Management (WAIM
'08), Zhanjiajie, China, July 20-22, 2008, Washington, DC:
IEEE Computer Society, pp. 647-653.
http://doi.ieeecomputersociety.org/10.1109/WAIM.2008.99.
[32] G. Balakrishnan, WYSINWYX: What You See Is Not What
You eXecute, Ph.D. diss. [and Tech.
Rep. TR-1603], Computer Sciences Department, University of
Wisconsin-Madison, Madison,
Wisconsin, August 2007.
http://research.cs.wisc.edu/wpis/papers/balakrishnan_thesis.pdf
(accessed 12/4/14).
[33] NIST, SAMATE: Software Assurance Metrics And Tool
Evaluation [Web site],
http://samate.nist.gov/Main_Page.html (accessed 12/4/14).
[34] The MITRE Corporation, Common Weakness Enumeration
CWE-190: Integer Overflow or
Wraparound [Web site],
http://cwe.mitre.org/data/definitions/190.html (accessed
12/4/14).
[35] The MITRE Corporation, Common Weakness Enumeration
CWE-77: Improper Neutralization of
Special Elements used in a Command ('Command Injection')
[Web site],
http://cwe.mitre.org/data/definitions/77.html (accessed
12/4/14).
[36] NIST, National Vulnerability Database (NVD) [Web site],
http://nvd.nist.gov/ (accessed 12/4/14).
[37] NIST, Security Content Automation Protocol (SCAP) [Web
site], http://scap.nist.gov/ (accessed
12/4/14).
[38] The MITRE Corporation, Common Weakness Enumeration
[Web site], http://cwe.mitre.org/
(accessed 12/4/14).
[39] The MITRE Corporation, Common Attack Pattern
Enumeration and Classification [Web site],
https://capec.mitre.org/ (accessed 12/4/14).
[40] R.A. Martin and S.M. Christey, “The Software Industry’s
‘Clean Water Act’ Alternative,” IEEE
Security & Privacy 10(3), pp. 24-31, May-June 2012.
http://dx.doi.org/10.1109/MSP.2012.3.
[41] Forum of Incident Response and Security Teams (FIRST),
Common Vulnerability Scoring System
(CVSS-SIG) [Web site], http://www.first.org/cvss (accessed
12/4/14).
[42] Industry Consortium for Advancement of Security on the
Internet (ICASI), The Common
Vulnerability Reporting Framework (CVRF) [Web site],
http://www.icasi.org/cvrf (accessed
12/4/14).
[43] U.S. Department of Homeland Security, Software & Supply
Chain Assurance – Community
Resources and Information Clearinghouse (CRIC):
Questionnaires [Web site],
https://buildsecurityin.us-cert.gov/swa/forums-and-working-
groups/acquisition-and-
outsourcing/resources#ques (accessed 12/4/14).
[44] The MITRE Corporation, Common Vulnerabilities and
Exposures (CVE) [Web site],
https://cve.mitre.org/ (accessed 12/4/14).
[45] E. McCallister, T. Grance and K. Scarfone, Guide to
Protecting the Confidentiality of Personally
Identifiable Information (PII), NIST Special Publication (SP)
800-122, April 2010.
http://csrc.nist.gov/publications/nistpubs/800-122/sp800-
122.pdf (accessed 12/4/14).
Attribution
Trusted Computing Strengthens Cloud Authentication by Eghbal
Ghazizadeh, Mazdak Zamani, Jamalul-lail Ab
Manan, and Mojtaba Alizadeh from The Scientific World
Journal is available under a Creative Commons 3.0
Unported license. Copyright © 2014 Eghbal Ghazizadeh et al.
Hindawi Publishing Corporation
Volume 2014, Article ID 260187, 17 pages
http://dx.doi.org/10.1155/2014/260187
Research Article
Trusted Computing Strengthens Cloud Authentication
Eghbal Ghazizadeh,1 Mazdak Zamani,1 Jamalul-lail Ab
Manan,2 and Mojtaba Alizadeh1
1 Universiti Teknologi Malaysia, 54100 Kuala Lumpur,
Malaysia
2 MIMOS Berhad, Technology Park Malaysia, 57000 Kuala
Lumpur, Malaysia
Correspondence should be addressed to Eghbal Ghazizadeh;
[email protected]
Received 28 September 2013; Accepted 19 December 2013;
Published 18 February 2014
Academic Editors: J. Shu and F. Yu
Copyright © 2014 Eghbal Ghazizadeh et al. This is an open
access article distributed under the Creative Commons
Attribution
License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is
properly
cited.
Cloud computing is a new generation of technology designed to meet commercial necessities, solve IT management issues, and run appropriate applications. Another entry on the list of cloud functions that has traditionally been handled internally is Identity and Access Management (IAM). Companies encountered IAM as a security challenge as they adopted more cloud technologies. Trust, multitenancy, and trusted computing based on a Trusted Platform Module (TPM) are promising technologies for solving the trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to counter identity theft in the cloud. The proposed model has also been simulated in a .NET environment. Security analysis, simulation, and the BLP confidentiality model are the three ways used to evaluate and analyze our proposed model.
1. Introduction
The modern notion of cloud computing was born with Amazon's EC2 in 2006 in the territory of information technology. Cloud computing emerged out of commercial necessity and makes applications suitable to deliver. According to the NIST definition, "cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources." Resource pooling, on-demand self-service, rapid elasticity, measured service, and broad network access are its five vital features. The cloud has various attributes: it is executable, supplies resources, can be organized, and is infrastructure based. These qualities provide rapid and unrestricted automatic access [1].
"The push for growth in 2011 is leading to changes in emphasis," said M. Chiu, who also noted that IT is now driven by business transformation, or, better said, that digitalization is accelerating. According to statistics, cloud computing and IT management rank strongly among the top ten business and technology priorities of Asian CIOs. Cloud computing, IT management, mobile technology, virtualization, business intelligence, infrastructure, business process management, data management, enterprise applications, collaboration technologies, and network data communication are the top ten technology priorities. The cloud will become the most popular business model worldwide. This shows that business and technology have been dominated by cloud computing; therefore, thousands of new businesses are launched every day around cloud computing technology [2].
1.1. Security and Cloud Computing. While cost and ease of use are two top benefits of the cloud, trust and security are the two main concerns of cloud computing users. Cloud computing has promised protection for sensitive data such as accounting, government, and healthcare records and has brought convenience to end users. Virtualization, multitenancy, elasticity, and data ownership are new issues in cloud computing whose security problems traditional security techniques cannot solve. Trust is one of the most important issues in cloud computing security; indeed, two trust questions arise in cloud computing. The first question is: do cloud users trust cloud computing? The second is: how can a trusted base environment be made for cloud users? When it comes to transferring their businesses to the cloud, people tend to worry about privacy and security. Is the CSP trustworthy? Will the concentrated resources in the cloud be more attractive to hackers? Will customers be locked in to a particular CSP? All these concerns constitute the main obstacle to cloud computing. Based on these questions, and to address these concerns when considering moving critical applications, the cloud must deliver a sufficient and powerful security level for the cloud user in terms of the new security issues in cloud computing [3].
Abbadi and Martin illustrated that trust is not a novel
concept but users need to understand the subjects associated
with cloud computing. Technology and business perspective
are two sides of trust computing. The emerging technologies
that best address these issues must be determined. They
believed that trust means an act of confidence, faith, and
reliance. For example, if a system gives users insufficient information, they will place less trust in it; in contrast, if the system gives users sufficient information, they will trust it well. There are some important issues with trust. Control of our assets is the most important security issue in cloud computing, because users trust a system less when they do not have much control over it. For example, bank customers confidently use an ATM: when they withdraw money from the machine, they trust it [4].
Cloud computing suppliers should prepare a secure
territory for their consumers. A territory is a collection
of computing environments connected by one or more
networks that control the use of a common security policy.
Regulation and standardization are other issues of trust; PGP, X.509, and SAML are three examples of establishing trust by using standards. Access management, trust, identity management, single sign-on and single sign-off, audit and compliance, and configuration management are six patterns
that identify basic security in cloud computing. Computer
researchers believe that trust is one of the most important
parts of security in cloud computing. There are three distinct
ways to prove trust. First, we trust someone because we know
them. Second, we trust someone because that person places trust in us. Third, sometimes we trust someone because
an organization that we trust vouches for that person. In
addition, there are three distinct definitions of trust. First,
trust means the capability of two special groups to define a
trust connection with an authentication authority. Second,
trust identifies that the authority can exchange credentials
(X.509 Certificates). Third, it uses those credentials to secure
messages and create signed security token (SAML). The main
goal of trust is that users can access a service even though that
service does not have knowledge of the users [5].
Trusted computing has grown with technology development and was put forward by the Trusted Computing Group. Trusted computing is the industry's response to growing security problems in the enterprise and is based on a hardware root of trust. From this, enterprise systems, applications, and networks can be made safer and more secure. With trusted computing, the computer or system will reliably act in definite ways, and those behaviors will be enforced by hardware and software once the owner of those systems enables these technologies. Therefore, using trust makes computing environments safer, less prone to viruses and malware, and thus more consistent.
In addition, trusted computing will permit computer systems to offer improved security and efficiency. The main aim of trusted computing is to prepare a framework for data and network security that covers data protection, disaster recovery, encryption, authentication, layered security, identity, and access control [6].
1.2. Cloud Computing and Federated Identity. One of the problems with cloud computing is the management and maintenance of users' accounts. Because of the many advantages of the cloud, such as cost and security, identity management should preserve all of those advantages. Therefore, new proposed models and standards should make use of all the cloud's advantages and prepare an environment that eases the use of all of them.
Private cloud, public cloud, community cloud, and hybrid cloud are the four essential types of cloud computing, as named by cloud users and vendors. By definition, a community cloud shares infrastructure for a definite community of organizations with common concerns, and a private or internal cloud shares cloud services among a classified set of users. Establishing the best digital identity for users is the most important concern of cloud providers. The cloud prepares a large amount of various computing resources; consequently, diverse users in the same system can share their information and work together. Classified identity and common authentication are an essential part of cloud authentication, and federated identity acts as a service based on common authentication; it is broadly used by most important companies like Amazon, Microsoft, Google, IBM, and Yahoo [7].
Today's internet user has about thirteen accounts that require usernames and passwords and enters about eight passwords per day. This becomes a problem: internet users face the load of managing this huge number of accounts and passwords, which leads to password fatigue; indeed, given the burden on human memory, password fatigue may cause users to adopt password management strategies that reduce the security of their protected information. Besides, the Web is site centric: each user account is created and managed in a distinct administrative domain, so profile management and personal content sharing are difficult. SSO, or Web single sign-on, systems have been introduced to address this problem with the site-centric Web. Users, the identity provider, and the service provider are the three parties of SSO, and each has a distinct role in the identity scenario. An IdP collects user identity information and authenticates users, while an RP relies on the authenticated identity to make authorization decisions. OpenID is a promising and open user-centric Web SSO solution. More than one billion OpenID accounts have been enabled to use service providers, according to the OpenID Foundation. Furthermore, the US Government has cooperated with the OpenID Foundation in the Open Government Initiative's pilot adoption of OpenID technology [8].
A single username and password is an essential part of single sign-on protocols, which authenticate users across numerous systems and applications and attempt to address privacy and security issues for web users. Three widespread Web SSO implementations, OpenID, Passport/Live ID, and SAML, will be covered along with details of common weaknesses and issues [9].
Web SSO solutions were initially advanced by numerous educational institutions in the mid-1990s. For example, Cornell University's SideCar, Stanford University's WebAuth, and Yale's Central Authentication System (CAS) were all early pioneers in the field [10].
SSO today has a critical role in cloud security, and it becomes essential to realize how secure the deployed SSO mechanisms truly are. Nevertheless, this is a new idea, and no previous work includes a broad study of commercially deployed web SSO systems, a key to understanding to what extent these real systems are subject to security breaches [11].
In addition, Wang and Shao showed that weaknesses that do not show up at the protocol level can be introduced by what the system essentially allows each SSO party to do [12].
1.3. Background of the Problem. Madsen et al. in [13] defined some problems of federated identity and illustrated that FIM, or Federated Identity Management, established on standards, permits and simplifies the joining of federated organizations in terms of sharing user identity attributes, simplifying authentication, and allowing or denying access according to service access requirements. With the SSO facility, the user authenticates only once to the home identity provider and can then log in to successive service providers within the federated identity. There are some active problems and concerns in a federated identity environment, such as misuse of user identity information through the SSO capability in service providers and identity providers, user identity theft, and the trustworthiness of service providers, identity providers, and the user. It has been identified that, regardless of good architectures, federated identity still has some security problems that should be considered in real-world implementation.
Wang in [9] identified another problem of federated identity, which is switching authentication mechanisms to an SSO solution: further education of the users is required, and the user base may be lost if the transition is not smoothly executed. Worsening the situation is the lack of demand from users for a Web SSO solution. Studies have shown that users are already satisfied with their own password managers.
1.4. Scope and Limitation. This research focuses on trust in
cloud computing. Trust in the cloud is a critical part in cloud
security. As it has been mentioned in the introduction, access
management, trust, identity management, single sign-on and
single sign-off, audit and compliance, and configuration man-
agement are six patterns that identify basic security in cloud
computing. This study will focus on federated identity which
is used for user identity management. In addition, there
are several federated identity architectures for managing this
problem. The hub-and-spoke model, the free-form model, and the hybrid model are three models for implementing federated identity in trusted computing. However, there are some remaining issues and challenges in these trust management models, so this study will focus on the hybrid model to enhance it and propose an architecture.
Stopping Phishing attacks completely is very difficult. Therefore, the aim of this study is to decrease the number of identity thefts and Phishing attacks. The trusted computing applied in this study tries to decrease the success probability of Phishing attacks and identity theft. However, since the TPM is leveraged in OpenID, attacks other than Phishing that target sensitive information as an asset of users are also considered. Furthermore, attacks that compromise users' computers have been ignored in this study, as have key loggers, rootkits, and cookie attacks such as cross-site request forgery (CSRF) and cross-site scripting (XSS) attacks. Finally, attacks which compromise the integrity of the website will not be discussed.
In conclusion, this study's scope is single sign-on authentication using protocols such as SAML, OAuth, and OpenID. As mentioned, users employ these authentication models in many APIs. Visual Studio 2012 and SQL Server 2012 have been used for the simulation of the proposed model.
1.5. Related Work. You and Jun in [14] proposed the Internet Personal Identification Number, or I-PIN, technique to strengthen user authentication in the cloud OpenID environment against the Phishing problem of relying parties, which means users could obtain internet services with OpenID, although, as they found, OpenID has been enforced in some countries. Besides, they analyzed and evaluated their method in comparison with the existing OpenID method, and they recommended some ways to secure the OpenID environment. In their method, to obtain the OpenID service, the user has to choose only one of the three companies which deliver OpenID and the index in order to become a member. If a user receives I-PIN information from the main confirmation authority via the IDP when creating an OpenID, the main authentication problem that OpenID has could be solved.
In addition, they evaluated their study, and after comparison with the existing OpenID, they confirmed that authentication is secure and safe against attack and the exposure of private information. They also examined the Phishing problem and showed that their system could solve the Phishing problem of the existing OpenID, while relying-party authentication guarantees security against the exposure of users' passwords and IDs.
Ding and Wei in [15] proposed a multilevel security
framework of OpenID authentication. They claimed that
their recommended framework is appropriate for all types of rule-based security models. They presented the basic OpenID
infrastructure based on the single sign-on explanations that
included hierarchy, OpenID components, and other security
issues. Next they proposed a security access control structure
for OpenID based on BLP model, which can be used to
resolve the difficulty on the access control of multilevel
security, and finally they store the security label in the XML
document.
Mat Nor et al. in [16] introduced a remote user authentication protocol with both elements of trust and privacy by using the TPM. Ding and Wei investigated the security access control scheme for OpenID based on the BLP model, which can be used to solve the problem of access control in multilevel security by storing the security label in an XML document. Sun et al. [17] discussed the problem of Web SSO
adoption for RPs and argued that solutions in this space
must offer RPs sufficient business incentives and trustworthy
identity services in order to succeed. Fazli BinMat Nor and
Ab Manan [18] proposed an enhanced remote authentication
protocol with hardware based attestation and pseudonym
identity enhancement to mitigate MITM attacks as well as
improving user identity privacy.
Latze and Ultes-Nitsche in [19] proposed using the TPM for ensuring both an e-commerce application's integrity and the binding of user authentication to user credentials and the usage of specific hardware during the authentication process. Urien in [20] illustrated strong authentication (without a password) for OpenID, according to a plug-and-play paradigm, based on SSL smart cards, against Phishing attacks. Khiabani et al. in [21] surveyed previous trust and privacy models in pervasive systems and analyzed them to highlight the importance of trusted computing platforms in tackling their weaknesses.
1.6. Paper Organization. The rest of this paper is organized
as follows. In the next section, we discuss the proposed
solution in detail. Section 3 provides a simulation of trusted
OpenID. In Section 4, we consider issues that may arise
when implementing the solution and analysis based on these
issues in three approaches, and Section 5 summarizes our
conclusions.
2. Proposed Solution
In this section, the proposed procedure will be explained. Programming languages help this study to evaluate the efficiency of the procedure. The waterfall model will be utilized to design and implement the prototype. Thus, firstly, the requirements will be analyzed; secondly, algorithms based on the requirements will be proposed to solve the problem; thirdly, the implementation steps and the essential functions are explained.
First step in designing software is requirement analysis.
There are a few activities that need to be done before
designing the algorithm. This research will be conducted
using an experimental and modeling approach. It will begin
with in-depth reading to understand the state of the art in
the areas of cloud computing authentication. One of the
main requirements of this study is a flow diagram of cloud
authentication. Some of the best techniques that have been
proposed were discussed in the related work. The advantages
and disadvantages of those works were discussed. Therefore
it seems that there are still open issues that require further research.
This model is based on a hybrid model architecture, which is a mix of both the hub-and-spoke and free-form models. The hybrid model will also be found in organizations that mix traditional or legacy computing with a public or private cloud environment. The aim is to make a seamless authentication environment that is based on the hybrid model, bringing some consideration of timetables between private clouds and choosing the best way to achieve single sign-on. As shown in Figure 1, a trusted authority is used to make a good relationship between the IDP and RP in order to prevent identity theft.
Compared with the models proposed in the related work, this model has as its main goals, in order to afford identity management and access control, to authenticate cloud services using the user's privacy policies, to act as an independent third party, and to ensure mutual protection of both clients and providers. Cloud computing, SSO, and trusted computing are highlighted to support the tasks of OpenID authentication.
To avoid a weak and vulnerable link in enterprise secu-
rity, trust must be established in all the components in
the infrastructure. This requires establishing a relationship,
considering the identity of all of the devices and parties
within the trust domain, to exchange the essential key secrets that allow signing any messages sent back and forth.
The novelty lies in the fact that the proposed model has not
been offered by OpenID yet. Combining trusted computing,
cloud computing, and federated identity, the model can
contribute to security and privacy of cloud computing. To our
knowledge, this model has not been proposed so far and we
intend to pursue this model in our future work.
Binary attestation and Direct Anonymous Attestation (DAA) are the two types of remote attestation. Binary attestation is static in nature, while DAA is a cryptographic protocol that allows the remote authentication of a trusted platform whilst preserving the user's privacy. In our proposed model, binary attestation is used for checking the integrity of users, RPs, and IDPs.
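The binary-attestation idea described above can be sketched as follows: a verifier keeps known-good measurement digests and accepts a platform only if every reported digest matches exactly. This is an illustrative Python sketch only (the paper's simulation is in C#); component names and values are invented, and a real TPM would report signed PCR values rather than plain hashes.

```python
import hashlib

# Known-good measurement digests the verifier trusts (illustrative values).
KNOWN_GOOD = {
    "bootloader": hashlib.sha256(b"bootloader-v1").hexdigest(),
    "os-kernel": hashlib.sha256(b"kernel-v1").hexdigest(),
}

def measure(component: bytes) -> str:
    """Hash a component image, as a stand-in for a TPM measurement."""
    return hashlib.sha256(component).hexdigest()

def binary_attest(reported: dict) -> bool:
    """Static (binary) attestation: every reported digest must match
    the verifier's known-good value exactly."""
    return all(reported.get(name) == digest
               for name, digest in KNOWN_GOOD.items())

# A platform reporting unmodified components passes the check...
honest = {"bootloader": measure(b"bootloader-v1"),
          "os-kernel": measure(b"kernel-v1")}
# ...while any modified binary fails it.
tampered = dict(honest, **{"os-kernel": measure(b"kernel-evil")})
```

The rigidity shown here is exactly why binary attestation is called static: any legitimate software update also changes the digest and fails the check, which is one motivation for the DAA alternative mentioned above.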
Cloud computing users, IDPs, and SPs can use a Trusted Platform Module to authenticate their hardware devices. Since each TPM chip or virtual TPM (vTPM) has a unique and secret RSA key burned into it as it is produced, it is capable of performing platform authentication. On the trusted platform, each system should be independent of any trusted third party. Besides independence, each system should be able to unambiguously identify users that can be trusted both across the Web and within enterprises.
In comparison with the related work, the essential goals of the proposed model are to provide access control and identity management by:
(i) being an independent third party;
(ii) authenticating cloud services using the user's privacy policies, providing minimal information to the SP;
(iii) ensuring mutual protection of both clients and providers.
Figure 1: Proposed trust based federated identity architecture.
Using trusted computing in OpenID should be platform independent and flexible enough to meet the challenges of today's scalable computing environments. The proposed scenario is as in Figure 1, which shows that the Trusted Authority handles the trust relationships between the user's browser, the RP, and the IDP. The negotiations between them form a six-step scenario. This proposed model highlights the use of the TPM, the virtual TPM (vTPM), and the OpenID protocol, which together support the tasks of authentication, authorization, and identity federation. Beyond these objectives, the main contribution of our work is the implementation in cloud computing. In addition, Figure 2 shows the sequence diagram of the trust-based model based on the proposed trusted OpenID model. Besides, Figure 3 illustrates the proposed model's flowchart and highlights the roles of the Trusted Authority (TA) and the TPM in this model.
2.1. Log on Using the User’s URL. OpenID allows us to sign
into websites using a single identifier in the form of a URL.
This URL is used instead of our username and password
and it has been named Personal Identity Portal (PIP) URL.
For example, in the Symantec site for user Bob, the PIP
URL is bob.pip.verisignlabs.com and he can use many sites
that support OpenID instead of a username and password. In this example, these sites are service providers and Symantec is the identity provider. PIP helps us eliminate the need to remember different usernames and passwords for our service providers. PIP also delivers the elasticity and
flexibility to share only the information we choose with each
service provider. Furthermore, by creating a PIP account, we
will receive a PIP URL that we can use to sign in or register at
any service provider that supports OpenID and displays the
OpenID logo.
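The identifier handling in this step can be illustrated with a small normalization routine, in the spirit of the usual OpenID 2.0 rules: a bare identifier such as bob.pip.verisignlabs.com is turned into a canonical URL before discovery. This is a hedged Python sketch, not the paper's C# code, and it covers only the common cases (scheme defaulting, host lowercasing, path defaulting).

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_openid_identifier(raw: str) -> str:
    """Turn a user-supplied OpenID identifier into a canonical URL:
    add a scheme if missing, lowercase the host, default the path to '/'."""
    raw = raw.strip()
    if "://" not in raw:  # bare identifier like bob.pip.verisignlabs.com
        raw = "http://" + raw
    parts = urlsplit(raw)
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path or "/", "", ""))  # drop query/fragment
```

For example, "Bob.PIP.VerisignLabs.com" normalizes to "http://bob.pip.verisignlabs.com/", which the RP can then use as the claimed identifier for discovery.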
2.2. Redirection for Client Authentication. In this step, the SP
locates the user’s location and creates an authentication token.
SP asks the user to prove that he/she is who he/she is. For this
reason, after getting the RP’s request, the browser performs
the next step. In OpenID Authentication version 2.0, the IDP and RP establish an association with a secret key using an RSA technique, whereas with the TPM hash code there is no need to establish the association with RSA.
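The association step can be sketched as follows: once the RP and IDP share a secret, the IDP signs its assertions with a MAC and the RP verifies them. This is a simplified Python stand-in for the OpenID 2.0 association mechanism, not the paper's simulation; the field names and the use of HMAC-SHA256 are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

def new_association() -> bytes:
    """Shared MAC key the RP and IDP agree on during association."""
    return secrets.token_bytes(32)

def sign_assertion(key: bytes, fields: dict) -> str:
    """IDP side: sign the sorted key=value pairs of the assertion."""
    message = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify_assertion(key: bytes, fields: dict, sig: str) -> bool:
    """RP side: recompute the MAC and compare in constant time."""
    return hmac.compare_digest(sign_assertion(key, fields), sig)

key = new_association()
fields = {"openid.identity": "http://bob.pip.verisignlabs.com/",
          "openid.return_to": "https://rp.example/login"}
sig = sign_assertion(key, fields)
```

Any change to a signed field, such as a swapped identity, invalidates the signature, which is what lets the RP accept assertions without a round trip to the IDP.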
2.3. Client Authentication with IDP. In this scenario, the
browser proceeds with a token exchange based on the SAML protocol. Here, users have three options to prove their identity to the IDP, depending on the context of the interaction. Firstly, they can prove their identity by entering their username and password. Secondly, this can be done using browser cookies, if the user has activated them. Thirdly, it can be done through software like the SeatBelt extension for the Firefox browser, a plug-in that helps users sign into OpenID sites and the IDP using their PIP URL. The SeatBelt extension detects that the user has clicked on an OpenID sign-in field and prompts the user to sign in. Finally, after the first sign-in, subsequent requests will let SeatBelt automatically return the user to the OpenID sign-in page with his/her PIP already filled in.
Figure 2: Trusted based model sequence diagram.
2.4. Mutual Attestation. Step 4 is the most critical part of our proposed OpenID trust-based Federated Identity Architecture. In our environment, we have assumed that the IDP can collude with the SP to link users' affiliations and collect users' behaviors. We have also supposed that there is no Privacy Enhancing Technology (PET) deployed in our cloud environment. Using the Trusted Authority (TA) as the core, the user's browser, the Relying Party (RP) or Service Provider (SP), and the IDP must prove their identities through a mutual attestation process using their TPM-enabled platforms, verified by the TA. We have assumed that the Trusted Authority is trustworthy and must be sanctioned by the country's government or authorized by all participating parties in the Federated Identity Architecture. In this scenario, the TA is the attester that sends the challenge to the attestees (IDP, user, and SP) to check their integrity. The TA by default uses binary remote attestation to check their integrity, but in this scenario it uses the DAA technique for integrity checking.
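The challenge flow of this step can be sketched like this: the TA sends a fresh nonce to each attestee, which returns its platform measurement bound to that nonce, and the TA checks the result against its database. This is a heavily simplified Python sketch (the paper's simulation is in C#); a real TPM quote is signed by an attestation key, and all platform images and database entries here are invented.

```python
import hashlib
import secrets

# TA's database of expected platform measurements (illustrative values).
TA_DATABASE = {
    "user": hashlib.sha256(b"user-platform-v1").hexdigest(),
    "idp":  hashlib.sha256(b"idp-platform-v1").hexdigest(),
    "sp":   hashlib.sha256(b"sp-platform-v1").hexdigest(),
}

def quote(platform_image: bytes, nonce: bytes) -> str:
    """Attestee side: bind the platform measurement to the TA's nonce."""
    measurement = hashlib.sha256(platform_image).hexdigest()
    return hashlib.sha256(measurement.encode() + nonce).hexdigest()

def ta_verify(party: str, reported: str, nonce: bytes) -> bool:
    """TA side: recompute the expected quote from the stored measurement."""
    expected = hashlib.sha256(TA_DATABASE[party].encode() + nonce).hexdigest()
    return reported == expected

nonce = secrets.token_bytes(16)           # fresh challenge from the TA
good = quote(b"idp-platform-v1", nonce)   # honest IDP platform
bad = quote(b"idp-platform-hacked", nonce)
```

Binding the quote to a fresh nonce is what prevents a compromised party from replaying an old, honest attestation.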
2.5. Authorizing User’s Browser. If and only if the mutual
attestation process has been successful, that is, the user and
IDP have confidence in each other, then the IDP will deliver a SAML token to the user's browser. Today, some IDPs, like Google, Facebook, and IBM, use SAML tokens and deliver their users' access rights through them. For future work we may consider other mechanisms.
2.6. User's Browser Request. In this step, the user puts more confidence in the SP to get a service based on the SAML token. The IDP sends a token encrypted with the user's public key, which shows that the IDP is legitimate and verified by the trusted authority. The user decrypts the token with his/her private key. Finally, the user uses the token to get more services from the RP or SP.
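The token handling above (the IDP encrypts under the user's public key; only the holder of the private key can recover it) can be illustrated with textbook RSA on toy numbers. This is for exposition only, with the classic small key p=61, q=53; it is not a secure scheme, not the paper's implementation, and the token value is an invented stand-in for an encoded SAML token.

```python
# Toy RSA key: p=61, q=53 gives n=3233, e=17, d=2753 (textbook values).
n, e, d = 3233, 17, 2753

def encrypt_token(token: int, public=(n, e)) -> int:
    """IDP side: encrypt the token under the user's public key."""
    modulus, exponent = public
    return pow(token, exponent, modulus)

def decrypt_token(ciphertext: int, private=(n, d)) -> int:
    """User side: recover the token with the private key."""
    modulus, exponent = private
    return pow(ciphertext, exponent, modulus)

token = 65  # stand-in for an encoded SAML token
ciphertext = encrypt_token(token)       # 65**17 mod 3233 == 2790
recovered = decrypt_token(ciphertext)   # back to 65
```

Because only the user holds d, a token that decrypts correctly also assures the user that it was addressed to them, which is the property the step above relies on.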
It is important to note that the TA in our proposed architecture will need to deal with Virtual Machines (VMs), since different VMs will be associated with the different services requested by the client. Hence, the TA needs to attest each virtual TPM (vTPM) associated with each VM on the client machine. Finally, this paper highlights the use of OpenID, trusted computing, and federated identity, which together make it possible to deliver secure authentication, authorization, and identity federation. Trusted computing is very flexible with regard to its use in a cloud environment, allowing a service to be provided securely and reliably. In comparison with the other frameworks in the related work, our proposed model is stronger because of better security management, shifted risk management, private remote attestation, and efficient binding of identity assertions.
3. Simulation of Trusted OpenID
As mentioned before, the overall OpenID authorization flow works as follows: sign up for an OpenID consumer key, perform discovery, get a pre-approved request token, and continue with the OpenID flow. In addition to this flow, in order to evaluate the process, we can examine the vulnerabilities of the OpenID flow to better understand the behavior of the proposed model in the real world.
In this section we turn our attention to simulating the proposed algorithm and design, developing this model as a simulation in C#. Our system, referred to as Trust OpenID,
Figure 3: Trusted based model sequence flow chart. (Flowchart summary: the user supplies an identifier to the RP; the RP checks its validity and, if invalid, informs the user and terminates; if valid, the data is redirected to the IDP via the user's browser together with the TPM secret key; the IDP checks the username and password against its database, and the trust check runs against the TA database; on failure the user is informed and the flow terminates; on success the user is authorised and the approval is redirected to the RP via the user's browser.)
is composed of different components, each of which has its own tasks. For now, based on our scope, we run our system on a virtualization system, but in future work we will explain that our system has the ability to be located in the cloud environment and integrated with VMware, Microsoft Azure, and XenServer. Our system will observe the OpenID data flow and request, discover, approve, and authorize the users and providers. In this simulation, we first fetch all the vendors and process the requests and discovery between the user's browser and the service and identity providers; on the other hand, we observe and monitor the data exchanged between objects and subjects.
As it is illustrated in Figure 4, the main page contains
sign-in form, registration form, simulation, and exit. This
simulation should cover the communication type, generating
signature, initiation and discovery, establishing association,
request authentication, responding to authentication request,
and verifying assertion. The first step in using OpenID is
to set up an account with one of the many providers on the web. The registration form lets us register just once to obtain and use an OpenID account. Next, in the sign-in form, after getting an OpenID account, the user can use his or her OpenID account to get the SP's services through the IDP account.
Figure 4: Main form of simulation.
Figure 5: Registration Form.
To the user, OpenID may sound a lot like Facebook Connect, which allows users to sign into websites using their Facebook credentials and thereby use the services of other sites which accept Facebook Connect.
In order for an OpenID Connect Client to utilize OpenID
services for a user, the Client needs to register with the
OpenID Provider to acquire a Client ID and shared secret.
Figure 5 describes how a new Client can register with the
provider, and how a Client already in possession of a Client
ID can retrieve updated registration information. The Client
Registration Endpoint may be coresident with the token
endpoint as an optimization in some deployments. This study
assumes that UTM, UPM, UM, MMU, UITM, and USM are
identity providers. Figure 6 shows the IDP's database, which uses dbo.OpenID to save the data.
The user sends a request, encoded as a POST, with the Name,
Last Name, ID Number, Email, Username, Password, and
Identity Provider parameters added to the HTTP request to
the IDP. Figure 6 also illustrates the registration-information
database, which contains the registration information, the
TPM (VTPM), and the OpenID URLs of the users. Figure 7
shows confirmation of the registration step and the OpenID
URL issued by the IDP. In this step, upon OpenID acceptance,
the TPM (VTPM) is also saved in the dbo.reginfo SQL
database, as shown in Figure 8, together with the OpenID URL
assigned by the IDP.
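The registration request described above can be sketched as a simple POST body builder; the exact field names of the simulation's form are assumptions, not taken from the paper:

```python
from urllib.parse import urlencode

def build_registration_request(name, last_name, id_number, email,
                               username, password, idp):
    """Encode the registration fields as an HTTP POST body.

    The field names below are illustrative stand-ins for the
    simulation's actual form fields; a real deployment would also
    hash the password rather than send it verbatim over anything
    but TLS.
    """
    params = {
        "name": name,
        "last_name": last_name,
        "id_number": id_number,
        "email": email,
        "username": username,
        "password": password,
        "identity_provider": idp,
    }
    return urlencode(params)

body = build_registration_request(
    "Iqbal", "Qazi", "900101-14-5678", "iqbal@example.com",
    "iqbalqazi", "s3cret", "USM")
```

The resulting string can be sent as the body of a POST to the IDP's registration endpoint.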
3.1. Log on Using the User's URL. Based on the proposed
model described above, the first step is to log on to the
service provider with the OpenID URL. OpenID allows us to
sign into websites using a single identifier in the form of a
URL. This study assumes the service providers are Skydrive,
Dropbox, and MSN, which accept our IDPs. First, the user
chooses the RP in the RP combo box and then enters the
OpenID URL or PIP. Figure 9 simulates a user assigned to
Skydrive via iqbalqazi.usmopenid.com.
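Before discovery, the User-Supplied Identifier entered on this form is normalized. A minimal sketch of the OpenID 2.0 normalization rules for the common cases (a compliant implementation would also follow redirects and strip fragments):

```python
def normalize_identifier(user_supplied):
    """Roughly normalize a User-Supplied Identifier the way OpenID 2.0
    does before discovery: XRIs are detected by their global context
    symbol; otherwise an http:// scheme and, for a bare authority,
    a trailing slash are added."""
    s = user_supplied.strip()
    if s.startswith("xri://"):
        s = s[len("xri://"):]
    if s[:1] and s[:1] in "=@+$!":   # XRI global context symbols
        return ("xri", s)
    if not s.startswith(("http://", "https://")):
        s = "http://" + s
    # a bare authority gets "/" as its path component
    scheme, rest = s.split("://", 1)
    if "/" not in rest:
        s = s + "/"
    return ("url", s)
```

For the example above, `iqbalqazi.usmopenid.com` normalizes to the URL form `http://iqbalqazi.usmopenid.com/`.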
3.2. Redirection for Client Authentication. In this step, the
SP redirects the user to the IDP to prove his or her identity
and obtain the appropriate token. To this end, after the RP
receives the request, the sign-in form appears. Figure 10 shows
this step.
3.3. Client Authentication with IDP. In this step, based on
Figure 10, the user should prove his or her identity. As
mentioned before, the user has three options for proving
identity to the IDP, depending on the context of the interaction.
In this simulation the user is asked to enter a username and
password.
3.4. Mutual Attestation. The aim of this step is checking the
user's integrity. First, the IDP checks the username and
password against dbo.reginfo. Next, the TA checks integrity
based on the TPM (VTPM). In this study it is assumed that
the TA, after attestation, holds all the TPM (VTPM) values in
its database. The code in dbo.TA represents the TPM (VTPM)
of all vendors and users, as shown in Figure 11.
This step is the most important part and the one most
targeted by attackers. After checking the username and
password, the simulation checks the TPM (VTPM) when the
Check button is clicked. If the username and password are
correct, the Continue button is enabled; otherwise the
Phishing page appears. Figures 12 and 13 show this process.
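The two-stage check (credentials first, then TPM) can be sketched as follows, with the dbo.reginfo and dbo.TA tables stood in by dictionaries; all names and values are illustrative:

```python
# Hypothetical stand-ins for the simulation's dbo.reginfo and dbo.TA
# tables (a real deployment would store password hashes, not plaintext).
REGINFO = {"iqbalqazi": {"password": "s3cret", "tpm": "TPM-7f3a"}}
TA_KNOWN_TPMS = {"TPM-7f3a", "TPM-91bc"}

def authenticate(username, password):
    """Stage 1: credential check against the registration database."""
    rec = REGINFO.get(username)
    return rec is not None and rec["password"] == password

def check_integrity(username):
    """Stage 2: the TA compares the user's TPM (vTPM) value with its
    attestation database; a miss is treated as a phishing indicator."""
    return REGINFO[username]["tpm"] in TA_KNOWN_TPMS

def login(username, password):
    if not authenticate(username, password):
        return "phishing-page"
    if not check_integrity(username):
        return "phishing-page"
    return "continue"
```

The `login` result mirrors the simulation's behavior: enabling the Continue button on success and showing the Phishing page otherwise.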
3.5. Authorizing User's Browser. This step occurs only if the
previous process has been successful, which means the user
and IDP have confidence in each other. Based on OpenID
privacy settings, the user can determine how long the SP
accepts his or her approved identity: Figure 14 illustrates
that the user can choose Allow forever, Allow once, or Deny.
The IDP then delivers the SAML token to the RP via the
user's browser.
3.6. User's Browser Request. Figure 15 illustrates the confidence
between the user and the RP; that is, the user obtains a
service based on the SAML token.
Figure 6: IDP database.
Figure 7: Confirmation of registration.
In this part we explained the trust-based model, sequence
diagram, and flow chart used to mitigate identity theft in an
OpenID single sign-on environment. The next part presents
the evaluation and testing of the proposed model.
4. Test and Evaluation
In this part, the simulation and design of the proposed
solution are discussed in detail. Objects, subjects, and
their attributes are three considerations of any system. The
details of the subjects and objects are discussed, the process
is explained, and finally the Trusted OpenID is analyzed
through three sample tests. The design phase, architecture,
flowchart, and flow diagram of the proposed model were shown
and discussed in Section 3. The results are then analyzed to
confirm that the proposed design fulfills the objective of the
project, namely using trusted computing against identity theft
in a cloud identity environment, especially in OpenID
authentication. Confidentiality analysis, simulation analysis,
and security analysis are the three methods used to evaluate
and test the proposed model.
4.1. Confidentiality Analysis. The security model, which
expresses the security policy, plays a key role in establishing
a secure computer system. Availability, integrity, and
confidentiality are the three most important concerns in
security; confidentiality is the aim of this research, and in
this part we explain how our proposal tries to provide
confident authentication in a cloud environment by using
trusted computing. Several security models aim to achieve
security in enterprises; for example, Biba and Bell-LaPadula
(BLP) are the two most popular. Biba focuses on integrity in
recognized mathematical terms, while BLP is the classic model
focused on confidentiality. In this study BLP is used as the
basic theory for proving confidentiality in the multilevel-
security Trusted OpenID and, as a way of evaluating the
proposed model, we focus on the confidentiality of the trusted
OpenID
mechanism. Also, in adopting the BLP model to OpenID, we
focused on the subjects, objects, and their data exchange in
the users' application, to enhance the confidentiality and
security of the Trusted OpenID mechanism while also
providing user convenience.
Figure 8: Registration database.
Figure 9: Simulation of RP website.
4.1.1. BLP Model. Military affairs are a classic application
of the BLP model; the designers of computer security later
utilized and applied it to computer systems. While the model
has many advantages, it also has some disadvantages, such as
not recording access history [15]. The security levels and the
access matrix are the two main parts of the BLP model, and
they define the access permissions. Its security policies allow
information to flow upwards from a low security level to
a high security level and prevent information from flowing
downwards from a high security level to a low security level.
Figure 10: IDP Sign on.
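The upward-only flow policy can be captured in a few lines; the level numbering below assumes the TPM-highest to UA-lowest ordering that the proposed model assigns to its entities, so it is a sketch rather than the paper's implementation:

```python
# Security levels for the model's entities; a smaller number means
# higher confidentiality (TPM highest ... UA lowest, as in the
# proposed model's L1(TPM) ... L5(UA) ordering).
LEVEL = {"TPM": 1, "TA": 2, "IDP": 3, "RP": 4, "UA": 5}

def flow_allowed(src, dst):
    """BLP flow rule: information may move upwards, from a low
    security level to a high one, but never downwards."""
    return LEVEL[src] >= LEVEL[dst]
```

For example, information may flow from the User-Agent up to the TPM level, but not from the TPM down to the User-Agent.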
4.1.2. Terminology
Identifier (I): an Identifier is either an "http" or "https"
URI, commonly referred to as a "URL" within this document,
or an XRI (XRI Syntax 2.0). This document defines various
kinds of Identifiers, designed for use in different contexts.
Figure 11: TA database.
Figure 12: TPM checking.
User-Agent (UA): the end user's Web browser, which implements
HTTP/1.1. User-Agents will primarily be common Web
browsers.
Relying Party (RP): a Web application that wants proof that
the end user controls an Identifier.
OpenID Provider (IDP): an OpenID Authentication server on
which a Relying Party relies for an assertion that the end user
controls an Identifier.
Figure 13: Pass TPM checking.
IDP Endpoint URL (IEU): the URL which accepts OpenID
Authentication protocol messages, obtained by performing
discovery on the User-Supplied Identifier. This value must be
an absolute HTTP or HTTPS URL.
IDP Identifier (II): an Identifier for an OpenID Provider.
User-Supplied Identifier (USI): an Identifier that was presented
by the end user to the Relying Party, or selected by the user
Figure 14: User authorization.
Figure 15: RP accepts user’s credential.
at the OpenID Provider. During the initiation phase of the
protocol, an end user may enter either their own Identifier
or an IDP Identifier. If an IDP Identifier is used, the IDP may
then assist the end user in selecting an Identifier to share with
the Relying Party.
Claimed Identifier (CI): an Identifier that the end user claims
to own; the overall aim of the protocol is verifying this claim.
IDP-Local Identifier (ILI): an alternate Identifier for an
end user that is local to a particular IDP and thus not
necessarily under the end user's control, for example,
http://eqhball.pip.verisignlabs.com.
Generate (G): an identifier which has been generated by TPM
or VTPM.
Challenge (CH): the TA is the attester that sends the challenge
to the attestees (IDP, user, and RP) to check their integrity.
As mentioned before, the TA by default uses binary remote
attestation to check integrity, but in this scenario it uses
the DAA technique for integrity checking.
Trusted Authority (TA): an enterprise to verify mutual attes-
tation process using the TPM- (VTPM-)enabled platforms.
Figure 16 shows all the objects and subjects of the
proposed model and their places in the final OpenID
authentication architecture. These subjects and objects help
the end user prove that he or she owns a URL or XRI
through a series of communications between the user and the
website to which he or she is authenticating. The user submits
the URI/XRI identifier to the OpenID Relying Party through
the User-Agent's browser.
4.1.3. Protocol Overview. Based on the OpenID authentica-
tion [22] in the first step, the end user initiates authentication
by presenting a USI to the RP via their UA. To initiate OpenID
Authentication, the RP should present the end user with a
form that has a field for entering a USI.
Next, after normalizing the USI, the RP performs discovery
on it and establishes the IEU that the end user uses for
authentication. Upon successful completion of discovery, the
RP will have the IEU and protocol version; if the end user did
not enter an II, the CI and ILI will also be present. In this
scenario it is assumed that the user entered the II, so the CI
and ILI are omitted.
The RP and the IDP establish an association based on
their TPM (VTPM). The IDP uses the association to sign
subsequent messages and the RP uses it to verify those
messages. OpenID Authentication supports two signature
algorithms: HMAC-SHA256 with a 256-bit key and HMAC-SHA1
with a 160-bit key.
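As a rough illustration of how an association key signs messages, the sketch below uses HMAC-SHA256; the key/value serialization is simplified from the OpenID wire format, so treat it as an assumption rather than the exact protocol encoding:

```python
import base64
import hashlib
import hmac

def sign_fields(assoc_key, fields):
    """Sign OpenID fields the way an association signature works:
    the signed key/value pairs are serialized one per line and MACed
    with the shared association key. HMAC-SHA256 is shown here;
    HMAC-SHA1 is the same construction with a 160-bit key."""
    payload = "".join("%s:%s\n" % (k, v) for k, v in sorted(fields.items()))
    mac = hmac.new(assoc_key, payload.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

key = b"\x00" * 32   # a 256-bit association key (demo value only)
sig = sign_fields(key, {"mode": "id_res", "identity": "http://example.com/"})
```

The IDP computes such a signature over the assertion fields, and the RP recomputes it with the same association key to verify the message.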
Once the RP has successfully performed discovery and
created an association with the discovered IDP, it can send
an authentication request to the IDP to obtain an assertion.
An authentication request is an indirect request: in step
three, the RP redirects the end user's UA to the IDP with an
OpenID Authentication request.
This is indirect communication, in which messages are
passed through the User-Agent; it can be initiated by either
the Relying Party or the IDP. Indirect communication is used
for authentication requests and authentication responses.
There are two methods for indirect communication: HTTP
redirection and HTML form submission.
When an authentication request comes from the UA via
indirect communication, the IDP should determine that an
authorized end user wishes to complete the authentication. If
an authorized end user wishes to complete the authentication,
the IDP should send a positive assertion to the RP.
Next in step four, the IDP establishes whether the end user
is authorized to perform OpenID Authentication and wishes
to do so.
In step five, the TA checks the trust objectives
based on the parties' TPM (VTPM). The IDP redirects the end
user's UA back to the RP with either an assertion that
authentication is approved or a message that authentication
failed. This response includes openid.ns, openid.mode,
openid.op_endpoint, openid.claimed_id, openid.identity,
Figure 16: Objects and subjects of the proposed model.
openid.return_to, openid.response_nonce, openid.invalidate_handle,
openid.assoc_handle, openid.signed, and openid.sig.
If the IDP is unable to identify the end user or the end user
does not or cannot approve the authentication request, the
IDP should send a negative assertion to the RP as an indirect
response.
In step six, the last step, the RP verifies the
information received from the IDP, including checking the
return URL, verifying the discovered information, checking
the nonce, and verifying the signature using the TPM
(VTPM) association established with the IDP.
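The RP-side checks of step six can be sketched as follows. This is a minimal sketch: the real protocol signs only the fields listed in openid.signed, and the serialization mirrors the simplified one assumed earlier rather than the exact wire format:

```python
import hashlib
import hmac

SEEN_NONCES = set()

def verify_assertion(resp, assoc_key, expected_return_to):
    """Sketch of the RP checks: return URL, nonce replay protection,
    and signature verification with the shared association key."""
    if resp["openid.return_to"] != expected_return_to:
        return False
    nonce = resp["openid.response_nonce"]
    if nonce in SEEN_NONCES:          # replayed assertion
        return False
    SEEN_NONCES.add(nonce)
    signed = {k: v for k, v in resp.items() if k != "openid.sig"}
    payload = "".join("%s:%s\n" % (k, v) for k, v in sorted(signed.items()))
    expected = hmac.new(assoc_key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, resp["openid.sig"])
```

Note that the nonce check makes a second presentation of the same assertion fail, which is exactly the replay protection the nonce exists for.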
4.1.4. Phishing Attack. A special type of Phishing attack is
one where the RP is a rogue party acting as a fake RP. The
rogue RP performs discovery on the end user's Identifier
and, instead of redirecting the UA to the IDP, proxies the
IDP through itself. This allows the RP to capture the
credentials that the end user provides to the IDP. The TA
prevents this sort of attack by checking the integrity of the
RP and IDP.
4.1.5. Definition of System State. Our proposed model is a
state machine model, and its state is expressed by Figures
16, 17, and 18. It consists of subjects, objects, access
attributes, an access matrix, subject functions, and object
functions. The subjects of the proposed model are Challenge
(CH), Generated TPM (VTPM) Identifier (G), User-Supplied
Identifier (USI), IDP Identifier (II), IDP Endpoint URL
(IEU), Identifier (I), Claimed Identifier (CI), and IDP-Local
Identifier (ILI). The objects of the proposed model are
Relying Party (RP), OpenID Provider (IDP), User-Agent (UA),
Trust Authority (TA), Trusted Platform Module (TPM), and
Virtual TPM (vTPM). The access attributes are read, write,
read-and-write, and execute. The access matrix is the matrix
in which each member represents the access authority of a
subject to an object.
Figure 17: Access class matrix and relationship between objects
and subjects.
The proposed model's state machine is V, where each member
of V is a state v:
(1) v ∈ V, where v is an ordered quadruple,
(2) v = (b, M, f, H), where
(3) b ⊆ (S × O × A),
(4) S is the set of subjects,
(5) O is the set of objects,
(6) A = [r, w, rw, e] is the set of access attributes,
(7) M is the access matrix, where M_ij ⊆ A signifies the
access authority of subject S_i to object O_j,
(8) f ∈ F is the access class function, denoted as
f = (f_S, f_O),
(9) f_S is the security level of a subject and includes its
confidentiality level f_1(S) and category level f_2(S).
Figure 17 shows the security levels in BLP and the relation
between subjects and objects.
(10) f_O signifies the security function of the objects.
Figure 18 shows the relation among all the subjects, objects,
security functions, and security levels of the proposed
model. As shown in that figure, the confidentiality of the
TPM (VTPM) is the highest in the state machine and that of
the User-Agent is the lowest. Therefore,
(11) the confidentiality levels are L1(TPM), L2(TA),
L3(IDP), L4(RP), and L5(UA),
(12) H signifies the hierarchy existing on the proposed
model, as shown in Figure 18,
(13) W: R × V → D × V represents all the rules of the
proposed model, in which V is the system response and the
next state, R is the set of requests, and D is the arbitration
set for a request, whose values are yes, no, error, and
question. In this study question is important because a
response equal to question means that the current rule cannot
transact the request,
(14) R = [R_1, R_2, ..., R_n], where R is the list of data
exchanged between objects,
(15) W(ω) ⊆ R × D × V × V,
(16) (R_k, D_m, v*, v) ∈ W(ω)
(17) if and only if D_m ≠ question and there exists a unique
i, 1 ≤ i ≤ n; this means the current rule is valid and the
subject and object are also valid, because an object verifies
the TPM (VTPM) of the other object (the attestee) by the
request (challenge) for integrity checking,
(18) consequently, the result is (D_m, v*) = ρ_i(R_k, v),
which shows that for every request in R there is a unique
valid response.
Figure 18: Security level, subjects, and objects of the
proposed model.
We should prove that each state of the proposed model is
secure. It has been assumed that each state is secure except
state 4, as shown in Figure 1. Therefore, if state 4 is
secure, all the states are secure; based on Section 4.1.4,
this step is also secure, so the system reaches a secure
state. Formally,
(19) Σ(R, D, W, z_0) ⊂ X × Y × Z,
(20) (x, y, z) ∈ Σ(R, D, W, z_0)
(21) if and only if (x_t, y_t, z_t, z_{t-1}) ∈ W for each
t ∈ T, where z_0 is the initial state. Based on this
definition, Σ(R, D, W, z_0) is secure if and only if all
states of the system, for example, (z_0, z_1, ..., z_n), are
secure states.
The BLP model has several axioms (properties) which can be
used to limit and restrict the state transformation: if every
arbitrary state of the system is secure, then the system is
secure. In this study the simple-security property is adopted.
4.1.6. Simple-Security Property (SSP)
(22) v = (b, M, f, H)
(23) satisfies the SSP if and only if,
(24) for all S ∈ S,
(25) S ∈ S ⇒ [(O ∈ b(S: r, rw)) ⇒ (f_1(S) ≥ f_2(O))],
(26) that is, f_1(S) ≥ f_2(O) and f_3(S) ⊇ f_4(O); for
example,
(27) f_1(G) ≥ f_2(TPM),
(28) f_1(IEU) ≥ f_2(RP).
Based on Figure 18 and the SSP, all the objects of the
proposed model, categorized into the proposed security
levels, can write to a lower state; likewise, all the objects
can read at a higher level but cannot read at a lower state.
The star property, discretionary security, and the
compatibility property are further properties that can be
used to limit and restrict the state transformation; they
will be used in future work.
4.2. Simulation Environment and Evaluation. In the software
development life cycle, testing and evaluation are the final
step of developing software. The testing process verifies
whether the system has achieved its aim, objectives, and
determined scope. Therefore, the proposed model is evaluated
to show whether it attains the stated objectives; to do this
we need to conduct a test.
Based on the objectives, the aim of this study is to
mitigate identity theft and Phishing attacks. Thus a wrong
RP is input to the system, which then analyzes the dataset
based on the username and TPM (VTPM) conditions discussed
in Section 3. Figures 19, 20, and 21 show this evaluation.
First, the user goes to the fake service provider,
http://www.skkydrive.com/. Next, after the username check
passes, the TPM (VTPM) is checked. After checking Skkydrive's
TPM (VTPM), the TA sends a Phishing message via the user's
browser and reports the false integrity of the fake RP.
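The TA's fake-RP check in this evaluation can be sketched as a lookup against the TA's attestation records; the hostnames and TPM values below are illustrative stand-ins:

```python
# Hypothetical TA attestation database mapping RP hostnames to the
# TPM (vTPM) values recorded at registration.
TA_RP_TPMS = {"www.skydrive.com": "TPM-2c41"}

def attest_rp(hostname, reported_tpm):
    """The TA compares the TPM value reported by a Relying Party with
    the one in its database; an unknown host or a mismatch is flagged
    as phishing, mirroring the Skkydrive scenario above."""
    expected = TA_RP_TPMS.get(hostname)
    if expected is None or expected != reported_tpm:
        return "phishing: integrity check failed for " + hostname
    return "ok"
```

A genuine RP with a matching TPM record passes, while the look-alike host fails the integrity check and triggers the Phishing report.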
Figure 19: Phishing Relying Party website.
Figure 20: Checking identity based on TPM.
4.3. Security Analysis. In this section, we examine the
strengths and weaknesses of the proposed solution in terms
of security and consider possible improvements.
4.3.1. Mutual Authentication. Mutual authentication, also
called two-way authentication, is a process or technology in
which both entities in a communications link authenticate
each other. In our proposed model, the IDP authenticates a
User-Agent by asking the user to input the username and
password. Since the TA approves the IDP by using the TPM
(VTPM), which is known only to the user, other users cannot
guess the correct TPM (VTPM) because of the TPM's
characteristics. Meanwhile, the user authenticates the RP
website and deals with it by presenting a USI to the RP. We
assume that an attacker can send the UA to a fake IDP; in
that case, the TA reports to the end user that he or she is
on a Phishing website. The absence of a TA will cause
authentication of the OpenID to fail.
4.3.2. Compromising the TA. The strength of the proposed
model lies in the TPM (vTPM), which is a secure vault for
certificates, and in the fact that its components are
difficult to compromise. The attacker is assumed to be able
to call TPM (vTPM) commands without bounds, without knowing
the TPM (vTPM) root key, expecting to obtain or replace the
user key. The analysis goal in the TPM (vTPM) study is to
guarantee the corresponding properties of the IDP, the user,
and the RP execution, and their integrity [21, 23]. Also,
the overall aim of the proposed model is verifying the TA.
Therefore, to compromise the solution, an attacker must at
least know the TPM content and then target the components
accordingly.
Figure 21: TA reports phishing message.
An essential axiom is that a TPM (vTPM) is bound to one and
only one platform; this property has been used in this study
to check integrity. At Black Hat it was shown how one TPM
could be physically compromised to gain access to the secrets
stored inside [24]. Microsoft, however, maintained that using
a TPM is still an effective means of helping protect
sensitive information and accordingly recommends taking
advantage of a TPM. The attack shown requires physical
possession of the PC and someone with specialized equipment,
intimate knowledge of semiconductor design, and advanced
skills.
While this attack is certainly interesting, these methods
are difficult to duplicate, and as such, pose a very low risk
in our proposed model. Furthermore, IDP asks the user for
his/her credential to gain security assurance. As a result, an
attacker must not only be able to retrieve the appropriate
secret from TPM (vTPM), but also find the user credential
(username and password). If the credential is sufficiently
complex, this poses a hard, if not infeasible, problem to
solve in order to obtain the key required for a Phishing
attack in the OpenID environment.
4.3.3. Authentication in an Untrustworthy Environment.
Sometimes users have to sign into their web accounts in an
untrustworthy environment, for example, accessing a credit
card account using a public computer at a university, a
shared computer, or a pervasive environment. Our solution is
also applicable to such cases.
In this case, the user must sign into his or her OpenID
account via the trusted device and then follow the OpenID
login instructions. Our proposed model requires checking the
TPM for each authentication, regardless of cookies, attesting
to the integrity of the trusted device. Because of the TPM
and the trusted device, the user can read the authentication
information from the device and then log into the website on
the untrustworthy device.
Khiabani et al. [21] described that pervasive systems are
weaving themselves in our daily life, making it possible to
collect user information invisibly, in an unobtrusive manner
by even unknown parties. So OpenID as a security activity
would be a major issue in these environments. The huge
number of interactions between users and pervasive devices
necessitates a comprehensive trust model which unifies
different trust factors like context, recommendation, and
history to calculate the trust level of each party precisely.
Trusted computing enables effective solutions to verify the
trustworthiness of computing platforms in untrustworthy
environments.
4.3.4. Insider Attack. A weak client password or a server
secret key stored on the server side is vulnerable to any
insider who has access to the server. Thus, in the event that
this information is exposed, the insider is able to
impersonate either party.
The strength of our proposed protocol is that TPM key is the
essential part in the Trust OpenID protocol and TA stores
TPM database with its TPM key which cannot be accessed
by attackers. Therefore, our scheme can prevent the insider
from stealing sensitive authentication information.
4.3.5. The Man in the Middle Attack. In the man in the
middle (MITM) attack, a malicious user located between two
communication devices can monitor or record all messages
exchanged between the two devices. Suppose an attacker tries
to launch an MITM attack on a user and a website, and that
the attacker can monitor all messages sent to or received by
the user.
The idea of making conventional Phishing, pharming, and MITM
attacks useless applies to private users, who usually are not
connected to a well-configured network; furthermore, private
users often administer their computers themselves. Public key
infrastructures (PKI), stronger mutual authentication such as
secret keys and passwords, latency examination, second-channel
verification, and one-time passwords are some of the measures
that have been released to prevent MITM attacks in the
network area.
Mat Nor et al. [16] described that many security measures,
such as the Secure Sockets Layer (SSL) and Transport Layer
Security (TLS) protocols, have been implemented to prevent
MITM attacks; adversaries have responded with a new variant
known as the Man-in-the-Browser (MitB) attack, which
manipulates the information between a user and a browser and
is much harder to detect due to its nature.
A trust relationship between interacting platforms has become
a major element in increasing user confidence when dealing
with Internet transactions, especially in online banking and
electronic commerce. Therefore, in our proposed model, in
order to ensure the validity of the integrity measurement
from the genuine TPM (vTPM), the Attestation Identity Key
(AIK) is used to sign the integrity measurement. The AIK is
an asymmetric key derived from the unique Endorsement Key
(EK) certified by the manufacturer; it can identify the TPM
and plays the Certificate Authority role against MITM
attacks.
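The AIK-signing idea can be sketched as follows. Since a real AIK signature is an asymmetric (RSA) operation performed inside the TPM, the HMAC below is only a stand-in for the signing primitive, and the measurement log is a toy substitute for PCR contents:

```python
import hashlib
import hmac

def aik_quote(measurement_log, aik_key):
    """Toy sketch of an attestation quote: the platform's measurement
    log is hashed (standing in for PCR contents) and signed with the
    AIK. A real AIK signature is asymmetric and computed inside the
    TPM; HMAC stands in here only because the standard library has
    no RSA."""
    digest = hashlib.sha256("".join(measurement_log).encode()).digest()
    sig = hmac.new(aik_key, digest, hashlib.sha256).hexdigest()
    return {"digest": digest.hex(), "sig": sig}

def verify_quote(quote, measurement_log, aik_key):
    """A verifier recomputes the expected quote and compares
    signatures; any tampering with the log changes the digest and
    thus invalidates the signature."""
    expected = aik_quote(measurement_log, aik_key)
    return hmac.compare_digest(expected["sig"], quote["sig"])
```

The point of the design is that the signature binds the reported measurements to a key rooted in the TPM, so a man in the middle cannot substitute its own measurements without detection.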
5. Conclusion
In this research we first conducted a thorough literature
review covering cloud computing, cloud architecture,
federated identity, SAML, OAuth, OpenID, trusted computing,
and the problems of authentication in cloud computing. Next,
Phishing, identity theft, MITM, and DNS attacks were
identified as common attacks and vulnerabilities in the
cloud. The related work emphasized research that proposed
defenses against identity theft and Phishing attacks in
federated identity environments.
The proposed algorithm, which combines trusted computing with
OpenID and adopts trust in the federated area, was explained.
The TPM (vTPM) is an essential part of trusted computing and,
in this research, the main component used against identity
theft. The proposed model has six steps based on the OpenID
data-exchange flow. For implementation, Visual Studio and SQL
were used to simulate the proposed model. The last part
tested and evaluated the project with real-world datasets;
based on statistics and security models, we proposed BLP and
adapted it for the Trusted OpenID model. The main goal of
this research was to attain the predefined security
objectives stated in the problem statement. Besides, the
simulation method was evaluated against Phishing attacks, and
the strengths and weaknesses of the proposed solution were
assessed in terms of security, with possible improvements
considered.
Conflict of Interests
The authors declare that there is no conflict of interests
regarding the publication of this paper.
Acknowledgments
The authors would like to express their greatest appreciation
to the Ministry of Higher Education (MOHE) Malaysia, through
the Prototype Development Research Grant Scheme
(R.K130000.7338.4L622), for financial support of the research
done in the Advanced Informatics School (AIS), Universiti
Teknologi Malaysia (UTM).
References
[1] P. Mell and T. Grance, "The NIST definition of cloud computing (draft)," NIST Special Publication, vol. 800, p. 145, 2011.
[2] A. Earls, "Gartner takes on cloud identity management," 2012, http://searchsoa.techtarget.com/tip/Gartner-takes-on-cloud-identity-management.
[3] U. F. Rodriguez, M. Laurent-Maknavicius, and J. Incera-Dieguez, Federated Identity Architectures, 2006.
[4] I. M. Abbadi and A. Martin, "Trust in the cloud," Information Security Technical Report, vol. 16, no. 3-4, pp. 108–114, 2011.
[5] A. Carmignani, Identity Federation Using SAML and WebSphere Software, 2010.
[6] TCG, "Trusted computing," 2012, http://www.trustedcomputinggroup.org/trusted computing.
[7] L. Yan, C. Rong, and G. Zhao, "Strengthen cloud computing security with federal identity management using hierarchical identity-based cryptography," Cloud Computing, pp. 167–177, 2009.
[8] S.-T. Sun, E. Pospisil, I. Muslukhov, N. Dindar, K. Hawkey, and K. Beznosov, "What makes users refuse web single sign-on?: an empirical investigation of OpenID," in Proceedings of the 7th Symposium on Usable Privacy and Security (SOUPS '11), p. 4, July 2011.
[9] S. Wang, An Analysis of Web Single Sign-on, 2011.
[10] H. Hodges, Johansson, and Morgan, Towards Kerberizing Web Identity and Services, Kerberos Consortium, 2008.
[11] A. Armando, R. Carbone, L. Compagna, J. Cuellar, and L. Tobarra, "Formal analysis of SAML 2.0 web browser single sign-on: breaking the SAML-based single sign-on for Google apps," in Proceedings of the 6th ACM Workshop on Formal Methods in Security Engineering (FMSE '08), pp. 1–9, October 2008.
[12] K. Wang and Q. Shao, "Analysis of cloud computing and information security," in Proceedings of the 2nd International Conference on Frontiers of Manufacturing and Design Science (ICFMD '11), pp. 3810–3813, Taichung, Taiwan, December 2011.
[13] P. Madsen, Y. Koga, and K. Takahashi, "Federated identity management for protecting users from ID theft," in Proceedings of the Workshop on Digital Identity Management, pp. 77–83, 2005.
[14] J. H. You and M. S. Jun, "A mechanism to prevent RP phishing in OpenID system," in Proceedings of the IEEE/ACIS 9th International Conference on Computer and Information Science (ICIS '10), pp. 876–880, 2010.
[15] X. Ding and J. Wei, "A scheme for confidentiality protection of OpenID authentication mechanism," in Proceedings of the International Conference on Computational Intelligence and Security (CIS '10), pp. 310–314, December 2010.
[16] F. B. Mat Nor, K. Abd Jalil, and J.-L. Ab Manan, "Remote user authentication scheme with hardware-based attestation," Communications in Computer and Information Science, vol. 180, no. 2, pp. 437–447, 2011.
[17] S.-T. Sun, Y. Boshmaf, K. Hawkey, and K. Beznosov, "A billion keys, but few locks: the crisis of web single sign-on," in Proceedings of the New Security Paradigms Workshop (NSPW '10), pp. 61–71, September 2010.
[18] F. B. Mat Nor, K. Abd Jalil, and J.-L. Ab Manan, "Mitigating man-in-the-browser attacks with hardware-based authentication scheme," International Journal of Cyber-Security and Digital Forensics, vol. 1, no. 3, p. 6, 2012.
[19] C. Latze and U. Ultes-Nitsche, "Stronger authentication in e-commerce: how to protect even naïve users against Phishing, pharming, and MITM attacks," in Proceedings of the International Association of Science and Technology for Development (IASTED '07), pp. 111–116, October 2007.
[20] P. Urien, "An OpenID provider based on SSL smart cards," in Proceedings of the 7th IEEE Consumer Communications and Networking Conference (CCNC '10), pp. 1–2, Las Vegas, Nev, USA, 2010.
[21] H. Khiabani, J.-L. A. Manan, and Z. M. Sidek, "A study of trust & privacy models in pervasive computing approach to trusted computing platforms," in Proceedings of the International Conference for Technical Postgraduates (TECHPOS '09), pp. 1–5, December 2009.
[22] B. Ferg, OpenID Authentication 2.0 - Final, OpenID Community, 2007.
[23] M. Donovan and E. Visnyak, "Seeding the cloud with trust: real world trusted multi-tenancy use cases emerge," 2011, http://www.ittoday.info/Articles/Trust/Trust.htm.
[24] P. Cooke, "Black Hat TPM Hack and BitLocker," 2010, http://blogs.windows.com/windows/b/windowssecurity/archive/2010/02/10/black-hat-tpm-hack-and-bitlocker.aspx.
Eghbal Ghazizadeh, Mazdak Zamani, Jamalul-lail Ab Manan, and Mojtaba Alizadeh

1. Security and vulnerability assessment analysis tool - Microsoft.docx

  • 1.
    1. Security andvulnerability assessment analysis tool - Microsoft Baseline Security Analyzer (MBSA) for Windows OS Locate and launch MBSA CLI Check computer for common security misconfigurations MBSA will automatically select by default to scan WINDOWS VM WINATCK01 While scanning WINDOWS VM WINATCK01 Security Assessment Report 2 Security updates are missing ACTION **Requires immediate installation to protect computer 1 Update roll up is missing ACTION **Obtain and install latest service pack or update roll up by using download link Administrative Vulnerabilities More than 2 Administrators were found on the computer, ACTION **Keep number to a minimum because administrators have complete control of the computer. User accounts have non-expiring passwords ACTION ***Password should be changed regularly to prevent password attacks
  • 2.
- The Windows firewall is disabled and has exceptions configured.
- GREAT! Auto logon is disabled (even where it is configured, the provided password is encrypted, not stored as plain text).
- GREAT! The Guest account is disabled on the computer.
- GREAT! Anonymous access is restricted on the computer.

ADMINISTRATIVE SYSTEM INFORMATION: DANGER!
- Logon success and logon failure auditing is not enabled. ACTION: enable auditing for specific events such as logon and logoff to watch for unauthorized access.
- 3 shares are present. ACTION: review the list of shares and remove any that are not needed.
- GREAT! Internet Explorer has secure settings for all users.

The following is to be included in the SAR:
a. The Windows administrative vulnerability present is that more than 2 administrators were found on the computer. It is advised to keep the number to a minimum because administrators have complete control of the computer.
b. Windows accounts were found to have non-expiring passwords, while passwords should be changed regularly to prevent password attacks. One user account has a blank or simple password, or could not be analyzed.
c. The Windows OS has two security updates missing and so requires immediate installation to protect the computer. One
  • 3.
update rollup is missing, which requires that the latest service pack be obtained and installed, or the rollup updated using the download link.

2. Security and vulnerability assessment analysis tool – OpenVAS for Linux OS

Use the ifconfig command in the Terminal to check the IP address assigned to your Linux VM. eth0 (the device name for Linux Ethernet cards) has the IP address 172.21.20.185 in this example. The IP address 127.0.0.1 is the loopback address that points to localhost, the computer that applications or commands are being run from. This address will be used to access the OpenVAS application on the VM.

Use the OpenVAS web interface, which runs on port 9392 and can be opened in the Mozilla Firefox browser. After getting a security exception, add an exception. Scan the IP address by typing 127.0.0.1 next to the 'Start Scan' button, then click it. Status says Requested. Status percent changes to 94%
  • 4.
Status percent changes to 98%. Scan completed with severity 5.0 (Medium).

Vulnerabilities:
1) The remote server is configured to allow weak encryption and MAC algorithms.
2) It is possible to detect deprecated protocols that will allow attackers to eavesdrop on the connection between service and clients to get access to sensitive data transferred across the secured connection.
3) The host has OpenSSL installed that is prone to an information-disclosure vulnerability, which may allow man-in-the-middle attackers to gain access to the plain-text data stream. The application layer is impacted. Weak ciphers are offered by the service.

Scan management results:
- All security information by severity level
- Assets management – host
- Host details
- Configuration – scanner details
  • 5.
- Extras – My settings
- Extras – Performance
- Administration – User details
- Administration – Roles
- Administration – CERT feed management

The following is to be included in the SAR:
a. No Linux administrative vulnerability was present, as only one administrator was found on the computer. It is advised to keep the number to a minimum because administrators have complete control of the computer. The vulnerabilities present were as follows:
1) The remote server is configured to allow weak encryption and MAC algorithms.
2) It is possible to detect deprecated protocols that will allow attackers to eavesdrop on the connection between service and clients to get access to sensitive data transferred across the secured connection.
3) The host has OpenSSL installed that is prone to an information-disclosure vulnerability, which may allow man-in-the-middle attackers to gain access to the plain-text data stream. The application layer is impacted. Weak ciphers are offered by the service.
b. Admin was the only user on the computer. Access to the Admin password should be restricted, or the password changed regularly, to prevent
  • 6.
password attacks. No user was found to have a blank or simple password, or could not be analyzed.
c. No Linux security updates are missing.

CYB 610 PROJECT 1 – SCREENSHOTS OF PASSWORD CRACKING EXERCISE

Results of the experiment with the password cracking tools; please
  • 7.
find answers to the questions below; the findings are equally shared in the project report.

1. Which tool, on which operating system, was able to recover passwords the quickest? Provide examples of the timing from your experimental observations.

The Ophcrack tool was able to recover passwords the quickest, on the Windows 7 Enterprise operating system.

Cain and Abel tool using a brute-force attack with NTLM hashes loaded:

#   User        NT Password   Time left       Time taken   Time start   Time stop
1   Xavier      xmen          (blank)         < 1 min      12.01am      12.01am
2   Wolverine   Not cracked   3.1e+10 years   > 1 hr
3   Shield      Not cracked   2.9e+10 years   > 1 hr
4   EarthBase   Not cracked   2.9e+10 years   > 1 hr
5   dbmsAdmin   Not cracked   1.3e+17 years   > 1 hr
6   Kirk        Not cracked   1.3e+17 years   > 1 hr
7   Mouse       Not cracked   3.0e+10 years   > 1 hr
8   Rudolph     Not cracked   1.3e+17 years   > 1 hr
9   Snoopy      Not cracked   3.0e+10 years   > 1 hr
10  Spock       Not cracked   3.0e+10 years   > 1 hr
11  Apollo      Not cracked   2.9e+10 years   > 1 hr
12  Chekov      Not cracked   2.9e+10 years   > 1 hr
13  Batman      Not cracked   2.9e+10 years   > 1 hr
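The enormous "time left" estimates follow directly from keyspace size: roughly charset_size^length candidate passwords divided by the guess rate. A back-of-the-envelope sketch; the rate of one million guesses per second is an assumed figure for illustration, not Cain and Abel's actual speed:

```python
def brute_force_seconds(charset_size, length, guesses_per_second=1_000_000):
    """Worst-case time to exhaust every password of exactly `length` characters."""
    return charset_size ** length / guesses_per_second

SECONDS_PER_YEAR = 365 * 24 * 3600

# 'xmen': 4 lowercase letters -> cracked almost instantly.
print(brute_force_seconds(26, 4))   # well under a second at 1e6 guesses/s

# A 10-character password over all 94 printable ASCII characters.
years = brute_force_seconds(94, 10) / SECONDS_PER_YEAR
print(f"{years:.1e} years")         # on the order of a million years
```

This is why a four-letter lowercase password falls in under a minute while longer mixed-character passwords report estimates in the billions of years.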
  • 10.
Cain and Abel tool using a dictionary attack with NTLM hashes loaded (common stop position 3456292):

#   User        NT Password                     Time spent
1   Xavier      Not cracked
2   Wolverine   Motorcycle (position 1747847)   < 1 min
3   Shield      Not cracked                     < 1 min
4   EarthBase   Not cracked                     < 1 min
5   dbmsAdmin   Not cracked                     < 1 min
6   Kirk        Not cracked                     < 1 min
7   Mouse       Not cracked                     < 1 min
8   Rudolph     Not cracked                     < 1 min
9   Snoopy      Not cracked                     < 1 min
10  Spock       Not cracked                     < 1 min
11  Apollo      Not cracked                     < 1 min
12  Chekov      Not cracked                     < 1 min
13  Batman      Not cracked                     < 1 min

Ophcrack tool used for password cracking:

#   User        LM Password
1   Xavier      xmen
2   Wolverine   Not found
3   Shield      Not found
4   EarthBase   Not found
5   dbmsAdmin   Not found
6   Kirk        Not found
7   Mouse       Not found
8   Rudolph     Not found
9   Snoopy      Not found
10  Spock       Not found
11  Apollo      M00n
12  Chekov      Riza
13  Batman      Bats

Time elapsed was 17 secs.
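The dictionary attack that recovered "motorcycle" works by hashing each wordlist entry and comparing it to the stolen hash. NTLM actually hashes the UTF-16LE password with MD4; the sketch below substitutes SHA-256 purely for portability (MD4 is often unavailable in modern OpenSSL builds), and the four-word wordlist is invented for illustration; the lookup logic is the same:

```python
import hashlib

def h(password: str) -> str:
    # Stand-in for the NTLM hash (MD4 over UTF-16LE); SHA-256 used for portability.
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_attack(target_hash: str, wordlist):
    """Hash each candidate word and compare against the stolen hash."""
    for position, word in enumerate(wordlist, start=1):
        if h(word) == target_hash:
            return word, position   # Cain also reports the wordlist position
    return None, None

wordlist = ["password", "letmein", "motorcycle", "dragon"]
stolen = h("motorcycle")
print(dictionary_attack(stolen, wordlist))   # → ('motorcycle', 3)
```

This also explains the results above: "motorcycle" fell instantly because it is in common wordlists, while passwords absent from the wordlist were never cracked no matter how long the attack ran.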
  • 13.
2. Which tool(s) provided estimates of how long it might take to crack the passwords?

The Cain and Abel tool, using the brute-force attack, provided the time left in years, days, or minutes, depending on the specified password length. What was the longest amount of time it reported? 3.1e+10 years. And for which username? Wolverine.

3. Compare the amount of time it took for three passwords that you were able to recover.
· Cain and Abel: Xavier user (xmen password, cracked using the brute-force attack) = < 1 min
· Cain and Abel: Wolverine user (motorcycle password, cracked using the dictionary attack) = < 1 min
· Ophcrack: Batman user (bats password, cracked in under 17 secs)

4. Compare the complexity of the passwords for those discussed in the last question. What can you say about recovery time relative to the complexity of these specific accounts?

Xavier's password (xmen) was easier and quicker to crack using the brute-force attack due to its short length, while the same attack allocated a long time to crack Wolverine's password (motorcycle). The dictionary attack found Wolverine's password (motorcycle) simple to attack because it exists in its wordlists. The dictionary attack in Cain and Abel keeps a record of the position in the wordlist, which allows you to continue from the word where you left off before halting the attack. The brute-force attack takes the specified password length (e.g., a minimum of 1, a maximum of 16 or more) into consideration to run through every possible combination, which makes it CPU-intensive and extremely slow. The Ophcrack tool proved quicker and more powerful than Cain and Abel, cracking up to four passwords, for the users Xavier, Apollo, Chekov, and Batman. One would have expected the Ophcrack tool to succeed in cracking Wolverine's password (motorcycle). This shows that the longer and/or more complex a password is, the longer it takes to crack. It is advisable to use a passphrase (thkGodisfri), long words, or complex words drawing on all four character sets, as they are quite secure.

5. What are the 4 types of character sets generally discussed when forming strong passwords?

Lower-case letters, upper-case letters, numbers (0 through 9), and special characters ($, *, #, ! and so on). How many of the 4 sets should you use, as a minimum? Upper-case letters, lower-case letters, and numbers, depending on the company password policy. What general rules are typically stated for minimum password length? A minimum of eight characters.

6. How often should password policies require users to change their passwords? Every 90 days.

7. Discuss the pros and cons of using the same username accounts and passwords on multiple machines.

Pros: you only need to remember one set of username and password for multiple machines.
Cons: hackers can gain access to the other machines using the same account credentials once they have been compromised.

8. What are the ethical issues of using password cracker and recovery tools?

Systems administrators can use them to recover lost passwords and regain access to machines. Users who have lost their passwords, or corporate entities that need to look into any compromise or vulnerability they may have been exposed to, should be made aware of when and why such password cracking and recovery tools are run. Recovered or cracked passwords should not be disclosed to third parties, and there should not be a way to link passwords back to their users.

9. Are there any limitations, policies, or regulations in their use
  • 15.
on local machines? The consent of the system owner should be acquired before using password recovery tools, as there may be damage and/or loss of data from using password cracking software.

Home networks? As an individual, you can build your own labs and explore password cracking.

Small business local networks? Intranets? Internets? Where might customer data be stored? We need to coordinate with clients to clearly define the scope of password recovery or cracking efforts and abide by the related rules of engagement. It is illegal to launch password cracking tools against an individual's or organization's web sites over the internet without adequate notice and/or consent. It does not really matter how long or strong your password may be; what matters is how your confidential or protected data (the password) is stored.

10. If you were using these tools for approved penetration testing, how might you get the sponsor to provide guidance and limitations to your test team?

The test team will request guidance and limitations from the sponsor by scheduling a kickoff meeting where the timing and duration of the testing will be discussed, so that normal business and everyday operations of the organization are not disrupted. As much information as possible will be gathered by identifying the targeted machines, systems, and network, the operational requirements, and the staff involved.

11. Discuss any legal issues in using these tools on home networks in states which have anti-wiretap communications regulations. Who has to know about the tools being used in your household?

Using these tools on home networks in states which have anti-wiretap communications regulations would involve obtaining the relevant legal documents protecting against any legal action, should anything go wrong during tests. If it is a rented
  • 16.
apartment, the leasing office management needs to be informed.

Please find screenshots of the step-by-step password cracking exercise.

References – password storage and hashing:
● http://lifehacker.com/5919918/how-your-passwords-are-stored-on-the-internet-and-whenyour-password-strength-doesnt-matter
● http://www.darkreading.com/safely-storing-user-passwords-hashing-vs-encrypting/a/did/1269374
● http://www.slashroot.in/how-are-passwords-stored-linux-understanding-hashing-shadowutils

Steps:
- Click on Lab Broker to allocate resources.
- Sign into the workspace with StudentFirst/[email protected]
- On the desktop of VM WINATK01, click Lab Resource, then the Applications folder; locate and launch Cain.
- After Cain launches, maximize the screen and click the Cracker tab.
- The 20 user accounts on the local machine are used to populate the right window.
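The password-storage references above make the defender's point: store salted, slow hashes rather than raw or fast-hashed passwords, so that cracking tools like the ones in this exercise become impractical. A minimal sketch using Python's standard PBKDF2; the iteration count is an illustrative choice, not a recommendation:

```python
import hashlib, hmac, os

def store(password: str, iterations: int = 100_000):
    """Derive a salted, slow hash suitable for storage (never store plain text)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify(password: str, record) -> bool:
    salt, iterations, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

record = store("thkGodisfri")          # the passphrase suggested earlier
print(verify("thkGodisfri", record))   # True
print(verify("motorcycle", record))    # False
```

The random per-password salt defeats precomputed rainbow tables (the technique Ophcrack uses), and the high iteration count slows brute-force and dictionary attacks by the same factor.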
  • 17.
1 hash cracked for the Xavier user using brute force plus the NTLM hash.
Brute-force attack on the Wolverine user.
Results after using the Cain and Abel application for both types of attack (brute force and dictionary).
Results after using the Ophcrack application to crack all user passwords.
Click on Ophcrack -> click 'Crack', then stop and take note of the time taken.

Running Head: Risk Assessment Report (RAR)
  • 18.
Risk Assessment Report (RAR)

CHOICE OF ORGANIZATION IS UNIVERSITY OF MARYLAND MEDICAL CENTER (UMMC) OR A FICTITIOUS ORGANIZATION (BE CREATIVE)

· Define risk, remediation, and security exploitation. (Cite.)
· Read the attached documents on risk assessment resources for the process of preparing the risk assessment.

START TO READ: Organizations perform risk assessments to ensure that they are able to identify threats (including attackers, viruses, and malware) to their information systems. According to the National Institute of Standards and Technology (NIST), risk assessments "address the potential adverse impacts to organizational operations and assets, individuals, other organizations, and the economic and national security interests of the United States, arising from the operation and use of information systems and the information processed, stored, and transmitted by those systems" (NIST, 2012).
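Risk assessment results like these are typically rated and ranked so that mitigation effort goes to the worst risks first. A simple likelihood × impact scoring sketch; the numeric scales and the sample findings are illustrative choices, not taken from NIST SP 800-30:

```python
# Qualitative scales mapped to numbers for ranking purposes only (illustrative).
LEVELS = {"low": 1, "moderate": 2, "high": 3}

findings = [
    # (finding, likelihood, impact)
    ("Terminated employee's user ID still active", "moderate", "high"),
    ("Weak SSL ciphers on remote server",          "high",     "moderate"),
    ("Non-expiring passwords",                     "high",     "high"),
]

def prioritize(findings):
    """Rank findings by likelihood x impact, highest risk first."""
    scored = [(LEVELS[lik] * LEVELS[imp], name) for name, lik, imp in findings]
    return [name for score, name in sorted(scored, reverse=True)]

for name in prioritize(findings):
    print(name)
```

With these sample ratings, the non-expiring-password finding (3 × 3 = 9) outranks the other two (each 2 × 3 = 6), which is exactly the prioritization step the NIST guidance describes.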
  • 19.
When a risk assessment is completed, organizations rate risks at different levels so that they can prioritize them and create appropriate mitigation plans.

References

U.S. Department of Commerce, National Institute of Standards and Technology (NIST). (2012). Information security: Guide for conducting risk assessments (Special Publication 800-30). Retrieved August 5, 2016, from http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-30r1.pdf

END OF READING:
· Identify threats, vulnerabilities, risks, and the likelihood of exploitation, and suggest remediation.
· Include the OPM OIG Final Audit Report findings and recommendations as a possible source of methods to remediate vulnerabilities.
· Research through the attached document and cite.

THREAT IDENTIFICATION (threat attacks on vulnerabilities; potential hacking actors; types of remediation and mitigation techniques):
- IP address spoofing / cache poisoning attacks
- denial of service attacks (DoS)
- packet analysis / sniffing
- session hijacking attacks
  • 20.
- distributed denial of service attacks (DDoS)

· Prepare a Risk Assessment Report (RAR) with information on the threats, vulnerabilities, likelihood of exploitation of security weaknesses, impact assessments for exploitation of security weaknesses, remediation, and cost/benefit analyses of remediation.
· Devise and include a high-level plan of action with interim milestones (POAM), in a system methodology, to remedy your findings.
· Summarize and include the results you obtained from the vulnerability assessment tools (i.e., MBSA and OpenVAS – project 2 lab) in the RAR.
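A POAM pairs each finding with an owner and a target date. Generating interim milestones from rated findings can be sketched as below; the severity-to-deadline intervals, finding names, and start date are all invented for illustration:

```python
from datetime import date, timedelta

# Illustrative remediation deadlines per severity level (not from any standard).
DAYS_PER_SEVERITY = {"high": 30, "moderate": 90, "low": 180}

def build_poam(findings, start):
    """Give each finding a due date proportional to its severity."""
    plan = []
    for name, severity in findings:
        due = start + timedelta(days=DAYS_PER_SEVERITY[severity])
        plan.append((name, severity, due.isoformat()))
    return plan

findings = [
    ("Missing security updates",        "high"),
    ("Unneeded network shares present", "moderate"),
]
for row in build_poam(findings, date(2024, 1, 1)):
    print(row)
```

High-severity items get the nearest milestone, so the plan's interim dates fall out of the risk ratings automatically.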
  • 21.
PURPOSE

The purpose of this risk assessment is to evaluate the adequacy of the Amazon corporation's security. This risk assessment report provides a structured but qualitative assessment of the operational environment for the Amazon corporation. It addresses issues of sensitivity, threat analysis, vulnerability analysis, risk analysis, and the safeguards applied in the Amazon corporation. The report and assessment recommend the use of cost-effective safeguards in order to mitigate threats, as well as the associated exploitable vulnerabilities, in the Amazon corporation.

THE ORGANIZATION

The Amazon corporation's system environment runs as a distributed client-and-server environment consisting of a Microsoft Structured Query Language (SQL) database built with powerful programming code. The Amazon corporation's system contains SQL data files, Python application code, and executable Java and JavaScript. The SQL production data files, documented as consisting of SQL stored procedures and SQL tables, reside on a cloud storage area network attached to an HP server running the Windows XP and MS SQL 2000 operating systems. The Python application code resides on a different IBM server running Kali Linux. Both servers are housed in the main office Building 233 Data Center at the CDC Clifton Road campus in Atlanta, GA. The Amazon corporation's executables reside on a file server running Windows 2000 and Kali Linux, or occasionally a local workstation installation, depending on the loads and job requirements.

Users are physically distributed in multiple locations: multiple
  • 22.
sales offices in Atlanta, Pittsburgh, Ft. Collins, Research Triangle Park, Cincinnati, and San Juan. Their desktop computers are physically connected to a wide area network (WAN). Some users revealed that they usually connect via secured dial-up and DSL connections using a powerful Citrix server. Normally, a user should connect to an active application server in their city that hosts the Amazon corporation's application, and to the shared database server located in Atlanta.

SCOPE

The scope of this risk assessment is to assess the system's use of resources and the controls implemented, and to report on plans set to eliminate and manage vulnerabilities exploitable by the threats identified in this report, whether internal or external to Amazon. If not eliminated but exploited, these vulnerabilities could result in:
· unauthorized disclosure of data; unauthorized modification of the system, its data, or both; and denial of service, or denial of access to data, for authorized users.

This risk assessment report for the Amazon corporation evaluates confidentiality (protection from unauthorized disclosure of system and data information), integrity (protection from improper modification of information), and availability (loss of access to the system). The intrusion detection tools used in the methodology are the MBSA security analyzer in the CLAB 699 Cyber Computing Lab, the OpenVAS security analyzer in the CLAB 699 Cyber Computing Lab, and the Wireshark security analyzer. In conducting the analysis, the screenshots taken using each of the tools were examined with a view to arriving at relevant conclusions. The recommended security safeguards are meant to allow management to make proper decisions about security-related initiatives in Amazon.

METHODOLOGY
  • 23.
This risk assessment methodology and approach for the Amazon corporation was conducted using the guidelines in NIST SP 800-30, Risk Management Guide for Information Technology Systems, and the OPM OIG Final Audit Report findings and recommendations. The assessment is very broad in scope and evaluates the Amazon corporation's security vulnerabilities affecting confidentiality, integrity, and availability. The assessment also recommends a handful of appropriate security safeguards, allowing management to make knowledge-based decisions on security-related initiatives in the Amazon corporation.

This initial risk assessment report provides an independent review to help management at Amazon determine the appropriate level of security required for their system, to support the development of a stringent System Security Plan. The accompanying review also provides the information required by the Chief Information Security Officer (CISO) and the Designated Approving Authority (DAA), also known as the Authorizing Official, to assist in making an informed decision about authorizing the system to operate. The intrusion detection tools used in the methodology include the MBSA security analyzer, the OpenVAS security analyzer, and the Wireshark security analyzer.

DATA

The data collected using MBSA and the other tools reveals that the following internal routines were performed by those tools, as shown in the LAB 2 and LAB 3 screenshots in the CLAB 699 Cyber Computing Lab given together with the question. The MBSA and OpenVAS security analyzers converted the raw scan data and, in particular, succeeded in turning the following vulnerabilities into risks based on the following methodology in the CLAB 699 Cyber Computing Lab:
· Categorizing vulnerabilities, as evident in the LAB 2 screenshots
· Comparing threat vectors, as evident in the LAB 3 screenshots
· Assessing the probability of occurrence and possible impact
  • 24.
as evident in the LAB 2 screenshots
· Authentication assessment, as evident in the LAB 2 screenshots
· Determining authentication threat vectors, as evident in the LAB 3 screenshots

The MBSA and OpenVAS security analyzers also had routines that communicated with the Greenbone security assessment center, especially to provide the automated recommendations, as evident in the LAB 2 and LAB 3 screenshots in the CLAB 699 Cyber Computing Lab. The Greenbone security assessment center, in particular, succeeded in doing the following, as evident in the output file: it lists the risks and the associated recommended controls to mitigate those risks. Management has the option of accepting the risks and the chosen recommended controls, or negotiating an alternative mitigation, while reserving the right to override the Greenbone security assessment center and incorporate the proposed recommended control into Amazon's Plan of Action and Milestones.

RESULTS

The following operational and managerial vulnerabilities were identified in Amazon while using the project methodology:
- inadequate adherence to, and advocacy for, existing security controls;
- inadequate adherence to management of changes to the information systems infrastructure;
- weak authentication protocols;
- inadequate adherence to life-cycle management of the information systems;
- inadequate adherence to, and advocacy for, a configuration management and change management plan;
- inadequate adherence to, and advocacy for, implementing a robust inventory of systems, servers, databases, and network devices;
- inadequate adherence to, and advocacy for, mature vulnerability scanning tools;
- inadequate adherence to, and advocacy for, valid authorizations for many systems;
- inadequate adherence to, and advocacy for, plans of action to remedy the findings of previous audits.
  • 25.
The following attacks were identified in Amazon while using the above project methodology: IP address spoofing / cache poisoning attacks; denial of service attacks (DoS); packet analysis / sniffing; session hijacking attacks; and distributed denial of service attacks.

NIST SP 800-63, as well as the OPM OIG Final Audit Report findings and recommendations, describes the classification of potential harm and impact as follows: inconvenience, distress, or damage to standing or reputation; financial loss or agency liability; harm to agency programs or public interests; unauthorized release of sensitive information; personal safety; and civil or criminal violations.

Potential impact of inconvenience, distress, or damage to standing or reputation:
· Low: limited, short-term inconvenience, distress, or embarrassment to any party within Amazon.
· Moderate: serious short-term or limited long-term inconvenience, distress, or damage to the standing or reputation of any party within Amazon.
· High: severe or serious long-term inconvenience, distress, or damage to the standing or reputation of any party within Amazon.

Potential impact of financial loss:
· Low: an insignificant or inconsequential unrecoverable financial loss to any party, or an insignificant or inconsequential agency liability, within Amazon.
· Moderate: a serious unrecoverable financial loss to any party, or a serious agency liability, within Amazon.
· High: a severe or catastrophic unrecoverable financial loss to any party, or a catastrophic agency liability, within Amazon.

Potential impact of harm to agency programs or public interests:
  • 26.
· Low: a limited adverse effect on organizational operations or assets, or public interests, within Amazon.
· Moderate: a serious adverse effect on organizational operations or assets, or public interests, within Amazon.
· High: a severe or catastrophic adverse effect on organizational operations or assets, or public interests, within Amazon.

Potential impact of unauthorized release of sensitive information:
· Low: a limited release of personal, U.S. government sensitive, or commercially sensitive information to unauthorized parties, resulting in a loss of confidentiality with a low impact.
· Moderate: a release of personal, U.S. government sensitive, or commercially sensitive information to unauthorized parties, resulting in a loss of confidentiality with a moderate impact.
· High: a release of personal, U.S. government sensitive, or commercially sensitive information to unauthorized parties, resulting in a loss of confidentiality with a high impact.

Potential impact to personal safety:
· Low: minor injury not requiring medical treatment within Amazon.
· Moderate: moderate risk of minor injury, or limited risk of injury requiring medical treatment, within Amazon.
· High: a risk of serious injury or death within Amazon.

Potential impact of civil or criminal violations:
· Low: a risk of civil or criminal violations of a nature that would not ordinarily be subject to enforcement efforts within the USA.
· Moderate: a risk of civil or criminal violations that may be subject to enforcement efforts within the USA.
· High: a risk of civil or criminal violations that are of special importance to enforcement programs within the USA.

CONCLUSIONS AND RECOMMENDATIONS

In the risk assessment, two striking issues came out, which are resolved below. An employee was terminated and
  • 27.
his user ID was not removed from the system. This is a dependency-failure kind of vulnerability-and-risk pair and carries an overall risk that is moderate. The RECOMMENDED safeguard is to remove the user ID from the system upon notification of termination.

Secondly, VPN/keyfob access does not meet the certification and accreditation level stipulated in NIST SP 800-63. This is a kind of vulnerability that touches on inconvenience, standing, and reputation, and carries an overall risk that is moderate. The RECOMMENDED safeguard is to migrate all remote authentication roles to the CDC or another approved authority.

This risk assessment report for the organization identifies the risks of operations, especially in those domains which fail to meet the minimum requirements and for which appropriate countermeasures have yet to be implemented. The RAR also determines the probability of occurrence and issues countermeasures aimed at mitigating the identified risks, in an endeavor to provide an appropriate level of protection and to satisfy all the minimum requirements imposed by the organization's policy document. The system security policy requirements are now satisfied, with the exception of those specific areas identified in this report. The countermeasures recommended in this report add to the additional security controls needed to meet policies and to effectively manage the security risk to the organization and its operating environment. Finally, the Certification Official and the Authorizing Officials must determine whether the totality of the protection mechanisms approximates a sufficient level of security, and/or is adequate for the protection of this system and its resources and information. The Risk Assessment Results supply critical information and should be carefully reviewed by the AO prior to making a final accreditation decision.
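The first recommendation above (remove a terminated employee's user ID on notification) is easy to automate as a reconciliation between HR's terminated list and the accounts still active on the system. A minimal sketch; the account and employee names are invented for illustration:

```python
def accounts_to_remove(active_accounts, terminated_employees):
    """User IDs still active on the system after HR termination."""
    return sorted(set(active_accounts) & set(terminated_employees))

active = ["jdoe", "asmith", "former_emp", "svc_backup"]
terminated = ["former_emp", "old_contractor"]
print(accounts_to_remove(active, terminated))   # ['former_emp']
```

Running such a reconciliation on a schedule closes the window during which a terminated employee's credentials remain usable.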
  • 28.
References

National Institute of Standards and Technology (NIST). (2014). Assessing security and privacy controls in federal information systems and organizations (NIST Special Publication 800-53A, Revision 4). Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53Ar4.pdf

National Institute of Standards and Technology (NIST). (2010). Guide for applying the risk management framework to federal information systems (NIST Special Publication 800-37, Revision 1). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf

Running Head: Security Assessment Report (SAR)
  • 29.
Security Assessment Report (SAR)

CHOICE OF ORGANIZATION IS UNIVERSITY OF MARYLAND MEDICAL CENTER (UMMC) OR A FICTITIOUS ORGANIZATION (BE CREATIVE)

Introduction
· Research the OPM security breach.
· What prompts this assessment exercise in our choice of organization? "But we have a bit of an emergency. There's been a security breach at the Office of Personnel Management. We need to make sure it doesn't happen again."
· What were the hackers able to do? The OPM OIG report found that the hackers were able to gain access through compromised
  • 30.
credentials.
· How could it have been averted? a) The security breach could have been prevented if the Office of Personnel Management (OPM) had abided by previous auditing reports and security findings; b) access to the databases could have been prevented by implementing various encryption schemas; and c) it could have been identified by running regularly scheduled scans of the systems.

Organization
· Describe the background of your organization, including its purpose and organizational structure.
· Diagram the network system, including the LAN, WAN, and systems (use the OPM systems model of LAN-side networks), the intra-network, and the WAN-side networks, the internet.
· Identify the boundaries that separate the inner networks from the outside networks.
· Include a description of how these platforms are implemented in your organization: common computing platforms, cloud computing, distributed computing, centralized computing, and secure programming fundamentals (cite references).

Threats Identification

Start Reading: Impact of Threats

The main threats to information system (IS) security are physical events such as natural disasters, employees and consultants, suppliers and vendors, e-mail attachments and viruses, and intruders.

Physical events such as fires, earthquakes, and hurricanes can cause damage to IT systems. The cost of this damage is not restricted to the cost of repairs or of new hardware and software. Even a seemingly simple incident such as a short circuit can have a ripple effect and cost thousands of dollars in lost earnings.

Employees and consultants: in terms of severity of impact, employees and consultants working within the organization can cause the worst damage. Insiders have the most detailed
knowledge of how the information systems are being used. They know what data is valuable and how to get it without leaving tracks.
Suppliers and vendors: Organizations cannot avoid exchanging information with vendors, suppliers, business partners, and customers. However, if access rights to any IS or network are not granted at the proper level, that is, at the least level of privilege, the IS or network is left open to attack. Someone with an authorized connection to the network can enter and probe further into the network to sniff proprietary or confidential data. In this case, Fluffy Toys was obliged to give one supplier better terms and is losing another supplier altogether.
E-mail attachments and viruses: Viruses can corrupt an IS and completely disrupt the normal functioning of a business. Most virus infections spread through e-mail attachments and downloads. E-mail attachments can also be used to implant malware or other malicious programs to steal data or even money. Trojan horses, logic bombs, and trapdoors are types of malicious code that can corrupt an IS and destroy or compromise information.
Intruders: Corporate espionage agents are typically hired by competitors to obtain confidential or proprietary information, such as patents, intellectual property, and trademarks. Hackers are usually out to get any information they can lay their hands on. Regardless of the identity of the intruder, the damage to the organization can be substantial if confidential information is lost.
Equally read the STEP 2 attachments.
End reading:
· From the above, identify insider threats that are a risk to your organization.
· Differentiate between external threats to the system and insider threats. Identify where these threats can occur in the previously created diagrams.
· Define threat intelligence, and explain what kind of threat
intelligence is known about the OPM breach.
· Relate the OPM threat intelligence to your organization. How likely is it that a similar attack will occur at your organization?
· Mention the security communication, messaging, and protocols or security data transport methods used in your organization, such as TCP/IP, SSL, and others.
· Describe the purpose and function of firewalls for organization network systems, and how they address the threats and vulnerabilities you have identified.
· Describe the value of using access control, database transaction, and firewall log files.
· Describe the purpose and function of encryption as it relates to files, databases, and other information assets on the organization's networks.
Scope
Methodology
Data
· Share and relate the data from evaluating the strength of passwords used by employees (password cracking tools ophcrack and Cain/Abel, test data from project lab 1); run scans using security and vulnerability assessment analysis tools such as MBSA, OpenVAS (data from project lab 2), Nmap, or NESSUS (proj lab 3 attached); and sample and scan network traffic using Wireshark (proj lab 3 attached) on either the Linux or Windows systems. Wireshark allows you to inspect traffic information at all OSI layers.
READ NOTES about other network security tools and relate their use at your organization:
· Network Mapper (Nmap) is a security scanner that is used to discover hosts and services on a computer network. Based on network conditions, it sends packets with specific information to the target host device and then evaluates the responses. From these, it creates a graphical network map illustration.
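The host-and-service discovery Nmap performs can be illustrated with a minimal TCP connect scan. The sketch below is illustrative only and assumes a caller-supplied host and port list; it is not a substitute for Nmap itself.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Minimal TCP connect scan: a port counts as open if the
    full three-way handshake completes (what `nmap -sT` does)."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            results[port] = (s.connect_ex((host, port)) == 0)
    return results

# Example: probe a few well-known ports on the local machine
open_ports = [p for p, is_open in
              tcp_connect_scan("127.0.0.1", [22, 80, 443]).items() if is_open]
```

A full connect scan like this is easy for auditing tools to spot, which is why the stealthier techniques discussed later in this report scan at a much lower rate.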
· Nessus is a UNIX vulnerability assessment tool. It was a free and open-source vulnerability scanner until 2005. Nessus's features include remote and local security checks, a client/server architecture with a GTK graphical interface, and an embedded scripting language for plugins.
· Netcat is a feature-rich network debugging and exploration tool. It is designed to be a reliable back-end tool for other programs. It reads and writes data across TCP and UDP network connections.
· GFI LanGuard is an all-in-one package. It is a network security and vulnerability scanner that helps to scan for, detect, assess, and correct problems on the network.
· Snort is a free and open-source responsive IDS. It performs real-time traffic analysis and packet logging on IP networks. The program can also be used to detect probes and attacks. Snort can be configured in three main modes: sniffer, packet logger, and network intrusion detector. In sniffer mode, it reads network packets. In packet logger mode, it logs the packets to disk. In network intrusion detector mode, it logs traffic and also analyzes it against a set of rules and acts accordingly.
· Fport is a free tool that scans the system and maintains a record of the ports that are being opened by various programs. System administrators can examine the Fport record to find out whether there are any alien programs that could cause damage. Any nonstandard port, typically opened up by a virus or Trojan, serves as an indicator of system compromise.
· PortPeeker is a freeware security tool used for capturing network traffic over TCP, UDP, or Internet Control Message Protocol (ICMP).
· SuperScan is a tool used to evaluate a computer's security. It can be used to test for unauthorized open ports on the network. Hackers also use this tool to check for insecure ports in order to gain access to a system. SuperScan helps to identify open ports and the services running on them. It also runs queries such as
Whois and ping.
End of reading
Results
Network Security Issues:
· Provide an analysis of the strength of passwords used by the employees in your organization, emphasizing weak and vulnerable passwords as a security issue for your organization (from the use of password cracking tools to crack weak and vulnerable passwords, proj lab 1).
· Findings: Reflect any weaknesses found in the network and information system diagrams previously created and the security issues the following may cause:
· Some critical security practices, such as lack of diligence to security controls, and
· lack of management of changes to the information systems infrastructure, were cited as contributors to the massive data breach in the OPM Office of the Inspector General's (OIG) Final Audit Report, which can be found in open-source searches (research and cite);
· weak authentication mechanisms;
· lack of a plan for life-cycle management of the information systems;
· lack of a configuration management and change management plan;
· lack of an inventory of systems, servers, databases, and network devices;
· lack of mature vulnerability scanning tools;
· lack of valid authorizations for many systems; and
· lack of plans of action to remedy the findings of previous audits (POA&M).
· Determine the role of firewalls, encryption, and auditing (RDBMS) that could assist in protecting information and
monitoring the confidentiality, integrity, and availability of the information in the information systems.
· In the previously created Wireshark files (see proj lab 3 attached), identify whether any databases had been accessed. What are the IP addresses associated with that activity?
· Provide analyses of the scans and any recommendations for remediation, if needed.
· Identify any suspicious activity and formulate the steps in an incident response that could have been, or should be, enacted.
· Include the responsible parties that would provide that incident response and any follow-up activity. Include this in the SAR. Please note that some scanning tools are designed to be undetectable.
· While running the scan and observing network activity with Wireshark, attempt to determine whether the scan in progress can be detected. If you cannot identify the scan as it is occurring, indicate this in your SAR.
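The database-access question above can be approached by filtering captured connection records for well-known database server ports. The sketch below assumes the capture has already been exported as (source IP, destination port) pairs, which is a simplification of what Wireshark actually records, and the port list assumes default RDBMS ports.

```python
# Well-known server ports for common RDBMSs (assumed defaults;
# a real deployment may use non-standard ports)
DB_PORTS = {1433: "MS SQL Server", 3306: "MySQL",
            5432: "PostgreSQL", 1521: "Oracle"}

def database_accesses(records):
    """Given (src_ip, dst_port) pairs from a capture, return a map
    of client IP -> set of database services it contacted."""
    hits = {}
    for src_ip, dst_port in records:
        if dst_port in DB_PORTS:
            hits.setdefault(src_ip, set()).add(DB_PORTS[dst_port])
    return hits

# Hypothetical capture excerpt: one web request, two DB connections
sample = [("10.0.0.5", 80), ("10.0.0.7", 1433), ("10.0.0.7", 3306)]
hits = database_accesses(sample)
# → {"10.0.0.7": {"MS SQL Server", "MySQL"}}
```

The resulting keys are exactly the IP addresses associated with database activity, which is what the SAR question asks for.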
INTRODUCTION
OPERATING SYSTEMS
An operating system is an interface that sits between a user and hardware resources. Basically, it is software that has, among others, the following modules: file management, memory management, process management, input/output control, and peripheral device control. This description applies to all operating systems.
ROLE OF USERS
To appreciate the role of users, it must be recognized that an operating system provides users the services to execute programs in a convenient way. The user interacts with the operating system by asking it to perform each of the following:
· Direct program execution, using threads and parallel programming routines.
· I/O operation requests, by writing to external devices and reading from them.
· File system manipulation, by creating directories.
· Communication requests, by stopping some running processes or issuing interrupts and signals.
· Program verification, via error detection and the flagging of errors, especially by the parsers, debuggers, and compilers that are part of the operating system.
· Protection, especially by using locks on drives or documents so that unauthorized personnel cannot access the resources for malicious use.
· Resource allocation, where a programmer uses system calls for common routines instead of programming his own; these routines operate in critical regions to avoid deadlock and starvation.
Furthermore, users get interactivity with operating systems, defined as the ability of users to interact with a computer through each of the following: providing an interface to interact with the system; secondly,
managing input devices that take input from the user, such as the keyboard, and managing outputs to the user. Multitasking is another feature that allows users to interact with the system. Multitasking is enabled by threads and symmetric multiprocessing (SMP). The role of schedulers cannot be ignored in multitasking, and the operating system handles multitasking by handling operations within a time slice.
KERNEL AND OPERATING SYSTEM APPLICATIONS
The kernel is the core of the operating system and executes privileged instructions for the management of the following resources:
· Memory management: Which algorithms avoid memory leaks and buffer overflow attacks, and allow shadow paging and the efficient use of virtual pages and pointers?
· Processor management: Which strategies, when applied, reduce idle time and avoid starvation and deadlock over processor time for competing resources?
· Device management: Which techniques reduce disk access times, seek times, and read/write times for internal hard disk drives and external storage? How do routines respond to beeps, alarms, keypresses, Enter, and other keyboard commands?
· File management: Which structures are most efficient for a file system to minimize search times and increase performance, even in distributed environments?
· Security regards and concerns: How can passwords and other techniques be integrated to reduce overall system vulnerability and protect data, information, and resources?
· Control of system performance: How does the OS regulate delays between a request for a service and the response from the requested entity?
· Job and process accounting: How do we know the times and resources each user wanted, so as to form models that can be used for learning and estimating resource requirements?
· Coordination among software as well as among users: How does communication take place between compilers, assemblers,
debuggers, linkers, loaders, and interpreters about which component is using which resource, and between system users?
TYPES OF OPERATING SYSTEMS
· Batch operating system: There is no direct interaction between the user and the computer; the user prepares his job on punch cards and gives it to a computer operator, much like calling a customer care centre nowadays. To increase throughput, jobs with similar processing requirements are prepared in batches and run at one time. This was the initial generation of computing systems.
· Time-sharing operating systems: This second generation of OS, found mostly in Unix/Linux, allows many people located at various terminals to use a particular computer at the same time. The processor's time is shared among multiple users simultaneously, hence the term time-sharing. In distributed computing environments, processors are connected and use message-passing systems to communicate; because of conditions such as global starvation and global deadlock, an additional layer of software called middleware is used, and the use of cohort and election algorithms is justified.
· Real-time systems: A real-time system is a data processing system in which the time interval required to process and respond to a particular input is so small that the system controls the rest of the environment. These systems are called real time because an input is processed against the clock and a response is given within a restricted time. Systems such as temperature regulation and air traffic control are real time, as work proceeds according to the clock.
VULNERABILITIES
A vulnerability is a weakness in the design or operation of a system that can be used by a threat or threat agent, with the consequence of causing undesirable impact, especially in the areas of computer information confidentiality, data and information integrity, or service availability. This could also be
a phenomenon created by errors in configuration, which leave your current network, or hosts on your computer network, exposed to malicious attacks from either local or remote users. Vulnerability attacks affect the following important subsystems in a system:
· Firewalls in a VLAN or WAN (see the DMZ diagram included).
· Application servers in networked system architectures (see the DMZ diagram included).
· Web servers in networked system architectures (see the DMZ diagram included).
· Operating systems, discussed in chapter one.
· Fire suppression systems.
In the diagram below, which includes the architecture I have discussed and investigated, the internet is supplied by an ISP (Internet Service Provider), shown with the icon of a cloud. This creates the WAN (Wide Area Network) component mentioned previously. The firewall separates the back end of the WAN architecture from the front end and is the orange book-on-the-shelf icon depicting a library arrangement. Beneath the firewall is the switch, appearing as a box with slots, for determining which segment of a subnetwork a packet belongs to. Similar symbols indicate such switching and forwarding services. The devices with network addresses appended are the routers, which determine the shortest route on which a packet should be placed to reach its destination. DMZ is an acronym for demilitarized zone in a network. The computers which individuals use are the bottom nodes of the structure. This structure is a non-binary tree, and at each level we can identify the services being performed, such as securing connections and sessions, routing, switching and forwarding, and encapsulation, as well as the encoding and decoding of packet data and packet formats, all per the TCP/IP protocols.
Level 1 of the tree: Connection and connectionless services
integrated with encoding and encapsulation: creating sessions and connections through incorporated transaction monitors.
Level 2 of the tree: Routing services: use of routing algorithms such as OSPF.
Level 3 of the tree: Switching and forwarding services: use of MAC address tables.
Level 4 of the tree: De-encapsulation and decoding services: the presentation layer.
The physical and data link layers are covered by the transport media (cables) in the diagram.
WINDOWS VULNERABILITY
A threat is an adversarial force that can directly cause harm to the availability, integrity, or confidentiality of a computer information system, including all of its subsystems. A threat agent is an element that provides the delivery mechanism for a threat, while an entity that initiates the launch of a threat is referred to as a threat actor. Threat actors are normally motivated by excessive curiosity, monetary gain without work, political leverage, some form of social activism, or revenge.
Windows vulnerabilities summarized:
· Audit compromise: Reported in Windows 8; internal procedures were not followed in generating audited results, so a compromise was programmed at Bell Labs.
· Logic bomb: Planted at the national institute for Carnegie research on Windows 1998 and discovered by programmers; for source see references.
· Communication failure: Occurred on a hacked distributed air system for the FAA (Federal Aviation Administration) and reported to Microsoft databases; also happened with NASA; for source see references.
· Cyber brute force: Reported in Windows 2000 in Stanford laboratories, with a brute-force attack on an encrypted message. This is trying all possible combinations of passwords; for source see
references.
· Cyber disclosure: Reported in Windows 8, in which Microsoft databases reported complaints about a hacked Department of State immigration service; for source see references.
INTRUSION METHODS IN WINDOWS:
· Stealth port scans: This is an advanced intrusion technique in which port scanning cannot be detected by auditing tools. Normally, detecting intrusion is easy by observing frequent connection attempts in which no data is exchanged. In stealth port scans, the scans are performed at a very low rate, so that it is hard for auditing tools to identify the connection requests or the malicious attempt to intrude into computer systems. Occurred on a hacked distributed military control system for the DOD and reported to Microsoft databases; also happened with NASA.
· CGI attacks: The Common Gateway Interface is an interface between client-side and server-side computing. Cybercriminals who are good programmers can break into computer systems even without the usual login capabilities; they simply bypass them. Reported in Windows XP, in which Microsoft databases reported complaints about a hacked NSA security services department.
· SMB probes: Server Message Block is an application-layer protocol that functions by providing permissions to files, ports, processes, and so on. A probe into SMB can check for shared entities that are available on the system. If a cybercriminal uses an SMB probe, he can detect which files or ports are shared on the system.
· Cyber brute force: Reported in Windows 10 in Carnegie research laboratories, with a brute-force attack on an encrypted message. This is trying all possible combinations of passwords.
LINUX VULNERABILITY ATTACKS
A threat actor might purposefully launch a threat using an agent. A threat actor could, for instance, be a trusted employee who
commits an unintentional human error, such as a trusted employee who clicks on a phishing e-mail that then downloads malware.
· Scavenging: Reported in a variant of UNIX, in which company databases reported complaints about Oracle security files; for source see references.
· Software tampering: Reported in Ubuntu; internal command controls were overtasked to work differently in a publishing firm in the UK; for source see references.
· Theft: Planned at the national institute for Carnegie research; hard disks were stolen by programmers; for source see references.
· Time and state: Occurred on a hacked distributed medical dosage injection system in the Maldives, where a race condition occurred; for source see references.
· Transportation accidents: Reported in Kali Linux at MIT, when some computers in transportation were damaged; for source see references.
LINUX INTRUSION DETECTION:
· OS fingerprinting attempts: In OS fingerprinting, the attacker goes after the OS details of a target computer. The information sought includes the vendor name, underlying OS, OS generation, device type, and the like. Occurred on a hacked distributed airport clearance system in Panama; for source see references.
· Buffer overflows: In buffer overflow attacks, the input provided to a program overruns the buffer's capacity and spills over to overwrite data stored at neighbouring memory locations. Reported in Ubuntu; program routines overflowed buffer memory by design in a cargo clearance firm in France; for source see references.
· Covert port scans: This is an advanced intrusion technique in which port scanning cannot be detected by auditing tools. Normally, detecting intrusion is easy by observing frequent connection attempts in which no data is exchanged. In stealth port scans, the scans are performed at a very low rate, so that it is hard for auditing tools to identify the connection requests or the malicious
attempt to intrude into computer systems. Occurred on a hacked distributed guest control system for the US Embassy in Russia; for source see references.
· CGI attacks: The Common Gateway Interface is an interface between client-side and server-side computing. Cybercriminals who are good programmers can break into computer systems even without the usual login capabilities; they simply bypass them. Reported in a variant of Ubuntu, in which databases reported complaints about hacked credit card systems for payment services; for source see references.
MAC VULNERABILITY ATTACKS
· Equipment failure: Reported in Mac OS, in which Mac's databases reported complaints about iPhone malfunctioning; for source see references.
· Hardware tampering: Reported in Mac tablets; internal design procedures were not followed in manufacturing the Apple devices; for source see references.
· Malicious software: Discovered in a payroll system using a Mac system by programmers in the department of labour; for source see references.
· Phishing attacks: Occurred on a hacked distributed national data services system and reported to the company; done by rogue employees.
· Resource exhaustion: Reported on the Apple iPhone 7, in which databases reported complaints about a hacked department of taxation services; for source see references.
MOBILE DEVICE VULNERABILITY ATTACKS
· Data entry error: Reported on Windows 7 devices, in which Microsoft mobile databases reported complaints about illegal logins for a department of social welfare; for source see references.
· Denial of service: Reported on Windows 8 phones; internal routines were overloaded in MIT's mobile lab; for source see references.
· Earthquake: Hurricanes and earthquakes in China and Japan destroy tablets at home and in the office; for source see references.
· Espionage: Occurred on a hacked facial recognition system for the FBI and reported to Android databases; for source see references.
· Floods: Reported in parts of South America and Central Asia, flooding homes and destroying mobile devices.
CONTROL ANALYSIS
The objective of risk control analysis is to assess the status of management's operational, managerial, and technical capabilities in the security system, which may be either preventive or detective in nature. This assessment is required in order to gauge the likelihood that the identified vulnerabilities could be exploited and the impact that such exploitation could have on the organization's security strength. The impact of a threat event can be described in terms of the mission impact that occurs because of data loss or compromise. Mission impact is degraded if data integrity, data availability, or data confidentiality is compromised.
RISK LEVEL DETERMINATION
The guidelines in NIST SP 800-30 stipulate risk ratings as a mathematical function of likelihood and impact. These values are calculated in the table drawn below, using a probability scale with decimal values.
RISK MITIGATION
When the risks have all been identified and risk levels determined, recommendations or countermeasures are drawn up to mitigate or eliminate the risks. The goal is to reduce risk to a level considered acceptable by management before system accreditation can be granted. The countermeasures draw their arguments from the following authoritative sources:
· The effectiveness of the recommended options, such as system compatibility.
· Legislation and regulations in place.
· The strength of organization policy.
· Overall operational impact.
· Safety and reliability of the system in consideration.
A sample table to be filled in for the mitigation plan is outlined below.
ACCEPTABLE RISKS
According to the observations listed in the discussion section of this research paper, 11 vulnerabilities were regarded as having low risk ratings, 15 as having moderate risk ratings, and 7 as having high risk ratings. These observations lead us to conclude that the overall level of risk for the organization is moderate. Among the 33 total vulnerabilities identified, 49% are considered unacceptable, because serious harm could result, with the consequence of affecting the operations of the organization. Therefore, immediate mandatory countermeasures need to be implemented to mitigate the risk brought about by these threats, and resources should be made available to reduce the risk level to an acceptable level. Of the identified vulnerabilities, 51% are considered acceptable to the system, because only minor problems may result from these risks; recommended countermeasures have also been provided to be implemented so as to reduce or eliminate the risks.
PURPOSE
The purpose of this security assessment report is to evaluate the adequacy of the Amazon corporation's security. This security assessment report provides a structured but qualitative assessment of the operational environment for the Amazon corporation. It addresses issues of vulnerabilities, threat analysis, firewalls and encryption, risk remediation, and safeguards applied in the Amazon corporation. The report and the assessment recommend the use of cost-effective safeguards in order to mitigate threats as well as the associated exploitable vulnerabilities in the Amazon corporation.
ORGANISATION
The Amazon corporation's system environment is run as a distributed client-and-server environment consisting of a Microsoft Structured Query Language (SQL) database built with powerful programming code. The Amazon corporation's system contains SQL data files, Python application code, and executable Java and JavaScript. The SQL production data files, documented as consisting of SQL stored procedures and SQL tables, reside on a cloud storage area network attached to an HP server running the Windows XP operating system and MS SQL 2000. The Python application code resides on a different IBM server running Kali Linux. Both servers are housed in the main office Building 233 Data Center at the CDC Clifton Road campus in Atlanta, GA. The Amazon corporation's executables reside on a file server running Windows 2000 and Kali Linux, or occasionally a local workstation is used, depending on load and job requirements.
SCOPE
The scope of this project includes carrying out the nine steps outlined in the project, namely:
· Conducting independent research on the organization's structure and identifying the boundaries that separate the inner network from the outside network in the Amazon corporation.
· Describing threats to the Amazon corporation, whose background is detailed above.
· A review of the information on TCP/IP, especially identifying security in communications and data transport in the Amazon corporation.
· An analysis of the strength of passwords used by employees in the Amazon corporation.
· Determining the role of firewalls and encryption, as well as auditing, as they affect integrity, confidentiality, and availability in the Amazon corporation.
· Determining the various threats known to Amazon's network architecture and IT assets.
· Providing an analysis of the scans and recommendations for remediation, if needed, in Amazon, as well as designing and formulating the steps of an incident response that should be in place in the Amazon corporation.
METHODOLOGY
In order to successfully carry out the security assessment, I used the resources provided in the CLAB 699 Cyber Computing Lab workspace, which is a collection of automated security scan tools and network analyzers. The use of an automated methodology proved efficient for the project, since dialog boxes communicate with the experimenter and navigation tools aid in browsing the various categories of services that can be automated. These tools run various scans and are designed to scan and report on the appropriateness of the following security concerns, especially in Amazon: security of software and hardware components; security of firmware components; and security of governance policies and the effectiveness and implementation of security controls. The automated monitoring and assessment of the computer infrastructure and its components, policies, and processes should also account for changes made to the security system. In using the automated tools in the CLAB 699 Cyber Computing Lab, the following security vulnerabilities were looked for using each of the MBSA, OpenVAS, Nmap, or NESSUS tools in the CLAB 699 Cyber Computing Lab.
SECURITY AREAS LEADING TO DISCOVERY OF VULNERABILITIES BEING ADDRESSED WHILE SCANNING WINDOWS USING MBSA AND LINUX USING OpenVAS
· Whether Amazon has ever been compromised internally or externally, as in CLAB 699 Cyber Computing Lab screenshots 3.
· Listing of all IP address blocks and domain names registered to
Amazon, as in CLAB 699 Cyber Computing Lab screenshots 3.
· Whether Amazon uses a local firewall, a local intrusion detection system, and a local intrusion prevention system, as in CLAB 699 Cyber Computing Lab screenshots 2 and 3.
· Whether Amazon uses a host-based or network-based IDS, or a combination of both, as in CLAB 699 Cyber Computing Lab screenshots 2.
· Whether Amazon uses DMZ networks, and whether it uses remote access services and of what type, as in CLAB 699 Cyber Computing Lab screenshots 3.
· Whether Amazon uses site-to-site virtual private network tunnels and modems for connections.
· What antivirus application is used, and whether it is standalone or a managed client/server architecture, as in CLAB 699 Cyber Computing Lab screenshots 2.
SECURITY AREAS LEADING TO DISCOVERY OF VULNERABILITIES BEING ADDRESSED WHILE SCANNING THE NETWORK USING WIRESHARK
· What services does Amazon expose to the internet, such as web, database, FTP, or SSH?
· What type of authentication is used for web services, e.g. pubcookies, Windows integrated, or HTTP access?
DATA
The following are the results generated using Wireshark, as in the CLAB 699 Cyber Computing Lab. The above vulnerabilities are investigated, and any suspicious activity in the network is flagged accordingly, as highlighted in screenshots 3. The vulnerabilities being investigated in this project include the following: remotely exploitable vulnerabilities; patch-level vulnerabilities; unnecessary services; weakness of encryption; and weakness of authentication.
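One way to triage the weakness-of-authentication finding is a heuristic password-strength score. The sketch below is illustrative only: the scoring rules and the weak/strong cutoff are my own assumptions, not Amazon policy, and a real audit would rely on crackers such as ophcrack or Cain/Abel as described in the Methodology.

```python
import re

def password_score(pw):
    """Heuristic 0-5 score: one point for length >= 12 and one for
    each character class present. Thresholds are assumptions."""
    score = 1 if len(pw) >= 12 else 0
    for cls in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"):
        if re.search(cls, pw):
            score += 1
    return score

def is_weak(pw):
    # Flag anything scoring 3 or less as weak (assumed cutoff)
    return password_score(pw) <= 3
```

For example, `is_weak("password")` is True (short, one character class), while a long passphrase mixing all four classes scores 5 and passes.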
From the screenshots obtained from screenshots 3, the analyst can answer the questions of whether patching is up to date, whether unnecessary services are running, and whether antivirus and malware protection are up to date. A look at the data in Amazon clearly indicates that the vulnerabilities include, but are not limited to, the following: remotely exploitable vulnerabilities; patch-level vulnerabilities; unnecessary services; weakness of encryption; weakness of authentication; access rights vulnerabilities; and vulnerabilities from failing to use best practices. Consequently, the following kinds of threats were identified as likely to have been carried out in the Amazon corporation's network infrastructure:
· IP address spoofing/cache poisoning attacks, related to patch-level vulnerabilities.
· Denial of service (DoS) attacks, related to remotely exploitable vulnerabilities.
· Packet analysis/sniffing, related to vulnerabilities from failing to use best practices.
· Session hijacking attacks, related to weakness of authentication.
· Distributed denial of service attacks, related to weakness of encryption.
In using an RDBMS, the Amazon corporation has had instances of:
· SQL injection, affecting database transaction mechanisms in the RDBMS.
· Cross-site scripting, affecting database transaction mechanisms in the RDBMS.
· Cross-site request forgery, affecting access control mechanisms in the RDBMS.
· Improper data sanitization, affecting access control mechanisms in the RDBMS.
· Buffer overflow attacks, affecting firewall log file mechanisms.
RESULTS
When the results of Amazon's security assessment report were provided to the remediation committee, the committee's
initial activity was to review and prioritize each weakness identified for Amazon Corporation. The committee then identified and disclosed the vulnerabilities that did apply to Amazon Corporation as it is configured and hosted by the Cisco infrastructure. The justification for disclosing these findings is documented in the system's remediation spreadsheet. With the support of the system stakeholders, the database and system administrators, and the application support staff in Amazon, a Corrective Action Plan (CAP) was developed to identify findings that could be corrected immediately and to schedule the remaining findings with an appropriate Plan of Action and Milestones (POA&M) designed for Amazon Corporation. There are three stages of remediation:
Stage 1: Comprehensive analysis of the reported weaknesses
Stage 2: Generation of a comprehensive Corrective Action Plan to address each weakness
Stage 3: Disclosure of findings and creation of a POA&M for remaining items of risk
High-risk findings were identified for immediate correction. Immediate correction of moderate-risk findings is preferred by stakeholders, but near-term scheduling was accepted where funding or complex implementation issues arose. Because of the minimal impact of low-risk vulnerabilities, the remediation team prioritized each of these vulnerabilities in coordination with the appropriate stakeholders and identified and scheduled remediation activities as part of routine system maintenance. For those findings in this Amazon report that required a more complex corrective action, the stakeholders and system administrators collaborated in the development of a CAP and POA&M. The Amazon system's RAR and SSP were updated to reflect the weaknesses that were remediated prior to the presentation of the final Certification and Accreditation package to the accrediting authority. The remediation team continues to track and report on the status of correction activities for the POA&M
items on a periodic basis after the presentation of the final C&A package to the AO.
FINDINGS
The following is an action plan adopted by Amazon Corporation, because nonconforming controls were identified during the project at Amazon. As technology continues to change, the specific nonconforming controls at Amazon need to be reevaluated, as detailed below in the conclusions section of this report. False positives must be identified within the SAR, along with the supporting clean scan report, and do not have to be identified in the POA&M. Each item on the proposed POA&M should have a unique identifier matched with a finding in the SAR. All high- and critical-risk findings must be remediated prior to receiving a provisional authorization. Moderate findings shall have a mitigation date within 90 days of the provisional authorization date.
CONCLUSIONS AND RECOMMENDATIONS
This Risk Assessment Report for the organization identifies risks to operations, especially in those domains which fail to meet the minimum requirements and for which appropriate countermeasures have yet to be implemented. The RAR also determines the probability of occurrence and issues countermeasures aimed at mitigating the identified risks, in an endeavor to provide an appropriate level of protection and to satisfy all the minimum requirements imposed by the organization's policy document. The system security policy requirements are now satisfied, with the exception of those specific areas identified in this report. The countermeasures recommended in this report add to the
additional security controls needed to meet policies and to effectively manage the security risk to the organization and its operating environment. Finally, the Certification Official and the Authorizing Officials must determine whether the totality of the protection mechanisms approximates a sufficient level of security and is adequate for the protection of this system and its resources and information. The Risk Assessment Results supply critical information and should be carefully reviewed by the AO prior to making a final accreditation decision.
References
National Institute of Standards and Technology (NIST). (2014). Assessing security and privacy controls in federal information systems and organizations. NIST Special Publication 800-53A Revision 4. Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53Ar4.pdf
National Institute of Standards and Technology (NIST). (2010). Guide for applying the risk management framework to federal information systems. NIST Special Publication 800-37 Revision 1. Retrieved from http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf
Vetting the Security of Mobile Applications by Steve Quirolgico, Jeffrey Voas, Tom Karygiannis, Christoph Michael, and Karen Scarfone comprises public domain material from the National Institute of Standards and Technology, U.S. Department of Commerce.
Vetting the Security of Mobile Applications
Steve Quirolgico, Jeffrey Voas, Tom Karygiannis
Computer Security Division, Information Technology Laboratory
Christoph Michael
Leidos, Reston, VA
Karen Scarfone
Scarfone Cybersecurity, Clifton, VA
This publication is available free of charge from: http://dx.doi.org/10.6028/NIST.SP.800-163
http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-163.pdf
COMPUTER SECURITY
January 2015
U.S. Department of Commerce
Penny Pritzker, Secretary
National Institute of Standards and Technology
Willie May, Acting Under Secretary of Commerce for Standards and Technology and Acting Director
Authority
This publication has been developed by NIST in accordance with its statutory responsibilities under the Federal Information Security Management Act of 2002 (FISMA), 44 U.S.C. § 3541 et seq., Public Law 107-347. NIST is responsible for developing information security standards and guidelines, including minimum requirements for federal information systems, but such standards and guidelines shall not apply to national security systems without the express approval of appropriate federal officials exercising policy authority over such systems. This guideline is consistent with the requirements of the Office of Management and Budget (OMB) Circular A-130, Section 8b(3), Securing Agency Information Systems, as analyzed in Circular A-130, Appendix IV: Analysis of Key Sections. Supplemental information is
provided in Circular A-130, Appendix III, Security of Federal Automated Information Resources. Nothing in this publication should be taken to contradict the standards and guidelines made mandatory and binding on federal agencies by the Secretary of Commerce under statutory authority. Nor should these guidelines be interpreted as altering or superseding the existing authorities of the Secretary of Commerce, Director of the OMB, or any other federal official. This publication may be used by nongovernmental organizations on a voluntary basis and is not subject to copyright in the United States. Attribution would, however, be appreciated by NIST.
National Institute of Standards and Technology Special Publication 800-163
Natl. Inst. Stand. Technol. Spec. Publ. 800-163, 44 pages (January 2015)
CODEN: NSPUE2
This publication is available free of charge from: http://dx.doi.org/10.6028/NIST.SP.800-163
Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the
entities, materials, or equipment are necessarily the best available for the purpose. There may be references in this publication to other publications currently under development by NIST in accordance with its assigned statutory responsibilities. The information in this publication, including concepts and methodologies, may be used by federal agencies even before the completion of such companion publications. Thus, until each publication is completed, current requirements, guidelines, and procedures, where they exist, remain operative. For planning and transition purposes, federal agencies may wish to closely follow the development of these new publications by NIST. Organizations are encouraged to review all draft publications during public comment periods and provide feedback to NIST. All NIST Computer Security Division publications, other than the ones noted above, are available at http://csrc.nist.gov/publications.
Comments on this publication may be submitted to: National Institute of Standards and Technology, Attn: Computer Security Division, Information Technology Laboratory, 100 Bureau Drive (Mail Stop 8930), Gaithersburg, MD 20899-8930, [email protected]
Reports on Computer Systems Technology
The Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST) promotes the U.S. economy and public welfare by providing technical leadership for the Nation's measurement and standards infrastructure. ITL develops tests, test methods, reference data, proof of concept implementations, and technical analyses to advance the development and productive use of information technology. ITL's responsibilities include the development of management, administrative, technical, and physical standards and guidelines for the cost-effective security and privacy of other than national security-related information in federal information systems. The Special Publication 800-series reports on ITL's research, guidelines, and outreach efforts in information system security, and its collaborative activities with industry, government, and academic organizations.
Abstract
The purpose of this document is to help organizations (1) understand the process for vetting the security of mobile applications, (2) plan for the implementation of an app vetting process, (3) develop app security requirements, (4) understand the types of app vulnerabilities and the testing methods used to detect those vulnerabilities, and (5) determine if an app is acceptable for deployment on the organization's mobile devices.
Keywords
app vetting; apps; malware; mobile devices; security requirements; security vetting; smartphones; software assurance; software security; software testing; software vetting
Acknowledgments
Special thanks to the National Security Agency's Center for Assured Software for providing Appendices B and C, which describe Android and iOS app vulnerabilities. Also, thanks to all the organizations and individuals who made contributions to the publication, including Robert Martin of The MITRE Corporation, Paul Black and Irena Bojanova of NIST, the Department of Homeland Security (DHS), and the Department of Justice (DOJ). Finally, special thanks to the Defense Advanced Research Projects Agency (DARPA) Transformative Applications (TransApps) program for funding NIST research in mobile app security vetting.
Trademarks
All registered trademarks belong to their respective organizations.
Table of Contents
1 Introduction
1.1 Traditional vs. Mobile Application Security Issues
1.2 App Vetting Basics
1.3 Purpose and Scope
1.4 Audience
1.5 Document Structure
2 App Vetting Process
2.1 App Testing
2.2 App Approval/Rejection
2.3 Planning
2.3.1 Developing Security Requirements
2.3.2 Understanding Vetting Limitations
2.3.3 Budget and Staffing
3 App Testing
3.1 General Requirements
3.1.1 Enabling Authorized Functionality
3.1.2 Preventing Unauthorized Functionality
3.1.3 Limiting Permissions
3.1.4 Protecting Sensitive Data
3.1.5 Securing App Code Dependencies
3.1.6 Testing App Updates
3.2 Testing Approaches
3.2.1 Correctness Testing
3.2.2 Source Code Versus Binary Code
3.2.3 Static Versus Dynamic Analysis
3.2.4 Automated Tools
3.3 Sharing Results
4 App Approval/Rejection
4.1 Report and Risk Auditing
4.2 Organization-Specific Vetting Criteria
4.3 Final Approval/Rejection
Appendix A— Recommendations
A.1 Planning
A.2 App Testing
A.3 App Approval/Rejection
Appendix B— Android App Vulnerability Types
Appendix C— iOS App Vulnerability Types
Appendix D— Glossary
Appendix E— Acronyms and Abbreviations
Appendix F— References
List of Figures and Tables
Figure 1. An app vetting process and its related actors.
Figure 2. Organizational app security requirements.
Table 1. Android Vulnerabilities, A Level
Table 2. Android Vulnerabilities by Level
Table 3. iOS Vulnerability Descriptions, A Level
Table 4. iOS Vulnerabilities by Level
Executive Summary
Recently, organizations have begun to deploy mobile applications (or apps) to facilitate their business processes. Such apps have increased productivity by providing an unprecedented level of connectivity between employees, vendors, and customers, real-time information sharing, unrestricted mobility, and improved functionality. Despite the benefits of mobile apps, however, the use of apps can potentially lead to serious security issues. This is so because, like traditional enterprise applications, apps may contain software vulnerabilities that are susceptible to attack. Such vulnerabilities may be exploited by an attacker to gain unauthorized access to an organization's information technology resources or the user's personal data. To help mitigate the risks associated with app vulnerabilities, organizations should develop security requirements that specify, for example, how data used by an app should be secured, the environment in which an app will be deployed, and the acceptable level of risk
for an app. To help ensure that an app conforms to such requirements, a process for evaluating the security of apps should be performed. We refer to this process as an app vetting process. An app vetting process is a sequence of activities that aims to determine if an app conforms to an organization's security requirements. An app vetting process comprises two main activities: app testing and app approval/rejection. The app testing activity involves the testing of an app for software vulnerabilities by services, tools, and humans to derive vulnerability reports and risk assessments. The app approval/rejection activity involves the evaluation of these reports and risk assessments, along with additional criteria, to determine the app's conformance with organizational security requirements and, ultimately, the approval or rejection of the app for deployment on the organization's mobile devices. Before using an app vetting process, an organization must first plan for its implementation. To facilitate the planning of an app vetting process implementation, this document:
• Describes the context-sensitive requirements that make up an organization's app security requirements.
• Provides a questionnaire for identifying the security needs and expectations of an organization that are necessary to develop the organization's app security requirements.
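The approval/rejection activity described above can be thought of as a decision rule over analyzer findings. The following is a minimal sketch, assuming a finding is simply a (vulnerability, severity) pair and that the organization's risk tolerance is expressed as a maximum acceptable severity; both are illustrative assumptions, not constructs defined by NIST SP 800-163.

```python
# Hedged sketch of an approval/rejection decision over vetting results.
# The severity scale and threshold below are illustrative assumptions.
SEVERITY = {"low": 1, "moderate": 2, "high": 3}

def approve(findings, max_acceptable="low"):
    """Approve the app only if every finding is at or below the threshold."""
    limit = SEVERITY[max_acceptable]
    return all(SEVERITY[sev] <= limit for _vuln, sev in findings)

reports = [("weak TLS configuration", "moderate"), ("verbose logging", "low")]
print(approve(reports))                           # moderate finding exceeds a "low" threshold
print(approve(reports, max_acceptable="moderate"))
```

In a real process the threshold would come from the organization's documented security requirements, and the decision would also weigh the additional criteria mentioned above.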
With respect to the app testing activity of an app vetting process, this document describes:
• The handling of app files during testing.
• General app security requirements, including enabling authorized functionality, preventing unauthorized functionality, protecting sensitive data, and securing app code dependencies.
• Testing approaches, including source code vs. binary code, static vs. dynamic analysis, and automated tools for testing apps.
• The sharing of test results.
With respect to the app approval/rejection activity of an app vetting process, this document describes:
• Organization-specific vetting criteria that may be used to help determine an app's conformance to context-sensitive security requirements.
1 Introduction
When deploying a new technology, an organization should be aware of the potential security impact it may have on the organization's IT resources, data, and users. While new technologies may offer the promise of productivity gains and new capabilities, they may also present new risks. Thus, it is important for an organization's IT professionals and users to be fully aware of these risks and either develop plans to mitigate them or accept their consequences. Recently, there has been a paradigm shift where organizations have begun to deploy new mobile technologies to facilitate their business processes. Such technologies have increased productivity by providing (1) an unprecedented level of connectivity between employees, vendors, and customers; (2) real-time information sharing; (3) unrestricted mobility; and (4) improved functionality. These mobile technologies comprise mobile devices (e.g., smartphones and tablets) and related mobile applications (or apps) that provide mission-specific capabilities needed by users to perform their duties within the organization (e.g., sales, distribution, and marketing). Despite the benefits of mobile apps, however, the use of apps can potentially lead to serious security risks. This is so because, like traditional enterprise applications, apps may contain software vulnerabilities that are susceptible to attack. Such vulnerabilities may be exploited by an attacker to steal information or control a user's device.
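One vulnerability class this report has already flagged (SQL injection, in the RDBMS findings earlier) illustrates the kind of exploitable flaw described above. The sketch below shows the standard mitigation, parameterized queries, using an in-memory SQLite database; the schema, data, and payload are purely illustrative and not taken from any system discussed here.

```python
# Illustrative sketch: parameterized queries neutralize SQL injection.
# The schema, data, and payload below are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "alice' OR '1'='1"  # classic injection attempt

# Unsafe pattern (shown as a comment only): string interpolation lets
# the payload rewrite the query and match every row.
#   conn.execute(f"SELECT role FROM users WHERE name = '{payload}'")

# Safe pattern: the driver binds the payload as a literal value.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (payload,)).fetchall()
print(rows)  # [] -- the payload matches no user name
```

The same binding discipline applies to any parameterized database driver, whether the code runs in a mobile app or in its back-end service.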
To help mitigate the risks associated with software vulnerabilities, organizations should employ software assurance processes. Software assurance refers to "the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner" [1]. The software assurance process includes "the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures" [2]. A number of government and industry legacy software assurance standards exist that are primarily directed at the process for developing applications that require a high level of assurance (e.g., space flight, automotive systems, and critical defense systems).1 Although considerable progress has been made in the past decades in the area of software assurance, and research and development efforts have resulted in a growing market of software assurance tools and services, the state of practice for many today still includes manual activities that are time-consuming, costly, and difficult to quantify and make repeatable. The advent of mobile computing adds new challenges because it does not necessarily support traditional software assurance techniques.
1.1 Traditional vs. Mobile Application Security Issues
The economic model of the rapidly evolving app marketplace challenges the traditional software development process. App developers are attracted by the opportunities to reach a market of millions of
users very quickly. However, such developers may have little experience building quality software that is secure and do not have the budgetary resources or motivation to conduct extensive testing. Rather than performing comprehensive software tests on their code before making it available to the public, developers often release apps that contain functionality flaws and/or security-relevant weaknesses. That can leave an app, the user's device, and the user's network vulnerable to exploitation by attackers. Developers and users of these apps often tolerate buggy, unreliable, and insecure code in exchange for the low cost. In addition, app developers typically update their apps much more frequently than traditional applications.
1 Examples of these software assurance standards include DO-178B, Software Considerations in Airborne Systems and Equipment Certification [3], IEC 61508 Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems [4], and ISO 26262 Road vehicles -- Functional safety [5].
Organizations that once spent considerable resources to develop in-house applications are taking advantage of inexpensive third-party apps and web services to improve their organization's productivity. Increasingly, business processes are conducted on mobile
devices. This contrasts with the traditional information infrastructure where the average employee uses only a handful of applications and web-based enterprise databases. Mobile devices provide access to potentially millions of apps for a user to choose from. This trend challenges the traditional mechanisms of enterprise IT security software where software exists within a tightly controlled environment and is uniform throughout the organization. Another major difference between apps and enterprise applications is that unlike a desktop computing system, far more precise and continuous device location information, physical sensor data, personal health metrics, and pictures and audio about a user can be exposed through apps. Apps can sense and store information including user personally identifiable information (PII). Although many apps are advertised as being free to consumers, the hidden cost of these apps may be selling the user's profile to marketing companies or online advertising agencies. There are also differences between the network capabilities of traditional computers that enterprise applications run on and mobile devices that apps run on. Unlike traditional computers that connect to the network via Ethernet, mobile devices have access to a wide variety of network services including Wireless Fidelity (Wi-Fi), 2G/3G, and 4G/Long Term Evolution (LTE). This is in addition to short-range data connectivity provided by services such as Bluetooth and Near Field Communication (NFC). All of these mechanisms of data transmission are typically available to apps that run on a mobile device and can
be vectors for remote exploits. Further, mobile devices are not physically protected to the same extent as desktop or laptop computers and can therefore allow an attacker to more easily (physically) acquire a lost or stolen device.
1.2 App Vetting Basics
To provide software assurance for apps, organizations should develop security requirements that specify, for example, how data used by an app should be secured, the environment in which an app will be deployed, and the acceptable level of risk for an app (see [6] for more information). To help ensure that an app conforms to such requirements, a process for evaluating the security of apps should be performed. We refer to this process as an app vetting process. An app vetting process is a sequence of activities that aims to determine if an app conforms to the organization's security requirements.2 This process is performed on an app after the app has been developed and released for distribution but prior to its deployment on an organization's mobile device. Thus, an app vetting process is distinguished from software assurance processes that may occur during the software development life cycle of an app. Note that an app vetting process typically involves analysis of an app's compiled, binary representation but can also involve analysis of the app's source code if it is available. The software distribution model that has arisen with mobile computing presents a number of software assurance challenges, but it also creates opportunities. An app
vetting process acknowledges the concept that someone other than the software vendor is entitled to evaluate the software's behavior, allowing organizations to evaluate software in the context of their own security policies, planned use, and risk tolerance. However, distancing a developer from an organization's app vetting process can also make those activities less effective with respect to improving the secure behavior of the app.
2 The app vetting process can also be used to assess other app issues including reliability, performance, and accessibility but is primarily intended to assess security-related issues.
App stores may perform app vetting processes to verify compliance with their own requirements. However, because each app store has its own unique, and not always transparent, requirements and vetting processes, it is necessary to consult current agreements and documentation for a particular app store to assess its practices. Organizations should not assume that an app has been fully vetted and conforms to their security requirements simply because it is available through an official app store. Third-party assessments that carry a moniker of "approved by" or "certified by" without providing details of which tests are performed, what the findings were, or how apps
are scored or rated, do not provide a reliable indication of software assurance. These assessments are also unlikely to take organization-specific requirements and recommendations into account, such as federal-specific cryptography requirements. Although a "one size fits all" approach to app vetting is not plausible, the basic findings from one app vetting effort may be reusable by others. Leveraging another organization's findings for an app should be contemplated to avoid duplicating work and wasting scarce app vetting resources. With appropriate standards for scoping, managing, licensing, and recording the findings from software assurance activities in consistent ways, app vetting results can be looked at collectively, and common problems and solutions may be applicable across the industry or with similar organizations. An app vetting process should be included as part of the organization's overall security strategy. If the organization has a formal process for establishing and documenting security requirements, an app vetting process should be able to import those requirements created by the process. Bug and incident reports created by the organization could also be imported by an app vetting process, as could public advisories concerning vulnerabilities in commercial and open source apps and libraries, and conversely, results from app vetting processes may get stored in the organization's bug tracking system.
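Recording findings "in consistent ways" so that results from different vetting efforts can be pooled, as the paragraph above suggests, amounts to merging records on a shared key. Below is a minimal sketch, assuming findings are keyed by (app, version, vulnerability) tuples; the record format is an illustrative assumption, not a format defined by NIST SP 800-163.

```python
# Sketch: pool app vetting findings from multiple organizations and
# deduplicate them on a shared (app, version, vulnerability) key.
# The record format here is an illustrative assumption.

def merge_findings(*sources):
    """Merge finding lists, keeping one record per unique key."""
    merged = {}
    for source in sources:
        for f in source:
            key = (f["app"], f["version"], f["vulnerability"])
            merged.setdefault(key, f)  # first report of a finding wins
    return list(merged.values())

org_a = [{"app": "ExampleApp", "version": "1.0", "vulnerability": "insecure storage"}]
org_b = [
    {"app": "ExampleApp", "version": "1.0", "vulnerability": "insecure storage"},
    {"app": "ExampleApp", "version": "1.0", "vulnerability": "weak TLS"},
]
print(len(merge_findings(org_a, org_b)))  # 2 unique findings
```

Keying on the app version matters because, as noted earlier, apps are updated frequently; a finding against one version may not apply to the next.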
1.3 Purpose and Scope
The purpose of this document is to help organizations (1) understand the app vetting process, (2) plan for the implementation of an app vetting process, (3) develop app security requirements, (4) understand the types of app vulnerabilities and the testing methods used to detect those vulnerabilities, and (5) determine if an app is acceptable for deployment on the organization's mobile devices. Ultimately, the acceptance of an app depends on the organization's security requirements which, in turn, depend on the environment in which the app is deployed, the context in which it will be used, and the underlying mobile technologies used to run the app. This document highlights those elements that are particularly important to be considered before apps are approved as "fit-for-use." Note that this document does not address the security of the underlying mobile platform and operating system, which are addressed in other publications [6, 7, 8]. While these are important characteristics for organizations to understand and consider in selecting mobile devices, this document is focused on how to vet apps after the choice of platform has been made. Similarly, discussion surrounding the security of web services used to support back-end processing for apps is out of scope for this document.
1.4 Audience
This document is intended for organizations that plan to
implement an app vetting process or leverage existing app vetting results from other organizations. It is also intended for developers that are interested in understanding the types of software vulnerabilities that may arise in their apps during the app’s software development life cycle.

NIST SP 800-163, Vetting the Security of Mobile Applications

1.5 Document Structure

The remainder of this document is organized into the following sections and appendices:
• Section 2 provides an overview of the app vetting process, including recommendations for planning the implementation of an app vetting process.
• Section 3 describes the app testing activity of an app vetting process. Organizations interested in leveraging only existing security reports and risk assessments from other organizations can skip this section.
• Section 4 describes the app approval/rejection activity of an app vetting process.
• Appendix A provides recommendations for the app testing and app approval/rejection activities of an app vetting process, as well as the planning of an app vetting process implementation.
• Appendices B and C describe platform-specific vulnerabilities for apps running on the Android and iOS operating systems, respectively.
• The remaining appendices define selected terms and list the acronyms used in this document.

2 App Vetting Process

An app vetting process comprises a sequence of two main activities: app testing and app approval/rejection. In this section, we provide an overview of these two activities as well as provide
recommendations for planning the implementation of an app vetting process.

2.1 App Testing

An app vetting process begins when an app is submitted by an administrator to one or more analyzers for testing. An administrator is a member of the organization who is responsible for deploying, maintaining, and securing the organization's mobile devices, as well as for ensuring that deployed devices and their installed apps conform to the organization's security requirements. Apps that are submitted by an administrator for testing will typically be acquired from an app store or an app developer, either of which may be internal or external to the organization. Note that the activities surrounding an administrator's acquisition of an app are not part of an app vetting process. An analyzer is a service, tool, or human that tests an app for specific software vulnerabilities and may be internal or external to the organization. When an app is received by an analyzer, the analyzer may perform some preprocessing of the app prior to testing it. Preprocessing may be used to determine the app's suitability for testing and may involve ensuring that the app decompiles correctly, extracting meta-data from the app (e.g., app name and version number), and storing the app in a local database. After an app has been received and preprocessed by an analyzer,
the analyzer then tests the app for the presence of software vulnerabilities. Such testing may include a wide variety of tests, including static and dynamic analyses, and may be performed in an automated or manual fashion. Note that the tests performed by an analyzer are not organization-specific but instead are aimed at identifying software vulnerabilities that may be common across different apps. After testing an app, an analyzer generates a report that identifies detected software vulnerabilities. In addition, the analyzer generates a risk assessment that estimates the likelihood that a detected vulnerability will be exploited and the impact that the detected vulnerability may have on the app or its related device or network. Risk assessments are typically represented as ordinal values indicating the severity of the risk (e.g., low-, moderate-, and high-risk).

2.2 App Approval/Rejection

After the report and risk assessment are generated by an analyzer, they are made available to one or more auditors of the organization. An auditor is a member of the organization who inspects reports and risk assessments from one or more analyzers to ensure that an app meets the security requirements of the organization. The auditor also evaluates additional criteria to determine if the app violates any organization-specific security requirements that could not be ascertained by the analyzers. After evaluating all reports, risk assessments, and additional criteria, the auditor collates this information into a single report and risk assessment and derives a
recommendation for approving or rejecting the app based on the overall security posture of the app. This recommendation is then made available to an approver. An approver is a high-level member of the organization responsible for determining which apps will be deployed on the organization's mobile devices. An approver uses the recommendations provided by one or more auditors that describe the security posture of the app, as well as other non-security-related criteria, to determine the organization's official approval or rejection of an app. If an app is approved by the approver, the administrator is then permitted to deploy the app on the organization's mobile devices. If, however, the app is rejected, the organization will follow specified procedures for identifying a suitable alternative app or rectifying issues with the problematic app. Figure 1 shows an app vetting process and its related actors.

Figure 1. An app vetting process and its related actors.

Note that an app vetting process can be performed manually or through systems that provide semi-automated management of the app testing and app approval/rejection activities [9]. Further note that although app vetting processes may vary among organizations, organizations should strive to implement a process that is repeatable, efficient, consistent, and that limits errors (e.g., false positive and false negative results).

2.3 Planning

Before an organization can implement an app vetting process, the organization must first (1) develop app security requirements, (2) understand the limitations of app vetting, and (3) procure a budget and staff for supporting the app vetting process. Recommendations for planning the implementation of an app vetting process are described in Appendix A.1.

2.3.1 Developing Security Requirements

App security requirements state an organization’s expectations for app security and drive the evaluation process. When possible, tailoring app security assessments to the organization’s unique mission requirements, policies, risk tolerance, and available security countermeasures is more cost-effective in time and resources. Such assessments could minimize the risk of attackers exploiting behaviors not taken into account during vetting, since foreseeing all future insecure behaviors during deployment is unrealistic. Unfortunately, it is not always possible to get
security requirements from app end users; the user base may be too broad, may not be able to state their security requirements concretely enough for testing, or may not have articulated any security requirements or expectations of secure behavior. An organization's app security requirements comprise two types of requirements: general and context-sensitive. A general requirement is an app security requirement that specifies a software characteristic or behavior that an app should exhibit in order to be considered secure. For an app, the satisfaction or violation of a general requirement is determined by analyzers that test the app for software vulnerabilities during the app testing activity of an app vetting process. If an analyzer detects a software vulnerability in an app, the app is considered to be in violation of a general requirement. Examples of general requirements include “Apps must prevent unauthorized functionality” and “Apps must protect sensitive data.” We describe general requirements in Section 3.1. A context-sensitive requirement is an app security requirement that specifies how apps should be used by the organization to ensure the organization's security posture. For an app, the satisfaction or violation of a
context-sensitive requirement is not based on the presence or absence of a software vulnerability and thus cannot be determined by analyzers; it must instead be determined by an auditor who uses organization-specific vetting criteria for the app during the app approval/rejection activity of an app vetting process. Such criteria may include the app's intended set of users or intended deployment environment. If an auditor determines that the organization-specific vetting criteria for an app conflict with a context-sensitive requirement, the app is considered to be in violation of the context-sensitive requirement. Examples of context-sensitive requirements include “Apps that access a network must not be used in a sensitive compartmented information facility (SCIF)” and “Apps that record audio or video must only be used by classified personnel.” We describe organization-specific vetting criteria in Section 4.2. The relationship between the organization's app security requirements and general and context-sensitive requirements is shown in Figure 2. Here, an organization's app security requirements comprise a subset of general requirements and a subset of context-sensitive requirements. The shaded areas represent an organization's app security requirements that are applied to one or more apps. During the app approval/rejection activity of an app vetting process, the auditor reviews reports and risk assessments generated by analyzers to determine an app's satisfaction or violation of general requirements, and evaluates organization-specific vetting criteria to determine the app's satisfaction or violation of context-sensitive requirements.
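The distinction between the two requirement types can be illustrated with a small sketch. All names, vulnerability categories, and criteria fields below are illustrative assumptions, not definitions from this document: general requirements are checked against analyzer findings, while context-sensitive requirements are judged against organization-specific vetting criteria.

```python
# Illustrative sketch only: the requirement names, vulnerability categories,
# and criteria fields are assumptions, not taken from SP 800-163.
from dataclasses import dataclass

# General requirements map to vulnerability categories an analyzer can detect.
GENERAL_REQUIREMENTS = {"pii_leak", "unauthorized_functionality"}

@dataclass
class VettingCriteria:
    accesses_network: bool
    records_audio_or_video: bool
    deployment_environment: str  # e.g., "office" or "SCIF"

def violated_general(analyzer_findings):
    # An app violates a general requirement when an analyzer detects a
    # matching software vulnerability.
    return analyzer_findings & GENERAL_REQUIREMENTS

def violated_context_sensitive(criteria):
    # Context-sensitive requirements depend on how and where the app will be
    # used; they are judged by an auditor, not detected by analyzers.
    violations = []
    if criteria.accesses_network and criteria.deployment_environment == "SCIF":
        violations.append("apps that access a network must not be used in a SCIF")
    return violations
```

The split mirrors the process above: the first check consumes analyzer reports during app testing, while the second consumes auditor-supplied criteria during app approval/rejection.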
Developing app security requirements involves identifying the security needs and expectations of the organization and identifying both general and context-sensitive requirements that address those needs and expectations. For example, if an organization has a need to ensure that apps do not leak PII, the general requirement “Apps must not leak PII” should be defined. If the organization has an additional need to ensure that apps that record audio or video are not used in a SCIF, then the context-sensitive requirement “Apps that record audio or video must not be used in a SCIF” should be used. After deriving general requirements, an organization should explore available analyzers that test for the satisfaction or violation of those requirements. In this case, for example, an analyzer that tests an app for the leakage of PII should be used. Similarly, for context-sensitive requirements, an auditor should identify appropriate organization-specific vetting criteria (e.g., the features or purpose of the app and the intended deployment environment) to determine if the app records audio or video and if the app will be used in a SCIF. The following questionnaire may be used by organizations to help identify their specific security needs in order to develop their app security requirements.
Figure 2. Organizational app security requirements.

• What are the organization's security needs, secure behavior expectations, and/or risk management needs? For example, what assets in the organization must be protected, and what events must be avoided? What is the impact if the assets are compromised or the undesired events occur?
• Under what circumstances should an app not be used?
• From which app store will apps be obtained, if known?
• Is there concern that mobile devices will be used as a springboard to attack those assets?
• What threats is the organization concerned about? For example:
  o What information about personnel would harm the organization as a whole if it were disclosed?
  o Is there a danger of espionage?
  o Is there a danger of malware designed to disrupt operations at critical moments?
  o What kinds of entities might profit from attacking the organization?
• Do mobile devices carried by personnel connect to public providers, a communication infrastructure owned by the organization, or both at different times? How secure is the organization's wireless infrastructure, if it has one?
• Is the app vetting process expected to evaluate other aspects of mobile device security such as the operating system (OS), firmware, hardware, and communications? Is the app vetting process meant only to perform evaluations, or does it play a broader role in the organization’s approach to mobile security? Is the vetting process part of a larger security infrastructure, and if so, what are the other components? Note that only software-related vetting is in the scope of this document, but if the organization expects more than this, the vetting process designers should be aware of it.
• How many apps is the vetting process expected to handle, and how much variation is permissible in the time needed to process individual apps? High volume combined with low funding might necessitate compromises in the quality of the evaluation. Some organizations may believe that a fully automated assessment pipeline provides more risk reduction than it actually does and therefore expect unrealistically low day-to-day expenses.

Organizations should perform a risk analysis [10] to understand and document the potential security risks in order to help identify appropriate security requirements. Some types of vulnerabilities or weaknesses discovered in apps may be mitigated by other security controls included in the enterprise mobile device architecture. With respect to these other controls, organizations should review and document the mobile device hardware and operating system security controls (e.g.,
encrypted file systems) to identify which security and privacy requirements can be addressed by the mobile device itself. In addition, mobile enterprise security technologies such as Mobile Device Management (MDM) solutions should also be reviewed to identify which security requirements can be addressed by these technologies.

2.3.2 Understanding Vetting Limitations

As with any software assurance process, there is no guarantee that even the most thorough vetting process will uncover all potential vulnerabilities. Organizations should be made aware that although app security assessments should generally improve the security posture of the organization, the degree to which they do so may not be easily or immediately ascertained. Organizations should also be made aware of what the vetting process does and does not provide in terms of security. Organizations should also be educated on the value of humans in security assessment processes and should ensure that their app vetting does not rely solely on automated tests. Security analysis is primarily a human-driven process [11, 12]; automated tools by themselves cannot address many of the contextual and nuanced interdependencies that underlie software security. The most obvious reason for this is that fully understanding software behavior is one of the classic impossible problems of computer science [13], and in fact current technology has not even reached the limits of what is theoretically possible. Complex, multifaceted software architectures cannot be fully analyzed by
automated means. A further problem is that current software analysis tools do not inherently understand what software has to do to behave in a secure manner in a particular context. For example, failure to encrypt data transmitted to the cloud may not be a security issue if the transmission is tunneled through a virtual private network (VPN). Even if the security requirements for an app have been correctly predicted and are completely understood, there is no current technology for unambiguously translating human-readable requirements into a form that can be understood by machines. For these reasons, security analysis requires humans (e.g., auditors) in the loop, and by extension the quality of the outcome depends, among other things, on the level of human effort and level of expertise available for an evaluation. Auditors should be familiar with standard processes and best practices for software security assessment (e.g., see [11, 14, 15, 16]). A robust app vetting process should use multiple assessment tools and processes, as well as human interaction, to be successful; reliance on only a single tool, even with human interaction, is a significant risk because of the inherent limitations of each tool.
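The recommendation to combine multiple tools can be supported by simple tooling. The sketch below is a hedged illustration (the report data shapes and risk labels are assumptions): it merges per-analyzer findings, keeps the highest reported risk for each issue, and flags findings seen by only one tool for closer human review.

```python
def collate(reports):
    """Merge per-analyzer findings (a hypothetical shape:
    {analyzer: {issue: risk}}); keep the highest risk level seen and
    flag single-tool findings for closer human review."""
    ORDER = {"low": 0, "moderate": 1, "high": 2}
    merged = {}
    for analyzer, findings in reports.items():
        for issue, risk in findings.items():
            entry = merged.setdefault(issue, {"risk": risk, "seen_by": []})
            if ORDER[risk] > ORDER[entry["risk"]]:
                entry["risk"] = risk
            entry["seen_by"].append(analyzer)
    for entry in merged.values():
        # A finding reported by only one tool may be a false positive or a
        # blind spot of the other tools; either way, a human should look.
        entry["needs_review"] = len(entry["seen_by"]) == 1
    return merged
```

Note that the merge deliberately does not discard single-tool findings; it surfaces them to the auditor, consistent with keeping humans in the loop.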
2.3.3 Budget and Staffing

App software assurance activity costs should be included in project budgets and should not be an afterthought. Such costs may include licensing costs for analyzers and salaries for auditors, approvers, and administrators. Organizations that hire contractors to develop apps should specify that app assessment costs be included as part of the app development process. Note, however, that for apps developed in-house, attempting to implement app assessments solely at the end of the development effort will lead to increased costs and lengthened project timelines. It is strongly recommended to identify potential vulnerabilities or weaknesses during the development process, when they can still be addressed by the original developers. To provide an optimal app vetting process implementation, it is critical for the organization to hire personnel with appropriate expertise. For example, organizations should hire auditors experienced in software security and information assurance, as well as administrators experienced in mobile security.
3 App Testing

In the app testing activity, an app is sent by an administrator to one or more analyzers. After receiving an app, an analyzer assumes control of the app in order to preprocess it prior to testing. This initial part of the app testing activity introduces a number of security-related issues, but such issues pertain more to the security of the app file itself than to the vulnerabilities that might exist in the app. Regardless, such issues should be considered by the organization for ensuring the integrity of the app, protecting intellectual property, and ensuring compliance with licensing and other agreements. Most of these issues are related to unauthorized access to the app. If an app is submitted electronically by an administrator to an analyzer over a network, issues may arise if the transmission is made over unencrypted channels, potentially allowing unauthorized access to the app by an attacker. After an app is received, an analyzer will often store the app on a local file system, database, or repository. If the app is stored on an unsecured machine, unauthorized users could access the app, leading to potential licensing violations (e.g., if an unauthorized user copies a licensed app for personal use) and integrity issues (e.g., if the app is modified or replaced with another app by an attacker). In addition, violations of intellectual property may also arise if unauthorized users access the source code of the app. Even if source code for an app is not provided by an administrator, an analyzer will often be required to decompile the (binary) app prior to testing. In both cases, the source or decompiled code will
often be stored on a machine by the analyzer. If that machine is unsecured, the app's code may be accessible to unauthorized users, potentially allowing a violation of intellectual property. Note that organizations should review an app’s End User License Agreement (EULA) regarding the sending of an app to a third party for analysis, as well as the reverse engineering of the app’s code by the organization or a third party, to ensure that the organization is not in violation of the EULA. If an analyzer is external to the organization, then the responsibility of ensuring the security of the app file falls on the analyzer. Thus, the best that an organization can do to ensure the integrity of apps, the protection of intellectual property, and compliance with licensing agreements is to understand the security implications surrounding the transmission, storage, and processing of an app by the specific analyzer. After an app has been preprocessed by an analyzer, the analyzer will then test the app for software vulnerabilities that violate one or more general app security requirements. In this section, we describe these general requirements and the testing approaches used to detect vulnerabilities that violate them. Descriptions of specific Android and iOS app vulnerabilities are provided in Appendices B and C. In addition, recommendations related to the app testing activity are described in Appendix A.2. For more information on security testing and assessment tools and techniques that may be helpful for performing app testing, see [17].
3.1 General Requirements

A general requirement is an app security requirement that specifies a software characteristic or behavior that an app should exhibit in order to be considered secure. General app security requirements include:
• Enabling authorized functionality: The app must work as described; all buttons, menu items, and other interfaces must work. Error conditions must be handled gracefully, such as when a service or function is unavailable (e.g., disabled, unreachable, etc.).
• Preventing unauthorized functionality: Unauthorized functionality, such as data exfiltration performed by malware, must not be supported.
• Limiting permissions: Apps should have only the minimum permissions necessary and should only grant other applications the necessary permissions.
• Protecting sensitive data: Apps that collect, store, or transmit sensitive data should protect the confidentiality and integrity of this data. This category includes preserving privacy, such as asking permission to use personal information and using it only for authorized purposes.
• Securing app code dependencies: Apps must handle dependencies, such as on libraries, in a reasonable manner and not for malicious reasons.
• Testing app updates: New versions of an app should be tested to identify any new weaknesses.

Well-written security requirements are most useful to auditors when they can easily be translated into specific evaluation activities. The various processes concerning the elicitation, management, and tracking of requirements are collectively known as requirements engineering [18, 19], and this is a relatively mature activity with tool support. Presently, there is no methodology for driving all requirements down to the level where they could be checked completely³ by automatic software analysis while ensuring that the full scope of the original requirements is preserved. Thus, the best we can do may be to document the process in such a way that human reviews can find gaps and concentrate on the types of vulnerabilities, or weaknesses, that fall into those gaps.

3.1.1 Enabling Authorized Functionality
An important part of confirming authorized functionality is testing the mobile user interface (UI) display, which can vary greatly for different device screen sizes and resolutions. The rendering of images or the position of buttons may not be correct due to these differences. If applicable, the UI should be viewed in both portrait and landscape mode. If a page allows for data entry, the virtual keyboard should be invoked to confirm that it displays and works as expected. Analyzers should also test any device physical sensors used by the app, such as the Global Positioning System (GPS), front/back cameras, video, the microphone for voice recognition, accelerometers (or gyroscopes) for motion and orientation sensing, and communication between devices (e.g., phone “bumping”). Telephony functionality encompasses a wide variety of method calls that an app can use if given the proper permissions, including making phone calls, transmitting Short Message Service (SMS) messages, and retrieving unique phone identifier information. Most, if not all, of these telephony events are sensitive, and some of them provide access to PII. Many apps make use of unique phone identifier information in order to keep track of users instead of using a username/password scheme. Another set of sensitive calls can give an app access to the phone number. Many legitimate apps use the phone number of the device as a unique identifier, but this is a bad coding practice and can lead to loss of PII. To make matters worse, many of the carrier companies do not take any caution in protecting this information. Any app that makes use of phone calls or SMS messages and is not clearly stated
in the EULA or app description as intending to use them should be immediate cause for suspicion.

3.1.2 Preventing Unauthorized Functionality

Some apps (as well as their libraries) are malicious and intentionally perform functionality that is not disclosed to the user and that violates most expectations of secure behavior. This undocumented functionality may include exfiltrating confidential information or PII to a third party, defrauding the user by sending premium SMS messages (premium SMS messages are meant to be used to pay for products or services), or tracking users’ locations without their knowledge. Other forms of malicious functionality include injection of fake websites into the victim’s browser in order to collect sensitive information, acting as a starting point for attacks on other devices, and generally disrupting or denying operation. Another example is the use of banner ads that may be presented in a manner that causes the user to unintentionally select ads that may attempt to deceive the user.

³ In the mathematical sense of “completeness,” completeness means that no violations of the requirement will be overlooked.
These types of behaviors support phishing attacks and may not be detected by mobile antivirus or software vulnerability scanners, as they are served dynamically and are not available for inspection prior to the installation of the app. Open source and commercially available malware detection and analysis tools can identify both known and new forms of malware. These tools can be incorporated as part of an organization’s enterprise mobile device management (MDM) solution, the organization’s app store, or the app vetting process. Note that even if these tools have not detected known malware in an app, one should not assume that malicious functionality is not present. It is essentially impossible for a vetting process to guarantee that a piece of software is free from malicious functionality. Mobile devices may have some built-in protections against malware, for example app sandboxing and user approval in order to access subsystems such as the GPS or text messaging. However, sandboxing only makes it more difficult for apps to interfere with one another or with the operating system; it does not prevent many types of malicious functionality. A final category of unauthorized functionality to consider is communication with disreputable websites, domains, servers, etc. If the app is observed to communicate with sites known to harbor malicious advertising, spam, phishing attacks, or other malware, this is a strong indication of unauthorized functionality.
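Checking observed network endpoints against a reputation blocklist, as suggested above, can be sketched as follows. The blocklist contents and the way hosts are observed (e.g., captured during dynamic analysis) are assumptions for illustration:

```python
# Illustrative sketch: compare network endpoints observed during dynamic
# analysis against a reputation blocklist. The blocklist entries and the
# observation mechanism are assumptions, not part of SP 800-163.
BLOCKLIST = {"bad-ads.example.net", "phish.example.org"}

def flag_disreputable(observed_hosts, blocklist=BLOCKLIST):
    # Match exact hosts and any subdomain of a blocklisted domain.
    flagged = set()
    for host in observed_hosts:
        for bad in blocklist:
            if host == bad or host.endswith("." + bad):
                flagged.add(host)
    return flagged
```

A real deployment would draw the blocklist from a maintained threat-intelligence feed rather than a static set, and a non-empty result would be treated as a strong indicator warranting auditor attention, not an automatic rejection.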
3.1.3 Limiting Permissions

Some apps have permissions that are not consistent with EULAs, app permissions, app descriptions, in-program notifications, or other expected behaviors and would not be considered to exhibit secure behavior. An example is a wallpaper app that collects and stores sensitive information, such as passwords or PII, or accesses the camera and microphone. Although these apps might not have malicious intent (they may just be poorly designed), their excessive permissions still expose the user to threats that stem from the mismanagement of sensitive data and the loss or theft of a device. Similarly, some apps have permissions assigned that they don’t actually use. Moreover, users routinely and reflexively grant whatever access permissions a newly installed app requests, if their mobile device OS presents permission requests. It is important to note that identifying apps that have excessive permissions is a manual process. It involves a subjective decision from an auditor that certain permissions are not appropriate and that the behavior of the app is insecure. Excessive permissions can manifest in other ways, including:
• File input/output (I/O) and removable storage: File I/O can be a security risk, especially when the I/O happens on a removable or unencrypted portion of the file system. Regarding the app’s own behavior, file scanning or access to files that are not part of an app’s own directory could be an
indicator of malicious activity or bad coding practice. Files written to external storage, such as a removable Secure Digital (SD) card, may be readable and writeable by other apps that may have been granted different permissions, thus placing data written to unprotected storage at risk.
• Use of lower-level command line programs, which may allow access to low-level structures, such as the root directory, or may allow access to sensitive commands. These programs potentially allow a malicious app access to various system resources and information (e.g., finding out the running processes on a device). Although the mobile operating system typically offers protection against directly accessing resources beyond what is available to the user account that the app is running under, this opens up the potential for privilege elevation attacks.
• API calls: Analyzers should verify that an app only uses designated APIs from the vendor-provided software development kit (SDK)
and uses them properly; no other API calls are permitted.⁴ For the permitted APIs, the analyzer should note where the APIs are transferring data to and from the app.

3.1.4 Protecting Sensitive Data

Many apps collect, store, and/or transmit sensitive data, such as financial data (e.g., credit card numbers), personal data (e.g., social security numbers), and login credentials (e.g., passwords). When implemented and used properly, cryptography can help maintain the confidentiality and integrity of this data, even if a protected mobile device is lost or stolen and falls into an attacker’s hands. However, only certain cryptographic implementations have been shown to be sufficiently strong, and developers who create their own implementations typically expose their apps to exploitation through man-in-the-middle attacks and other forms of attack. Federal Information Processing Standard (FIPS) 140-2 precludes the use of unvalidated cryptography for the cryptographic protection of sensitive data within federal systems. Unvalidated cryptography is viewed by NIST as providing no protection to the data. A list of FIPS 140-2-validated cryptographic modules and an explanation of the applicability of the Cryptographic Module Validation Program (CMVP) can be found on the CMVP website [20]. Analyzers should reference the CMVP-provided information to determine if each app is using properly validated cryptographic modules and approved implementations
of those modules. It is not sufficient to use appropriate cryptographic algorithms and modules; cryptography implementations must also be maintained in accordance with approved practices. Cryptographic key management is often performed improperly, leading to exploitable weaknesses. For example, the presence of hard-coded cryptographic keying material, such as keys, initialization vectors, etc., is an indication that cryptography has not been properly implemented by an app and might be exploited by an attacker to compromise data or network resources. Guidelines for proper key management techniques can be found in [21]. Another example of a common cryptographic implementation problem is the failure to properly validate digital certificates, leaving communications protected by these certificates subject to man-in-the-middle attacks. Privacy considerations need to be taken into account for apps that handle PII, including mobile-specific personal information like location data and pictures taken by onboard cameras, as well as the broadcast ID of the device. This needs to be dealt with in the UI as well as in the portions of the apps that manipulate this data. For example, an analyzer can verify that the app complies with privacy standards by masking characters of any sensitive data within the page display, but audit logs should also be reviewed when possible for appropriate handling of this type of information. Another important privacy consideration is that sensitive data should not be disclosed without prior notification to the user by a
prompt or a license agreement. Another concern with protecting sensitive data is data leakage. For example, data is often leaked through unauthorized network connections. Apps can use network connections for legitimate reasons, such as retrieving content, including configuration files, from a variety of external systems. Apps can receive and process information from external sources and also send information out, potentially exfiltrating data from the mobile device without the user's knowledge. Analyzers should consider not only cellular and Wi-Fi usage, but also Bluetooth, NFC, and other forms of networking. Another common source of data leakage is shared system-level logs, where multiple apps log their security events to a single log file. Some of these log entries may contain sensitive information, recorded either inadvertently or maliciously, so that other apps can retrieve it. Analyzers should examine app logs to look for signs of sensitive data

4 The existence of an API raises the possibility of malicious use. Even if the APIs are used properly, they may pose risks because of covert channels, unintended access to other APIs, access to data exceeding original design, and the execution of actions outside of the app's normal operating parameters.

NIST SP 800-163 Vetting the Security of Mobile Applications
leakage.

3.1.5 Securing App Code Dependencies

In general, an app must not use any unsafe coding or building practices. Specifically, an app should properly use other bodies of code, such as libraries, only when needed, and not to obfuscate malicious activity.5 Examples include:

• Native method calls: These are typically calls to a library function that has already been loaded into memory. These methods provide a way for an app to reuse code that was written in a different language. These calls, however, can provide a level of obfuscation that impacts the ability to perform analysis of the app.

• Library and class loading: Third-party libraries and classes can be loaded by the app at run time. Third-party libraries and classes can be closed source, can have self-modifying code, or can execute unknown server-side code. Legitimate uses for loaded libraries and classes might be the use of a cryptographic library or a graphics API. Malicious apps can use library and class loading as a method to avoid detection. From a vetting perspective, libraries and classes are also a concern because they introduce outside code to an app without the direct control of
the developer. Libraries and classes can have their own known vulnerabilities, so an app using such libraries or classes could expose a known vulnerability on an external interface. Tools are available that can list libraries, and these libraries can then be searched for in vulnerability databases to determine whether they have known weaknesses.

• Behavior: When apps execute, they exhibit a variety of dynamic behaviors. Not all of these operating behaviors are a result of user input; executing apps may also receive inputs from data stored on the device. The key point is the need to know where the data used by an app originates and whether and how it gets sanitized. Data downloaded from an external source is particularly dangerous as a potential exploit vector; it should be treated as a potential exploit unless it can be shown that the app is able to thwart any negative consequences from externally supplied data, especially data from sources not trusted by the organization using the app. Demonstrating this is nearly impossible, thus requiring some level of risk-tolerance mitigation.

• Inter-Application Communications: Apps that communicate with each other can provide useful capabilities and productivity improvements, but these inter-application communications can also
present a security risk. For example, on Android platforms, inter-application communications are allowed but regulated by what Android calls "intents." An intent can be used to start an app component in a different app or in the same app that is sending the intent. Intents can also be used to interact with and request resources from the operating system. In general, analyzers should be aware of situations where an app is using nonhuman entities to make API calls to other devices, to communicate with third-party services, or to otherwise interact with other systems.

5 There are both benign and malicious reasons for obfuscating code (e.g., protecting intellectual property and concealing malware, respectively). Code obfuscation makes static analysis difficult, but dynamic analysis can still be effective.

3.1.6 Testing App Updates

To provide long-term assurance of the software throughout its life cycle, all apps, as well as their updates, should go through a software assurance vetting process, because each new version of an app can
introduce new unintentional weaknesses or unreliable code. Apps should be vetted before being released to the community of users within the organization. The purpose is to validate that the app adheres to a predefined acceptable level of security risk and to identify whether the developers have introduced any latent weaknesses that could make the IT infrastructure vulnerable. It is increasingly common for mobile devices to be capable of, and configured for, automatically downloading app updates. Mobile devices are also generally capable of downloading apps of the user's choice from app stores. Ideally, each app and app update should be vetted before it is downloaded onto any of the organization's mobile devices; unrestricted app and app update downloads can bypass the vetting process. There are several options to work around this, depending on the mobile device platform and the degree to which the mobile devices are managed by the organization. Possible options include disabling all automatic updates, permitting use of only an organization-provided app store, preventing the installation of updates through the use of application whitelisting software, and enabling MDM mobile application management features that make app vetting a precondition for allowing users to download apps and app updates. Further discussion of this is out of scope because it is largely addressed operationally as part of OS security practices.

3.2 Testing Approaches
To detect the satisfaction or violation of a general requirement, an analyzer tests an app for the presence of software vulnerabilities. Such testing may involve (1) correctness testing, (2) analysis of the app's source code or binary code, (3) the use of static or dynamic analysis, and (4) manual or automatic testing of the app.

3.2.1 Correctness Testing

One approach for testing an app is software correctness testing [22], the process of executing a program with the intent of finding errors. Although software correctness testing is aimed primarily at improving quality assurance, verifying and validating described functionality, or estimating reliability, it can also help to reveal potential security vulnerabilities, since such vulnerabilities often have a negative effect on the quality, functionality, and reliability of the software. For example, software that crashes or exhibits unexpected behavior is often indicative of a security flaw. One of the advantages of software correctness testing is that it is traditionally based on specifications of the specific software to be tested. These specifications can be transformed into requirements that specify how the software is expected to behave under test. This distinguishes it from security assessment approaches that often require the tester to derive requirements themselves; such requirements are largely based on security requirements considered to be common across many different software artifacts and may not test for vulnerabilities that are unique to the software under test. Nonetheless, because of the tight
coupling between security and the quality, functionality, and reliability of software, it is recommended that software correctness testing be performed when possible.

3.2.2 Source Code Versus Binary Code

A major factor in performing app testing is whether source code is available. Typically, apps downloaded from an app store do not provide access to source code. When source code is available, such as in the case of an open source app, a variety of tools can be used to analyze it. The goals of performing a source code review are to find vulnerabilities in the source code and to verify the results of analyzers. The current practice is that these tasks are performed manually by a secure code reviewer who reads through the contents of source code files. Even with automated aids, the analysis is labor-intensive. Benefits of using automated static analysis tools include introducing consistency between different reviews and making review of large codebases possible. Reviewers should generally use automated static analysis tools whether they are conducting an automated or a manual review, and they should express their findings in terms of Common Weakness Enumeration (CWE) identifiers or some other widely accepted nomenclature. Performing a secure code review requires software development and domain-specific
knowledge in the area of application security. Organizations should ensure that the individuals performing source code reviews have the necessary skills and expertise. Note that organizations that intend to develop apps in-house should also refer to guidance on secure programming techniques and software quality assurance processes to appropriately address the entire software development life cycle [11, 23]. When source code is not available, binary code can be vetted instead. In the context of apps, the term "binary code" can refer to either byte-code or machine code. For example, Android apps are compiled to byte-code that is executed on a virtual machine, similar to the Java Virtual Machine (JVM), but they can also come with custom libraries that are provided in the form of machine code, that is, code executed directly on the mobile device's CPU. Android binary apps include byte-code that can be analyzed without hardware support using emulated and virtual environments.

3.2.3 Static Versus Dynamic Analysis

Analysis tools are often characterized as being either static or dynamic.6 Static analysis examines the app source code and binary code and attempts to reason over all possible behaviors that might arise at runtime. It provides a level of assurance that analysis results are an accurate description of the program's behavior regardless of the input or execution environment. Dynamic analysis operates by executing a program using a set of input use cases and analyzing the program's runtime behavior. In some cases, the
enumeration of input test cases is large, resulting in lengthy processing times. However, methods such as combinatorial testing can reduce the number of dynamic input test case combinations, reducing the amount of time needed to derive analysis results [25]. Still, dynamic analysis is unlikely to provide 100 percent code coverage [26]. Organizations should consider the technical trade-offs between what static and dynamic tools offer and balance their usage given the organization's software assurance goals. Static analysis requires that binary code be reverse engineered when source is not available, which is relatively easy for byte-code7 but can be difficult for machine code. Many commercial static analysis tools already support byte-code, as do a number of open source and academic tools.8 For machine code, it is especially hard to track the flow of control across many functions and to track data flow through

6 For mobile devices, there are analysis tools that label themselves as performing behavioral testing. Behavioral testing (also known as behavioral analysis) is a form of static and dynamic testing that attempts to detect malicious or risky behavior, such as the oft-cited example of a flashlight app that accesses a contact list [24]. This publication assumes that any mention of static or dynamic testing also includes behavioral testing as a subset of its capabilities.

7 The ASM framework [27] is a commonly used framework for byte-code analysis.

8 Such as [28, 29, 30, 31].
variables, since most variables are stored in anonymous memory locations that can be accessed in different ways. The most common way to reverse engineer machine code is to use a disassembler or a decompiler that tries to recover the original source code. These techniques are especially useful if the purpose of reverse engineering is to allow humans to examine the code, since the outputs are in a form that can be understood by humans with appropriate skills. But even the best disassemblers make mistakes [32], and some of those mistakes can be corrected with formal static analysis. If the code is being reverse engineered for static analysis, then it is often preferable to disassemble the machine code directly to a form that the static analyzer understands rather than creating human-readable code as an intermediate byproduct. A static analysis tool that is aimed at machine code is likely to automate this process. For dynamic testing (as opposed to static analysis), the most important requirement is the ability to see the workings of the code as it is being executed. There are two primary ways to obtain this information: first, an executing app can be connected to a remote debugger, and second, the code can be run on an emulator that has debugging capabilities built into it. Running the code on a physical device allows the analyzer to
select the exact characteristics of the device on which the app is intended to be used and can provide a more accurate view of how the app will behave. On the other hand, an emulator provides more control, especially when the emulator is open source and can be modified by the evaluator to capture whatever information is needed. Although emulators can simulate different devices, they do not simulate all of them, and the simulation may not be completely accurate. Note that malware increasingly detects the use of emulators as a testing platform and changes its behavior accordingly to avoid detection. Therefore, it is recommended that analyzers use a combination of emulated and physical mobile devices so as to avoid false negatives from malware that employs anti-detection techniques. Useful information can be gleaned by observing an app's behavior even without knowing the purposes of individual functions. For example, an analyzer can observe how the app interacts with its external resources, recording the services it requests from the operating system and the permissions it exercises. Note that although many of the device capabilities used by an app may be inferred by an analyzer (e.g., access to a device's camera will be required by a camera app), an app may be permitted access to additional device capabilities that are beyond the scope of its described functionality (e.g., a camera app accessing the device's network). Moreover, if the behavior of the app is observed for specific inputs, the evaluator can ask whether the capabilities being exercised make sense in the context of those particular inputs. For example, a calendar app may legitimately have permission to send calendar data across the
network in order to sync across multiple devices, but if the user has merely asked for a list of the day's appointments and the app sends data that is not simply part of the handshaking process needed to retrieve data, the analyzer might investigate what data is being sent and for what purpose.

3.2.4 Automated Tools

In most cases, the analyzers used by an app vetting process will consist of automated tools that test an app for software vulnerabilities and generate a report and risk assessment. Classes of automated tools include:

• Emulators/Simulators: These allow use of a computer to view how the app will display on a specific device without the use of an actual device. Because the tool provides access to the UI, the app features can also be tested. However, interaction with actual device hardware features such as a camera or accelerometer cannot be simulated and requires an actual device.

• Remote Device Access: These tools allow the analyzer to view and access an actual device from a computer. This allows the testing of most device features that do not require physical movement, such as using a video camera or making a phone call. Remote debuggers are a common way of examining and understanding apps.
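The capability-versus-functionality comparison described above (e.g., a camera app that also opens network connections) can be approximated mechanically before a human analyzer digs deeper. The following is a minimal, hedged Python sketch; the category baselines and permission names are illustrative assumptions an organization would define for itself, not part of this publication:

```python
# Sketch: flag permissions an app requests beyond what its declared
# category would normally justify. The baselines below are illustrative
# assumptions, not an authoritative mapping.

EXPECTED_PERMISSIONS = {
    "camera": {"android.permission.CAMERA",
               "android.permission.WRITE_EXTERNAL_STORAGE"},
    "calendar": {"android.permission.READ_CALENDAR",
                 "android.permission.WRITE_CALENDAR",
                 "android.permission.INTERNET"},  # e.g., legitimate syncing
}

def excess_permissions(category: str, requested: set[str]) -> set[str]:
    """Return requested permissions outside the category baseline."""
    baseline = EXPECTED_PERMISSIONS.get(category, set())
    return requested - baseline

# A camera app that also asks for network access is flagged for scrutiny,
# matching the camera-app example in the text.
flagged = excess_permissions(
    "camera",
    {"android.permission.CAMERA", "android.permission.INTERNET"},
)
```

A non-empty result does not prove malice; it directs the analyzer to ask whether the exercised capability makes sense for the app's described functionality and observed inputs.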
• Automated Testing: Basic mobile functionality testing lends itself well to automated testing. Several tools are available that allow creation of automated scripts to remotely run regression test cases on specific devices and operating systems.

o User Interface-Driven Testing: If the expected results being verified are UI-specific, pixel verification can be used. This method takes a "screen shot" of a specified number of pixels from a page of the app and verifies that it displays as expected (pixel by pixel).

o Data-Driven Testing: Data-driven verification uses labels or text to identify the section of the app page to verify. For example, it will verify the presence of a "Back" button, regardless of where on the page it displays. Data-driven automated test cases are less brittle and allow for some page design change without rewriting the script.

o Fuzzing: Fuzzing normally refers to the automated generation
of test inputs, either randomly or based on configuration information describing the data format. Fault injection tools may also be referred to as "fuzzers." This category of test tools is not necessarily orthogonal to the others, but its emphasis is on fast and automatic generation of many test scenarios.

o Network-Level Testing: Fuzzers, penetration test tools, and human-driven network simulations can help determine how the app interacts with the outside world. It may be useful to run the app in a simulated environment during network-level testing so that its response to network events can be observed more closely.

o Scripted Testing: This refers to tools that automate the repetition of tests after they have been engineered by humans. This makes it useful for tests that need to be repeated often (e.g., tests that will be used for many apps or executed many times, with variations, for a single app).

o Static Analysis Tools: These tools generally analyze the behavior of software (either source or
binary code) to find software vulnerabilities, though, if appropriate specifications are available, static analysis can also be used to evaluate correctness. Some static analysis tools operate by looking for syntax embodying a known class of potential vulnerabilities and then analyzing the behavior of the program to determine whether the weaknesses can be exploited. Static tools should encompass not only the app itself but also, to the extent possible, the data it uses; keep in mind that the true behavior of the app may depend critically on external resources.9

o Metrics Tools: These tools measure aspects of software not directly related to software behavior but useful in estimating auxiliary information, such as the effort of code evaluation. Metrics can also give indirect indications about the quality of the design and development process.

Commercial automated app testing tools have overlapping or complementary capabilities. For example, one tool may be based on techniques that find integer overflows [34] reliably while another tool may be better at finding weaknesses related to command injection attacks [35]. Finding the right set of tools to evaluate a requirement can be a challenging task because of the varied capabilities of diverse commercial

9 The Software Assurance Metrics and Tool Evaluation (SAMATE) test suite [33] provides a baseline for evaluating static
analysis tools. Tool vendors may evaluate their own tools with SAMATE, so it is to be expected that, over time, the tools will eventually perform well on precisely the SAMATE-specified tests. Still, the SAMATE test suite can help determine whether a tool does what analyzers thought, and the suite is continually evolving.

and open source tools, and because it can be challenging to determine the capabilities of a given tool. Constraints on time and money may also prevent the evaluator from assembling the best possible evaluation process, especially when the best process would involve extensive human analysis. Tools that provide a high-level risk rating should provide transparency about how the score was derived and which tests were performed. Tools that provide low-level code analysis reports should help analyzers understand how the tool's findings may impact their security. Therefore, it is important to understand and quantify what each tool and process can do, and to use that knowledge to characterize what the complete evaluation process does or does not provide. In most cases, organizations will need to use multiple automated tools to meet their testing requirements. Using multiple tools can provide improved coverage of software vulnerability detection as well as provide
validation of test results for tools that provide overlapping tests (i.e., if two or more tools test for the same vulnerability).

3.3 Sharing Results

The sharing of an organization's findings for an app can greatly reduce the duplication and cost of app vetting efforts for other organizations. Information sharing within the software assurance community is vital and can help analyzers benefit from the collective efforts of security professionals around the world. The National Vulnerability Database (NVD) [36] is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP) [37]. This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names, and impact metrics. SCAP is a suite of specifications that standardize the format and nomenclature by which security software products communicate software flaw and security configuration information. SCAP is a multipurpose protocol that supports automated vulnerability checking, technical control compliance activities, and security measurement. Goals for the development of SCAP include standardizing system security management, promoting interoperability of security products, and fostering the use of standard expressions of security content. The CWE [38] and Common Attack Pattern Enumeration and Classification (CAPEC) [39] collections can provide a useful list of weaknesses and
attack approaches to drive a binary or live system penetration test. Classifying and expressing software vulnerabilities is an ongoing and developing effort in the software assurance community. So is the question of how to prioritize the various weaknesses that can be present in an app [40], so that, given the differences in effectiveness and coverage among available tools and techniques, an organization can know that the weaknesses posing the most danger to the app, given its intended use/mission, are addressed by the vetting activity.

4 App Approval/Rejection

After an app has been tested by an analyzer, the analyzer generates a report that identifies detected software vulnerabilities and a risk assessment that expresses the estimated level of risk associated with using the app. The report and risk assessment are then made available to one or more auditors of the organization. In this section, we describe issues surrounding the auditing of reports and risk assessments by an auditor, the organization-specific vetting criteria used by auditors to determine whether an app meets the organization's context-sensitive requirements, and issues surrounding the final decision by an approver to approve or reject an app. Recommendations related to the app approval/rejection activity are described in
Appendix A.3.

4.1 Report and Risk Auditing

An auditor inspects reports and risk assessments from one or more analyzers to ensure that the app meets the organization's general security requirements. To accomplish this, the auditor must be intimately familiar with the organization's general security requirements as well as with the reports and risk assessments from analyzers. After inspecting analyzers' reports and risk assessments against general app security requirements, the auditor also assesses the app with respect to context-sensitive requirements using organization-specific vetting criteria. After assessing the app's satisfaction or violation of both general and context-sensitive security requirements, the auditor generates a recommendation for approving or rejecting the app for deployment based on the app's security posture and makes this available to the organization's approver. One of the main issues related to report and risk auditing stems from the difficulty of collating and interpreting different reports and risk assessments from multiple analyzers due to the wide variety of security-related definitions, semantics, nomenclature, and metrics. For example, one analyzer may classify the estimated risk of using an app as low, moderate, high, or severe while another classifies estimated risk as pass, warning, or fail. Note that it is in the best interest of analyzers to provide assessments that are relatively intuitive and easy to interpret by
auditors. Otherwise, organizations will likely select other analyzers that generate reports and assessments that their auditors can understand. While some standards exist for expressing risk assessments (e.g., the Common Vulnerability Scoring System [41]) and vulnerability reports (e.g., the Common Vulnerability Reporting Framework [42]), current adoption of these standards by analyzers is low. To address this issue, it is recommended that an organization identify analyzers that leverage vulnerability reporting and risk assessment standards. If this is not possible, it is recommended that the organization provide sufficient training to auditors on both the security requirements of the organization and the interpretation of reports and risk assessments generated by analyzers. The methods used by an auditor to derive a recommendation for approving or rejecting an app are based on a number of factors, including the auditor's confidence in the analyzers' assessments, the perceived level of risk from one or more risk assessments, and the auditor's understanding of the organization's risk threshold, the organization's general and context-sensitive security requirements, and the reports and risk assessments from analyzers. In some cases, an organization will not have any context-sensitive requirements, and thus auditors will evaluate the security posture of the app based solely on reports and risk assessments from analyzers. In such cases, if risk assessments from all analyzers indicate a low security risk associated with using an app, an auditor may adequately justify the approval of the app based on those risk assessments (assuming that the auditor has confidence in the analyzers' assessments).
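One way an auditor can collate the disparate analyzer scales mentioned above (low/moderate/high/severe versus pass/warning/fail) is to map each analyzer's labels onto a common ordinal scale and combine them conservatively. The following Python sketch is illustrative only; the analyzer names and label mappings are assumptions that an organization would calibrate against its own risk threshold:

```python
# Sketch: normalize heterogeneous analyzer risk labels onto a common
# ordinal scale and combine them conservatively (worst case wins).
# Analyzer names and mappings are illustrative assumptions.

COMMON_SCALE = ["low", "moderate", "high", "severe"]

LABEL_MAPPINGS = {
    "analyzer_a": {"low": "low", "moderate": "moderate",
                   "high": "high", "severe": "severe"},
    "analyzer_b": {"pass": "low", "warning": "moderate", "fail": "high"},
}

def combined_risk(findings: dict[str, str]) -> str:
    """findings maps an analyzer name to that analyzer's native label."""
    normalized = [LABEL_MAPPINGS[name][label]
                  for name, label in findings.items()]
    # Conservative combination: the highest normalized risk dominates.
    return max(normalized, key=COMMON_SCALE.index)

overall = combined_risk({"analyzer_a": "moderate", "analyzer_b": "fail"})
```

Taking the maximum is deliberately conservative: a moderate result combined with a fail surfaces as high, prompting the additional auditor evaluation the surrounding text calls for in mixed cases.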
Conversely, if one or more analyzers indicate a high security risk associated with using an app, an auditor may adequately justify the rejection of the app based on those risk assessments. Cases where moderate, but no high, levels of risk are assessed for an app by one or more analyzers will require additional evaluation by the auditor in order to adequately justify a recommendation for approving or rejecting the app. To increase the likelihood of an appropriate recommendation, it is recommended that the organization use multiple auditors.

4.2 Organization-Specific Vetting Criteria

As described in Section 2.3.1, a context-sensitive requirement is an app security requirement that specifies how apps should be used by the organization to ensure the organization's security posture. For an app, the satisfaction or violation of context-sensitive requirements cannot be determined by analyzers but instead must be determined by an auditor using organization-specific vetting criteria. Such criteria include:

• Requirements: Organizational requirements, security
policies, privacy policies, acceptable use policies, and social media guidelines that are applicable to the organization.

• Provenance: Identity of the developer, the developer's organization, the developer's reputation, date received, marketplace/app store consumer reviews, etc.

• Data Sensitivity: The sensitivity of data collected, stored, and/or transmitted by the app.

• App Criticality: The criticality of the app to the organization's business processes.

• Target Hardware: The hardware and configuration on which the app will be deployed.

• Target Environment: The operational environment of the app (e.g., general public use vs. sensitive military environment).

• Digital Signatures: Digital signatures applied to the app binaries or packages.10

• App Documentation:
o User Guide: When available, the app's user guide assists testing by specifying the expected functionality and expected behaviors. It is simply a statement from the developer describing what they claim the app does and how it does it.

o Test Plans: Reviewing the developer's test plans may help focus app vetting by identifying any areas that have not been tested or were tested inadequately. A developer could opt to submit a test oracle in certain situations to demonstrate its internal test effort.

o Testing Results: Code review results and other testing results will indicate which security standards were followed. For example, if an application threat model was created, it should be submitted. It will list weaknesses that were identified and should have been addressed during design and coding of the app.

10 The level of assurance provided by digital signatures varies widely. For example, one organization might have stringent digital signature requirements that provide a high degree of trust, while another organization might allow self-signed certificates, which do not provide any level of trust.
o Service-Level Agreement: If an app was developed for an organization by a third party, a Service-Level Agreement (SLA) may have been included as part of the vendor contract. This contract should require the app to be compatible with the organization's security policy.

Some information can be gleaned from app documentation in some cases, but even when documentation does exist, it might lack technical clarity and/or use jargon specific to the circle of users who would normally purchase the app. Since the documentation for different apps will be structured in different ways, it may also be time-consuming to find the information needed for evaluation. Therefore, a standardized questionnaire might be appropriate for determining the software's purpose and assessing an app developer's efforts to address security weaknesses. Such questionnaires aim to identify software quality issues and security weaknesses by helping developers address questions from end users/adopters about their software development processes. For example, developers can use the Department of Homeland Security (DHS) Custom Software Questionnaire [43] to answer questions such as "Does your software validate inputs from untrusted resources?" and "What threat assumptions were made when designing protections for your software?" Another useful question (not included in the DHS
  • 128.
questionnaire) is "Does your app access a network application programming interface (API)?" Note that such questionnaires are feasible to use only in certain circumstances (e.g., when source code is available and when developers can answer questions).

Known flaws in app design and coding may be reported in publicly accessible vulnerability databases, such as NVD.11 Before conducting the full vetting process for a publicly available app, auditors should check one or more vulnerability databases to determine if there are known flaws in the corresponding version of the app. If one or more serious flaws have already been discovered, this alone might be sufficient grounds to reject that version of the app for organizational use, allowing the rest of the vetting process to be skipped. In most cases, however, such flaws will not be known, and the full vetting process will be needed, because there are many forms of vulnerabilities other than known flaws in app design and coding. Identifying these weaknesses necessitates first defining the app requirements, so that deviations from these requirements can be flagged as weaknesses.

4.3 Final Approval/Rejection

Ultimately, an organization's decision to approve or reject an app for deployment rests with the organization's approver. After an approver receives recommendations from one or more auditors that describe the security posture of the app, the approver will consider this information as well as additional
non-security-related criteria, including budgetary, policy, and mission-specific criteria, to render the organization's official decision to approve or reject the app. After this decision is made, the organization should follow its procedures for handling the approval or rejection of the app. These procedures must be specified by the organization prior to implementing an app vetting process. If an app is approved, procedures must be defined that specify how the approved app should be processed and ultimately deployed onto the organization's devices. For example, a procedure may specify the steps needed to digitally sign an app before it is submitted to the administrator for deployment onto the organization's devices. Similarly, if an app is rejected, a procedure may specify the steps needed to identify an alternative app or to resolve detected vulnerabilities with the app developer. Procedures that define the steps associated with approving or rejecting an app should be included in the organization's app security policies.

11 Vulnerability databases generally reference vulnerabilities by their Common Vulnerabilities and Exposures (CVE) identifier. For more information on CVE, see [44].

Appendix A—Recommendations
This appendix describes recommendations for vetting the security of mobile applications. These recommendations are described within the context of the planning of an app vetting process implementation as well as the app testing and app approval/rejection activities.

A.1 Planning

o … potential security impact of mobile apps on the organization's computing, networking, and data resources.
o … mobile device hardware and operating system security controls (for example, an encrypted file system), and identify which security and privacy requirements can be addressed by the mobile device itself.
o … security technologies, such as Mobile Device Management (MDM) solutions, and identify which security and privacy requirements can be addressed by these technologies.
o … understand what threats are mitigated through the technical and operational controls. Identify potential security and privacy risks that are not mitigated through these technical and operational controls.
o … identifying general and context-sensitive requirements.
o … and the value of human involvement in an app vetting process.
o … process.
o … personnel, particularly auditors, with appropriate expertise.

A.2 App Testing

o … understand the security implications surrounding the integrity, intellectual property, and licensing issues when submitting an app to an analyzer.
o … encrypted channel (e.g., SSL) and that apps are stored on a secure machine at the analyzer's location. In addition, ensure that only authorized users have access to that machine.
o … organization.
o … determining the satisfaction or violation of general app security requirements.
o … that app updates are tested.

A.3 App Approval/Rejection

o … risk assessment methodology, or that provides intuitive and easy-to-interpret reports and risk assessments.
o … organization's security requirements and interpretation of analyzer reports and risk assessments. Use multiple auditors to increase the likelihood of appropriate recommendations.
o … -specific vetting criteria necessary for vetting context-sensitive app security requirements.
o … mailing lists, and other publicly available security vulnerability reporting repositories to keep abreast of new developments that may impact the security of mobile devices and mobile apps.

Appendix B—Android App Vulnerability Types

This appendix identifies vulnerabilities specific to apps running on Android mobile devices. The scope of this appendix includes app vulnerabilities for Android-based mobile devices running apps written in Java.
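Several of the Android-specific weakness classes cataloged in this appendix, such as permission over-granting and unprotected (exported) components, lend themselves to mechanical screening. A minimal sketch, assuming a hypothetical manifest already parsed into Python structures; the `manifest` dict shape and the `ALLOWED` permission allowlist are illustrative, not from this appendix:

```python
# Hypothetical screening pass for two Android weakness classes from this
# appendix: permission over-granting and unprotected exported components.
# The manifest dict below is an illustrative stand-in for a parsed
# AndroidManifest.xml, not a real API.

ALLOWED = {"android.permission.INTERNET"}  # permissions the app's spec justifies

def screen(manifest: dict) -> list:
    """Flag permissions outside the allowlist and exported components
    that declare no protecting permission."""
    findings = []
    for perm in manifest.get("permissions", []):
        if perm not in ALLOWED:
            findings.append(f"over-granted permission: {perm}")
    for comp in manifest.get("components", []):
        if comp.get("exported") and not comp.get("permission"):
            findings.append(f"unprotected exported component: {comp['name']}")
    return findings

manifest = {
    "permissions": ["android.permission.INTERNET",
                    "android.permission.READ_CONTACTS"],
    "components": [{"name": ".MainActivity", "exported": True}],
}
print(screen(manifest))
# ['over-granted permission: android.permission.READ_CONTACTS',
#  'unprotected exported component: .MainActivity']
```

Findings from such a pass would still need human review; as the appendix notes, whether a permission is excessive depends on the app's intended functionality.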
The scope does not include vulnerabilities in the mobile platform hardware and communications networks. Although some of the vulnerabilities described below are common across mobile device environments, this appendix focuses only on Android-specific vulnerabilities. The vulnerabilities in this appendix are broken into three hierarchical levels: A, B, and C. The A level is referred to as the vulnerability class and is the broadest description for the vulnerabilities specified under that level. The B level is referred to as the sub-class and attempts to narrow down the scope of the vulnerability class into a smaller, common group of vulnerabilities. The C level specifies the individual vulnerabilities that have been identified. The purpose of this hierarchy is to guide the reader to the type of vulnerability they are looking for as quickly as possible. The A level general categories of Android app vulnerabilities are listed below:

Table 1. Android Vulnerabilities, A Level

Incorrect Permissions
  Description: Permissions allow accessing controlled functionality such as the camera or GPS and are requested in the program. Permissions can be implicitly granted to an app without the user's consent.
  Negative Consequence: An app with too many permissions may perform unintended functions outside the scope of the app's intended functionality. Additionally, the permissions are vulnerable to hijacking by another app. If too few permissions are granted, the app will not be able to perform the functions required.

Exposed Communications
  Description: Internal communications protocols are the means by which an app passes messages internally within the device, either to itself or to other apps. External communications allow information to leave the device.
  Negative Consequence: Exposed internal communications allow apps to gather unintended information and inject new information. Exposed external communications (data network, Wi-Fi, Bluetooth, NFC, etc.) leave information open to disclosure or man-in-the-middle attacks.

Potentially Dangerous Functionality
  Description: Controlled functionality that accesses system-critical resources or the user's personal information. This functionality can be invoked through API calls or hard coded into an app.
  Negative Consequence: Unintended functions could be performed outside the scope of the app's functionality.

App Collusion
  Description: Two or more apps passing information to each other in order to increase the capabilities of one or both apps beyond their declared scope.
  Negative Consequence: Collusion can allow apps to obtain data that was unintended, such as a gaming app obtaining access to the user's contact list.

Obfuscation
  Description: Functionality or control flows that are hidden or obscured from the user. For the purposes of this appendix, obfuscation is defined by three criteria: external library calls, reflection, and native code usage.
  Negative Consequence: 1. External libraries can contain unexpected and/or malicious functionality. 2. Reflective calls can obscure the control flow of an app and/or subvert permissions within an app. 3. Native code (code written in languages other than Java in Android) can perform unexpected and/or malicious functionality.

Excessive Power Consumption
  Description: Excessive functions or unintended apps running on a device which intentionally or unintentionally drain the battery.
  Negative Consequence: Shortened battery life could affect the ability to perform mission-critical functions.

Traditional Software Vulnerabilities
  Description: All vulnerabilities associated with traditional Java code, including: Authentication and Access Control, Buffer Handling, Control Flow Management, Encryption and Randomness, Error Handling, File Handling, Information Leaks, Initialization and Shutdown, Injection, Malicious Logic, Number Handling, and Pointer and Reference Handling.
  Negative Consequence: Common consequences include unexpected outputs, resource exhaustion, denial of service, etc.

The table below shows the hierarchy of Android app vulnerabilities from A level to C level.

Table 2. Android Vulnerabilities by Level
Permissions
  Over Granting: Over Granting in Code; Over Granting in API
  Under Granting: Under Granting in Code; Under Granting in API
  Developer Created Permissions: Developer Created in Code; Developer Created in API
  Implicit Permission: Granted through API; Granted through Other Permissions; Granted through Grandfathering
Exposed Communications
  External Communications: Bluetooth; GPS; Network/Data Communications; NFC Access
  Internal Communications: Unprotected Intents; Unprotected Activities; Unprotected Services; Unprotected Content Providers; Unprotected Broadcast Receivers; Debug Flag
Potentially Dangerous Functionality
  Direct Addressing: Memory Access; Internet Access
  Potentially Dangerous API: Cost Sensitive APIs; Personal Information APIs; Device Management APIs
  Privilege Escalation: Altering File Privileges; Accessing Super User/Root
App Collusion
  Content Provider/Intents: Unprotected Content Providers; Permission Protected Content Providers; Pending Intents
  Broadcast Receiver: Broadcast Receiver for Critical Messages
  Data Creation/Changes/Deletion: Creation/Changes/Deletion to File Resources; Creation/Changes/Deletion to Database Resources
  Number of Services: Excessive Checks for Service State
Obfuscation
  Library Calls: Use of Potentially Dangerous Libraries; Potentially Malicious Libraries Packaged but Not Used
  Native Code Detection
  Reflection
  Packed Code
Excessive Power Consumption
  CPU Usage
  I/O

Appendix C—iOS App Vulnerability Types

This appendix identifies and defines the various types of vulnerabilities that are specific to apps running on mobile devices utilizing the Apple iOS operating system. The scope does not include vulnerabilities in the mobile platform hardware and communications networks. Although some of the vulnerabilities described below are common across mobile device environments, this appendix focuses on iOS-specific vulnerabilities. The vulnerabilities in this appendix are broken into three hierarchical levels: A, B, and C. The A level is referred to as the vulnerability class and is the broadest description for the vulnerabilities specified under that level. The B level is referred to as the sub-class and attempts to narrow down the scope of the vulnerability class into a smaller, common group of vulnerabilities. The C level specifies the individual vulnerabilities that have been identified. The purpose of this hierarchy is to guide the reader to the type of vulnerability they are looking for as quickly as possible. The A level general categories of iOS app vulnerabilities are listed below:

Table 3. iOS Vulnerability Descriptions, A Level

Privacy
  Description: Similar to Android Permissions, iOS privacy settings allow for user-controlled app access to sensitive information. This includes: contacts, calendar information, tasks, reminders, photos, and Bluetooth access. Unlike Android, these permissions may be modified later for individual permissions and apps.
  Negative Consequence: iOS lacks the ability to create shared information and protect it. All paths of information sharing are controlled by the iOS app framework and may not be extended.

Exposed Communications (Internal and External)
  Description: Internal communications protocols allow apps to process information and communicate with other apps. External communications allow information to leave the device.
  Negative Consequence: Exposed internal communications allow apps to gather unintended information and inject new information. Exposed external communications (data network, Wi-Fi, Bluetooth, etc.) leave information open to disclosure or man-in-the-middle attacks.

Potentially Dangerous Functionality
  Description: Controlled functionality that accesses system-critical resources or the user's personal information. This functionality can be invoked through API calls or hard coded into an app.
  Negative Consequence: Unintended functions could be performed outside the scope of the app's functionality.

App Collusion
  Description: Two or more apps passing information to each other in order to increase the capabilities of one or both apps beyond their declared scope.
  Negative Consequence: Collusion can allow apps to obtain data that was unintended, such as a gaming app obtaining access to the user's contact list.

Obfuscation
  Description: Functionality or control flow that is hidden or obscured from the user. For the purposes of this appendix, obfuscation is defined by three criteria: external library calls, reflection, and packed code.
  Negative Consequence: 1. External libraries can contain unexpected and/or malicious functionality. 2. Reflective calls can obscure the control flow of an app and/or subvert permissions within an app. 3. Packed code prevents code reverse engineering and can be used to hide malware.

Excessive Power Consumption
  Description: Excessive functions or unintended apps running on a device which intentionally or unintentionally drain the battery.
  Negative Consequence: Shortened battery life could affect the ability to perform mission-critical functions.

Traditional Software Vulnerabilities
  Description: All vulnerabilities associated with Objective-C and other traditional code, including: Authentication and Access Control, Buffer Handling, Control Flow Management, Encryption and Randomness, Error Handling, File Handling, Information Leaks, Initialization and Shutdown, Injection, Malicious Logic, Number Handling, and Pointer and Reference Handling.
  Negative Consequence: Common consequences include unexpected outputs, resource exhaustion, denial of service, etc.

The table below shows the hierarchy of iOS app vulnerabilities from A level to C level.

Table 4. iOS Vulnerabilities by Level

Privacy
  Sensitive Information: Contacts; Calendar Information; Tasks; Reminders; Photos; Bluetooth Access
Exposed Communications
  External Communications: Telephony; Bluetooth; GPS; SMS/MMS; Network/Data Communications
  Internal Communications: Abusing Protocol Handlers
Potentially Dangerous Functionality
  Direct Memory Mapping: Memory Access; File System Access
  Potentially Dangerous API: Cost Sensitive APIs; Device Management APIs; Personal Information APIs
App Collusion
  Data Change: Changes to Shared File Resources; Changes to Shared Database Resources; Changes to Shared Content Providers
  Data Creation/Deletion: Creation/Deletion to Shared File Resources
  Number of Services: Excessive Checks for Service State
Obfuscation
  Native Code: Potentially Malicious Libraries Packaged but Not Used; Use of Potentially Dangerous Libraries
  Reflection: Identification; Class Introspection; Constructor Introspection; Field Introspection; Method Introspection
  Library Calls
  Packed Code
Excessive Power Consumption
  CPU Usage
  I/O

Appendix D—Glossary

Selected terms used in this publication are defined below.

Administrator: A member of the organization who is responsible for deploying, maintaining, and securing the organization's mobile devices, as well as ensuring that deployed devices and their installed apps conform to the organization's security requirements.

Analyzer: A service, tool, or human that tests an app for specific software vulnerabilities.

App Security Requirement: A requirement that ensures the security of an app. A general requirement is an app security requirement that defines software characteristics or behavior that an app must exhibit to be considered secure. A context-sensitive requirement is an app security requirement that is specific to the organization. An organization's app security requirements comprise a subset of general and context-sensitive requirements.

App Vetting Process: The process of verifying that an app meets an organization's security requirements. An app vetting process comprises app testing and app approval/rejection activities.

Approver: A member of the organization who determines the organization's official approval or rejection of an app.

Auditor: A member of the organization who inspects reports and risk assessments from one or more analyzers, as well as organization-specific criteria, to ensure that an app meets the security requirements of the organization.

Dynamic Analysis: Detecting software vulnerabilities by executing an app using a set of input use cases and analyzing the app's runtime behavior.

Fault Injection Testing: Attempting to artificially cause an error within an app during execution by forcing it to experience corrupt data or corrupt internal states, to see how robust it is against these simulated failures.

Functionality Testing: Verifying that an app's user interface, content, and features perform and display as designed.

Personally Identifiable Information: Any information about an individual that can be used to distinguish or trace an individual's identity, and any other information that is linked or linkable to an individual [45].

Risk Assessment: A value that defines an analyzer's estimated level of security risk for using an app. Risk assessments are typically based on the likelihood that a detected vulnerability will be exploited and the impact that the detected vulnerability may have on the app or its related device or network. Risk assessments are typically represented as categories (e.g., low-, moderate-, and high-risk).

Static Analysis: Detecting software vulnerabilities by examining the app source code and binary and attempting to reason over all possible behaviors that might arise at runtime.

Software Assurance: The level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner.

Software Correctness Testing: The process of executing a program with the intent of finding errors, aimed primarily at improving quality assurance, verifying and validating described functionality, or estimating reliability.

Software Vulnerability: A security flaw, glitch, or weakness found in software that can be exploited by an attacker.

Appendix E—Acronyms and Abbreviations

Selected acronyms and abbreviations used in this publication are defined below.

3G: 3rd Generation
API: Application Programming Interface
CAPEC: Common Attack Pattern Enumeration and Classification
CMVP: Cryptographic Module Validation Program
CPU: Central Processing Unit
CVE: Common Vulnerabilities and Exposures
CWE: Common Weakness Enumeration
DHS: Department of Homeland Security
DoD: Department of Defense
DOJ: Department of Justice
EULA: End User License Agreement
FIPS: Federal Information Processing Standard
FISMA: Federal Information Security Management Act
GPS: Global Positioning System
I/O: Input/Output
IT: Information Technology
ITL: Information Technology Laboratory
JVM: Java Virtual Machine
LTE: Long-Term Evolution
MDM: Mobile Device Management
NFC: Near Field Communications
NIST: National Institute of Standards and Technology
NVD: National Vulnerability Database
OMB: Office of Management and Budget
OS: Operating System
PII: Personally Identifiable Information
QoS: Quality of Service
SAMATE: Software Assurance Metrics And Tool Evaluation
SCAP: Security Content Automation Protocol
SD: Secure Digital
SDK: Software Development Kit
SLA: Service-Level Agreement
SMS: Short Message Service
SP: Special Publication
UI: User Interface
VPN: Virtual Private Network
Wi-Fi: Wireless Fidelity

Appendix F—References

References for this publication are listed below.

[1] Committee on National Security Systems (CNSS)
Instruction 4009, National Information Assurance Glossary, April 2010. https://www.cnss.gov/CNSS/issuances/Instructions.cfm (accessed 12/4/14).

[2] National Aeronautics and Space Administration (NASA), NASA-STD-8739.8 w/Change 1, Standard for Software Assurance, July 28, 2004 (revised May 5, 2005). http://www.hq.nasa.gov/office/codeq/doctree/87398.htm (accessed 12/4/14).

[3] Radio Technical Commission for Aeronautics (RTCA), RTCA/DO-178B, Software Considerations in Airborne Systems and Equipment Certification, December 1, 1992 (errata issued March 26, 1999).

[4] International Electrotechnical Commission (IEC), IEC 61508, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems, edition 2.0 (7 parts), April 2010.

[5] International Organization for Standardization (ISO), ISO 26262, Road vehicles -- Functional safety (10 parts), 2011.
[6] M. Souppaya and K. Scarfone, Guidelines for Managing the Security of Mobile Devices in the Enterprise, NIST Special Publication (SP) 800-124 Revision 1, June 2013. http://dx.doi.org/10.6028/NIST.SP.800-124r1.

[7] National Information Assurance Partnership, Protection Profile for Mobile Device Fundamentals Version 2.0, September 17, 2014. https://www.niap-ccevs.org/pp/PP_MD_v2.0/ (accessed 12/4/14).

[8] Joint Task Force Transformation Initiative, Security and Privacy Controls for Federal Information Systems and Organizations, NIST Special Publication (SP) 800-53 Revision 4, April 2013 (updated January 15, 2014). http://dx.doi.org/10.6028/NIST.SP.800-53r4.

[9] NIST, AppVet Mobile App Vetting System [Web site], http://csrc.nist.gov/projects/appvet/ (accessed 12/4/14).

[10] Joint Task Force Transformation Initiative, Guide for Conducting Risk Assessments, NIST Special Publication (SP) 800-30 Revision 1, September 2012. http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf (accessed 12/4/14).

[11] G. McGraw, Software Security: Building Security In, Upper Saddle River, New Jersey: Addison-Wesley Professional, 2006.

[12] M. Dowd, J. McDonald and J. Schuh, The Art of Software Security Assessment - Identifying and Preventing Software Vulnerabilities, Upper Saddle River, New Jersey: Addison-Wesley Professional, 2006.

[13] H.G. Rice, "Classes of Recursively Enumerable Sets and Their Decision Problems," Transactions of the American Mathematical Society 74, pp. 358-366, 1953. http://dx.doi.org/10.1090/S0002-9947-1953-0053041-6.

[14] J.H. Allen, S. Barnum, R.J. Ellison, G. McGraw, and N.R. Mead, Software Security Engineering: A Guide for Project Managers, Upper Saddle River, New Jersey: Addison-Wesley, 2008.
[15] Microsoft Corporation, The STRIDE Threat Model [Web site], http://msdn.microsoft.com/en-us/library/ee823878%28v=cs.20%29.aspx, 2005 (accessed 12/4/14).

[16] Trike - open source threat modeling methodology and tool [Web site], http://www.octotrike.org (accessed 12/4/14).

[17] K. Scarfone, M. Souppaya, A. Cody and A. Orebaugh, Technical Guide to Information Security Testing and Assessment, NIST Special Publication (SP) 800-115, September 2008. http://csrc.nist.gov/publications/nistpubs/800-115/SP800-115.pdf (accessed 12/4/14).

[18] D. C. Gause and G. M. Weinberg, Exploring Requirements: Quality Before Design, New York: Dorset House Publishing, 1989.

[19] B. Nuseibeh and S. Easterbrook, "Requirements engineering: a roadmap," Proceedings of the Conference on The Future of Software Engineering (ICSE '00), Limerick, Ireland, June 7-9, 2000, New York: ACM, pp. 35-46. http://dx.doi.org/10.1145/336512.336523.

[20] NIST, Cryptographic Module Validation Program (CMVP) [Web site], http://csrc.nist.gov/groups/STM/cmvp/ (accessed 12/4/14).

[21] E. Barker and Q. Dang, Recommendation for Key Management, Part 3: Application-Specific Key Management Guidance, NIST Special Publication (SP) 800-57 Part 3 Revision 1 (DRAFT), May 2014. http://csrc.nist.gov/publications/drafts/800-57pt3_r1/sp800_57_pt3_r1_draft.pdf (accessed 12/4/14).

[22] M. Pezzè and M. Young, Software Testing and Analysis: Process, Principles and Techniques, Hoboken, New Jersey: John Wiley & Sons, Inc., 2008.

[23] G.G. Schulmeyer, ed., Handbook of Software Quality Assurance, 4th Edition, Norwood, Massachusetts: Artech House, Inc., 2008.

[24] B.B. Agarwal, S.P. Tayal and M. Gupta, Software Engineering and Testing, Sudbury, Massachusetts: Jones and Bartlett Publishers, 2010.

[25] J.R. Maximoff, M.D. Trela, D.R. Kuhn and R. Kacker, A Method for Analyzing System State-space Coverage within a t-Wise Testing Framework, 4th Annual IEEE International Systems Conference, April 5-8, 2010, San Diego, California, pp. 598-603. http://dx.doi.org/10.1109/SYSTEMS.2010.5482481.
[26] G.J. Myers, The Art of Software Testing, second edition, Hoboken, New Jersey: John Wiley & Sons, Inc., 2004.

[27] OW2 Consortium, ASM - Java bytecode manipulation and analysis framework [Web site], http://asm.ow2.org/ (accessed 12/4/14).

[28] H. Chen, T. Zou and D. Wang, "Data-flow based vulnerability analysis and java bytecode," 7th WSEAS International Conference on Applied Computer Science (ACS'07), Venice, Italy, November 21-23, 2007, pp. 201-207. http://www.wseas.us/e-library/conferences/2007venice/papers/570-602.pdf (accessed 12/4/14).

[29] University of Maryland, FindBugs: Find Bugs in Java Programs - Static analysis to look for bugs in Java code [Web site], http://findbugs.sourceforge.net/ (accessed 12/4/14).

[30] R.A. Shah, Vulnerability Assessment of Java Bytecode, Master's Thesis, Auburn University, December 16, 2005. http://hdl.handle.net/10415/203.

[31] G. Zhao, H. Chen and D. Wang, "Data-Flow Based Analysis of Java Bytecode Vulnerability," Proceedings of the Ninth International Conference on Web-Age Information Management (WAIM '08), Zhangjiajie, China, July 20-22, 2008, Washington, DC: IEEE Computer Society, pp. 647-653. http://doi.ieeecomputersociety.org/10.1109/WAIM.2008.99.

[32] G. Balakrishnan, WYSINWYX: What You See Is Not What You eXecute, Ph.D. diss. [and Tech. Rep. TR-1603], Computer Sciences Department, University of Wisconsin-Madison, Madison, Wisconsin, August 2007. http://research.cs.wisc.edu/wpis/papers/balakrishnan_thesis.pdf (accessed 12/4/14).

[33] NIST, SAMATE: Software Assurance Metrics And Tool Evaluation [Web site], http://samate.nist.gov/Main_Page.html (accessed 12/4/14).

[34] The MITRE Corporation, Common Weakness Enumeration CWE-190: Integer Overflow or Wraparound [Web site], http://cwe.mitre.org/data/definitions/190.html (accessed 12/4/14).
[35] The MITRE Corporation, Common Weakness Enumeration CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection') [Web site], http://cwe.mitre.org/data/definitions/77.html (accessed 12/4/14).

[36] NIST, National Vulnerability Database (NVD) [Web site], http://nvd.nist.gov/ (accessed 12/4/14).

[37] NIST, Security Content Automation Protocol (SCAP) [Web site], http://scap.nist.gov/ (accessed 12/4/14).

[38] The MITRE Corporation, Common Weakness Enumeration [Web site], http://cwe.mitre.org/ (accessed 12/4/14).

[39] The MITRE Corporation, Common Attack Pattern Enumeration and Classification [Web site], https://capec.mitre.org/ (accessed 12/4/14).

[40] R.A. Martin and S.M. Christey, "The Software Industry's 'Clean Water Act' Alternative," IEEE Security & Privacy 10(3), pp. 24-31, May-June 2012. http://dx.doi.org/10.1109/MSP.2012.3.

[41] Forum of Incident Response and Security Teams (FIRST), Common Vulnerability Scoring System (CVSS-SIG) [Web site], http://www.first.org/cvss (accessed 12/4/14).

[42] Industry Consortium for Advancement of Security on the Internet (ICASI), The Common Vulnerability Reporting Framework (CVRF) [Web site], http://www.icasi.org/cvrf (accessed 12/4/14).

[43] U.S. Department of Homeland Security, Software & Supply Chain Assurance - Community Resources and Information Clearinghouse (CRIC): Questionnaires [Web site], https://buildsecurityin.us-cert.gov/swa/forums-and-working-groups/acquisition-and-outsourcing/resources#ques (accessed 12/4/14).

[44] The MITRE Corporation, Common Vulnerabilities and Exposures (CVE) [Web site], https://cve.mitre.org/ (accessed 12/4/14).

[45] E. McCallister, T. Grance and K. Scarfone, Guide to Protecting the Confidentiality of Personally
Identifiable Information (PII), NIST Special Publication (SP) 800-122, April 2010. http://csrc.nist.gov/publications/nistpubs/800-122/sp800-122.pdf (accessed 12/4/14).

Attribution

Trusted Computing Strengthens Cloud Authentication by Eghbal Ghazizadeh, Mazdak Zamani, Jamalul-lail Ab Manan, and Mojtaba Alizadeh, from The Scientific World Journal, is available under a Creative Commons 3.0 Unported license. Copyright © 2014 Eghbal Ghazizadeh et al. Hindawi Publishing Corporation, Volume 2014, Article ID 260187, 17 pages. http://dx.doi.org/10.1155/2014/260187

Research Article
Trusted Computing Strengthens Cloud Authentication

Eghbal Ghazizadeh,1 Mazdak Zamani,1 Jamalul-lail Ab Manan,2 and Mojtaba Alizadeh1
1 Universiti Teknologi Malaysia, 54100 Kuala Lumpur, Malaysia
2 MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur, Malaysia
Correspondence should be addressed to Eghbal Ghazizadeh; [email protected]

Received 28 September 2013; Accepted 19 December 2013; Published 18 February 2014

Academic Editors: J. Shu and F. Yu

Copyright © 2014 Eghbal Ghazizadeh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cloud computing is a new generation of technology designed to meet commercial necessities, solve IT management issues, and run the appropriate applications. Another entry on the list of cloud functions that has traditionally been handled internally is Identity Access Management (IAM). Companies have encountered IAM as a security challenge as they adopt more cloud technologies. Trusted multi-tenancy and trusted computing based on a Trusted Platform Module (TPM) are promising technologies for addressing the trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to counter identity theft in the cloud. In addition, this proposed model has been simulated in a .NET environment. Security analysis, simulation, and the BLP confidentiality model are the three ways used to evaluate and analyze our proposed
model.

1. Introduction

The new definition of cloud computing was born with Amazon's EC2 in 2006 in the territory of information technology. Cloud computing emerged out of commercial necessity and makes for suitable applications. According to the NIST definition, "cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources." Resource pooling, on-demand self-service, rapid elasticity, measured service, and broad network access are the five vital features of cloud computing. Cloud has various attributes: it can be executed, supplies materials, can be organized, and is infrastructure based. These qualities provide rapid and unrestricted automatic access [1]. "The push for growth in 2011 is leading to changes in emphasis," said M. Chiu; he added that IT is now recognized as business transformation, or, better said, that digitalization is accelerating. According to statistics, cloud computing and IT management are firmly among the top ten business and technology priorities of Asian CIOs. Cloud computing, IT management, mobile technology, virtualization, business
intelligence, infrastructure, business process management, data management, enterprise applications, collaboration technologies, and network data communication are the top ten technology priorities. Cloud will become the most popular business worldwide; business and technology have been dominated by cloud computing, and thousands of new businesses are launched every day around cloud computing technology [2].

1.1. Security and Cloud Computing

While cost and ease are the two top benefits of the cloud, trust and security are the two main concerns of cloud computing users. Cloud computing has claimed assurance for sensitive data such as accounting, government, and healthcare data and brought convenience for end users. Virtualization, multitenancy, elasticity, and data ownership, for which traditional security techniques cannot solve the security problems, are some of the new issues in cloud computing. Trust is one of the most important issues in cloud computing security; indeed, there are two trust questions in cloud computing. The first is: do cloud users trust cloud computing? The second is: how can a trusted environment be made for cloud users? However, when it comes to transferring their businesses to the cloud, people tend to worry
about privacy and security. Is the CSP trustworthy? Will the concentrated resources in the cloud be more attractive to hackers? Will customers be locked in to a particular CSP? All these concerns constitute the main obstacle to cloud computing. Based on these questions, and to address these concerns when considering moving critical applications, the cloud must deliver a sufficient and powerful security level for the cloud user in terms of the new security issues in cloud computing [3]. Abbadi and Martin illustrated that trust is not a novel concept, but users need to understand the subjects associated with cloud computing. The technology and business perspectives are the two sides of trusted computing, and the emerging technologies that best address these issues must be determined. They believed that trust means an act of confidence, faith, and reliance. For example, if a system gives users insufficient information, they will give less trust to the system; in contrast, if the system gives users sufficient information, they will trust the system well. There are some important issues with trust. Control of our assets is the most important security issue in cloud computing, because users trust a system less when they do not have much control over it. For example, bank customers trust an ATM with confidential information: when they withdraw money from an ATM machine, they trust it [4]. Cloud computing suppliers should prepare a secure territory for their consumers. A territory is a collection of computing environments connected by one or more networks under a common security policy. Regulation and standardization are other issues of trust; PGP, X.509, and SAML are three examples of establishing trust by using standards. Access management, trust, identity management, single sign-on and single sign-off, audit and compliance, and configuration management are six patterns that identify basic security in cloud computing. Computer
researchers believe that trust is one of the most important parts of security in cloud computing. There are three distinct ways to prove trust. First, we trust someone because we know them. Second, we trust someone because the person puts trust in us. Third, sometimes we trust someone because an organization that we trust vouches for that person. In addition, there are three distinct definitions of trust. First, trust means the capability of two specific parties to define a trust connection with an authentication authority. Second, trust identifies that the authority can exchange credentials (X.509 certificates). Third, it uses those credentials to secure messages and create signed security tokens (SAML). The main goal of trust is that users can access a service even though that service has no prior knowledge of the users [5]. Trusted computing grows with technology development and was raised by the Trusted Computing Group. Trusted computing is the industry's response to growing security problems in the enterprise and is based on a hardware root of trust. From this, enterprise systems, applications, and networks can be made safer and more secure. With trusted computing, the computer or system will reliably act in definite ways and thus work in specific ways, and those behaviors will be enforced by hardware and software when the owner of those systems enables these technologies. Therefore, using trust makes computer environments safer, less prone to viruses and malware, and thus more consistent. In addition, trusted computing will permit computer systems to offer improved security and efficiency. The main aim of trusted computing is to prepare a framework for data and network security that covers data protection, disaster recovery, encryption, authentication, layered security, identity, and access control [6].
1.2. Cloud Computing and Federated Identity

One of the problems with cloud computing is the management and maintenance of users' accounts. Because of the many advantages of the cloud, such as cost and security, identity management should preserve all of the cloud's advantages. Therefore, newly proposed models and standards should make use of all the cloud advantages and prepare an environment that eases the use of all of them. Private cloud, public cloud, community cloud, and hybrid cloud are the four essential types of cloud computing named by cloud users and vendors. The community cloud, by definition, shares infrastructure for a definite community of organizations with common concerns, and a private or internal cloud shares cloud services among a classified set of users. Establishing the best digital identity for users is the most important concern of cloud providers. The cloud prepares a large amount of various computing resources; consequently, diverse users in the same system can share their information and work together. Classified identity and common authentication are an essential part of cloud authentication, and federated identity acts as a service based on common authentication; it is broadly used by most of the important companies, like Amazon, Microsoft, Google, IBM, and Yahoo [7]. Today's internet user has about thirteen accounts that require usernames and passwords and enters about eight passwords per day. This becomes a problem: internet users face the load of managing this huge number of accounts and passwords, which leads to password fatigue; indeed, given the burden on human memory, password fatigue may cause users to adopt password management strategies that reduce the security of their protected information. Besides, with the site-centric Web, each user account is created and
managed in a distinct administrative domain, so that profile management and personal content sharing are difficult. SSO, or Web single sign-on, systems are intended to address this problem with the site-centric Web. Users, identity providers, and service providers are the three parties of SSO, and each has a distinct role in the identity scenario. An IdP collects user identity information and authenticates users, while an RP relies on the authenticated identity to make authorization decisions. OpenID is a promising and open user-centric Web SSO solution. More than one billion OpenID accounts have been enabled to use service providers, according to the OpenID Foundation. Furthermore, the US Government has cooperated with the OpenID Foundation in the Open Government Initiative's pilot adoption of OpenID technology [8]. A single username and password is an essential part of single sign-on protocols, in terms of authenticating across numerous systems and applications and of efforts to address privacy and security issues for web users. There are three widespread Web SSO implementations, OpenID, Passport/Live ID, and SAML, which will be covered along with details of common weaknesses and issues [9]. Web SSO solutions were initially advanced by numerous educational institutions in the mid-1990s. For example, Cornell University's SideCar, Stanford University's WebAuth, and Yale's Central Authentication System (CAS) were all early pioneers in the field [10].
SSO today has a critical role in cloud security, and it becomes essential to realize how secure the deployed SSO mechanisms truly are. Nevertheless, it is a new idea, and no previous work includes a broad study of commercially deployed web SSO systems, a key to understanding to what extent these real systems are subject to security breaches [11]. In addition, Wang and Shao showed that weaknesses that do not show up at the protocol level can be brought in by what the system actually allows each SSO party to do [12].

1.3. Background of the Problem

Madsen et al. in [13] defined some problems of federated identity and illustrated that FIM, or Federated Identity Management, established on standards, permits and simplifies joining federated organizations in terms of sharing user identity attributes, simplifying authentication, and allowing or denying access according to service access requirements. SSO is defined as a facility whereby the user authenticates only once to the home identity provider and is then logged in to access successive service providers within the federated identity. There are some active problems and concerns in a federated identity environment, like misuse of user identity information through the SSO capability at service providers and identity providers, user identity theft, and the trustworthiness of service providers, identity providers, and users. Regardless of good architectures, federated identity still has some security problems that should be considered in real-world implementations. Wang in [9] discovered another problem of federated identity: switching authentication mechanisms to an SSO solution. This means further education of the users is
required, and possible loss of the user base if the transition is not smoothly executed. Worsening the situation is the lack of demand from users for a Web SSO solution; studies have shown that users are already satisfied with their own password managers.

1.4. Scope and Limitation

This research focuses on trust in cloud computing. Trust in the cloud is a critical part of cloud security. As mentioned in the introduction, access management, trust, identity management, single sign-on and single sign-off, audit and compliance, and configuration management are six patterns that identify basic security in cloud computing. This study will focus on federated identity, which is used for user identity management. In addition, there are several federated identity architectures for managing this problem. The hub-and-spoke model, the free-form model, and the hybrid model are three models for implementing federated identity in trusted computing. Since there are some remaining issues and challenges in these trust management models, this study will focus on the hybrid model to enhance and propose an architecture. Stopping phishing attacks completely is very difficult; therefore, the aim of this study is to decrease the number of identity thefts and phishing attacks. The trusted computing applied in this study tries to decrease the success probability of phishing attacks and identity theft. However, as the TPM has been leveraged in OpenID, attacks other than phishing also attempt to deal with sensitive information as a user asset. Furthermore, in this study attacks that compromise users' computers have been ignored, as have key loggers, rootkits, and cookie attacks such as cross-site request forgery (CSRF) attacks and cross-site scripting (XSS) attacks. Finally, attacks which compromise
the integrity of the website will not be discussed. In conclusion, this study's scope is single sign-on authentication using protocols like SAML, OAuth, and OpenID. As mentioned, users use these authentication models in many APIs. Visual Studio 2012 and SQL Server 2012 have been used for simulation of the proposed model.

1.5. Related Work

You and Jun in [14] proposed the Internet Personal Identification Number, or I-PIN, technique to strengthen user authentication in the cloud OpenID environment against the phishing problem of relying parties, which means users can obtain internet services with OpenID, although enforcement in the OpenID territory in some countries has been noted by them. Besides, they analyzed and evaluated their method in comparison with the existing OpenID method, and they recommended some ways to secure the OpenID environment. In their method, the user has to choose only one company out of the three companies which deliver OpenID, and the index to be a member, in order to obtain OpenID service. If a user receives I-PIN information from the main confirmation authority via the IdP on creation of the OpenID, the main authentication problem that OpenID has can be solved. In addition, they evaluated their study; after comparison with the existing OpenID, they confirmed that authentication is secure and safe against attack and exposure of private information. They also examined the phishing problem and showed that the existing OpenID system can solve it, and that relying-party authentication guarantees security against exposure of users' passwords and IDs. Ding and Wei in [15] proposed a multilevel security
framework for OpenID authentication. They claimed that their recommended framework is appropriate for all types of rule-based security models. They presented the basic OpenID infrastructure based on single sign-on, including the hierarchy, the OpenID components, and other security issues. Next, they proposed a security access control structure for OpenID based on the BLP model, which can be used to resolve the difficulty of access control under multilevel security, and finally they store the security label in an XML document. Mat Nor et al. in [16] took the initiative of a remote user authentication protocol with both trust and privacy elements by using a TPM. Sun et al. [17] discussed the problem of Web SSO adoption for RPs and argued that solutions in this space must offer RPs sufficient business incentives and trustworthy identity services in order to succeed. Fazli Bin Mat Nor and Ab Manan [18] proposed an enhanced remote authentication protocol with hardware-based attestation and pseudonym identity enhancement to mitigate MITM attacks as well as improving user identity privacy. Latze and Ultes-Nitsche in [19] proposed using the TPM for ensuring both an e-commerce application's integrity and the binding of user authentication to
user credentials and the usage of specific hardware during the authentication process. Urien in [20] illustrated strong (password-less) authentication for OpenID, according to a plug-and-play paradigm, based on SSL smart cards, against phishing attacks. Khiabani et al. in [21] surveyed previous trust and privacy models in pervasive systems and analyzed them to highlight the importance of trusted computing platforms in tackling their weaknesses.

1.6. Paper Organization

The rest of this paper is organized as follows. In the next section, we discuss the proposed solution in detail. Section 3 provides a simulation of trusted OpenID. In Section 4, we consider issues that may arise when implementing the solution, with an analysis based on these issues in three approaches, and Section 5 summarizes our conclusions.

2. Proposed Solution

In this part, the proposed procedure will be explained. Programming languages help this study evaluate the efficiency of the procedure. The waterfall model will be utilized to design and implement the prototype. Thus, firstly, the requirements will be analyzed; secondly, algorithms based on the requirements will be proposed to solve the problem; thirdly, the implementation steps and the essential
functions are explained. The first step in designing software is requirements analysis. There are a few activities that need to be done before designing the algorithm. This research will be conducted using an experimental and modeling approach. It will begin with in-depth reading to understand the state of the art in the area of cloud computing authentication. One of the main requirements of this study is a flow diagram of cloud authentication. Some of the best techniques that have been proposed were discussed in the related work, along with their advantages and disadvantages; it therefore seems that there are still open issues that need to be researched. This model is based on a hybrid model architecture, which is a mix of both the hub-and-spoke and free-form models. The hybrid model will also be found in organizations that mix traditional or legacy computing with a public or private cloud environment. The aim is to make a seamless authentication environment based on the hybrid model. It also tries to bring some consideration of timing between private clouds and to choose the best way to achieve single sign-on. According to Figure 1, a trusted authority to make a good
relation between the IdP and RP to prevent ID theft has been used. In comparison with the previously proposed models in the related work, this proposed model should obtain authentic cloud services using the user's privacy policies, act as an independent third party, and ensure mutual protection of both clients and providers, as the main goals in affording identity management and access control. Cloud computing, SSO, and trusted computing are highlighted to support the tasks of OpenID authentication. To avoid a weak and vulnerable link in enterprise security, trust must be established in all the components of the infrastructure. This requires establishing a relationship, considering the identity of all of the devices and parties within the trust domain, to exchange the essential key secrets that allow signing any messages that are sent back and forth. The novelty lies in the fact that the proposed model has not been offered by OpenID yet. Combining trusted computing, cloud computing, and federated identity, the model can contribute to the security and privacy of cloud computing. To our knowledge, this model has not been proposed so far, and we intend to pursue it in our future work.
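The trust-domain requirement above — exchanging key secrets so that messages sent back and forth between the parties can be signed — can be sketched minimally. The paper's simulation is written in C#; the sketch below is an illustrative Python version, assuming a shared secret has already been established out of band, with all names hypothetical:

```python
import hashlib
import hmac

def sign_message(shared_secret: bytes, body: bytes) -> bytes:
    """Sign a message with the key secret shared inside the trust domain."""
    return hmac.new(shared_secret, body, hashlib.sha256).digest()

def verify_message(shared_secret: bytes, body: bytes, signature: bytes) -> bool:
    """Constant-time check that the message came from a trust-domain member."""
    expected = sign_message(shared_secret, body)
    return hmac.compare_digest(expected, signature)
```

A tampered body fails verification, which is what prevents a party outside the trust domain from injecting messages between the IdP, RP, and TA.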
Binary attestation and Direct Anonymous Attestation (DAA) are the two types of remote attestation. Binary attestation is static in nature, and DAA is a cryptographic protocol which allows the remote authentication of a trusted platform whilst preserving the user's privacy. In our proposed model, binary attestation is used for checking the integrity of users, RPs, and IdPs. Cloud computing users, IdPs, and SPs can use a Trusted Platform Module to authenticate their hardware devices. Since each TPM chip, or Virtual TPM (VTPM), has a unique and secret RSA key burned in as it is produced, it is capable of performing platform authentication. On the trusted platform, each system should be independent of any trusted third party. Besides independence, each system should be able to unambiguously identify users that can be trusted, both across the Web and within enterprises. In comparison with the related work, the essential goal of the proposed model is to provide access control and identity management as follows: (i) being an independent third party; (ii) authenticating cloud services using the user's privacy
policies, providing minimal information to the SP; (iii) ensuring mutual protection of both clients and providers.

Figure 1: Proposed trust-based federated identity architecture.

Using trusted computing in OpenID should be platform independent and flexible enough to meet the challenges of today's scalable computing environments. The proposed scenario is as in Figure 1, which shows that the Trusted Authority handles trust relationships between the user's browser, the RP, and the IdP. The negotiations between them form a six-step scenario. This proposed model highlights the use of the TPM
or Virtual TPM (VTPM) and the OpenID protocol, which try to support the tasks of authentication, authorization, and identity federation. Beyond these objectives, the main contribution of our work is the implementation in cloud computing. In addition, Figure 2 shows the sequence diagram of the trust-based model, based on the trusted OpenID proposed model. Besides, Figure 3 illustrates the proposed model's flowchart and highlights the roles of the Trusted Authority (TA) and the TPM in this model.

2.1. Log on Using the User's URL

OpenID allows us to sign into websites using a single identifier in the form of a URL. This URL is used instead of our username and password, and it has been named the Personal Identity Portal (PIP) URL. For example, on the Symantec site, for user Bob, the PIP URL is bob.pip.verisignlabs.com, and he can use many sites that support OpenID instead of a username and password. In this example, these sites are service providers, and Symantec is the identity provider. PIP helps us eliminate the need to remember different usernames and passwords for our service providers. PIP also delivers the elasticity and flexibility to share only the information we choose with each
service provider. Furthermore, by creating a PIP account, we receive a PIP URL that we can use to sign in or register at any service provider that supports OpenID and displays the OpenID logo.

2.2. Redirection for Client Authentication

In this step, the SP locates the user's location and creates an authentication token. The SP asks the user to prove that he/she is who he/she claims to be. For this reason, after getting the RP's request, the browser performs the next step. In OpenID Authentication version 2.0, the IdP and RP establish an association with an RSA-technique secret key, whereas with a TPM hash code this RSA association is not required.

2.3. Client Authentication with IDP

In this scenario, the browser proceeds with a token exchange based on the SAML protocol. Here, users have three options to prove their identity to the IdP, depending on the context of the interaction. Firstly, they can prove their identity by inserting a username and password. Secondly, this can be done by using browser cookies, if the users activate them. Thirdly, it can be done through software like the SeatBelt extension for the Firefox browser, a plug-in that helps
users when signing into OpenID sites and IdPs using the PIP URL. The SeatBelt extension detects that the user has clicked on an OpenID sign-in field and prompts the user to sign in. Finally, after the first-time sign-in, subsequent requests to SeatBelt automatically return the user to the OpenID sign-in page with his/her PIP already filled in.
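The three options of Section 2.3 can be read as an ordered dispatcher at the IdP. The sketch below is an illustrative Python version (the paper's simulation is in C#); the in-memory stores, field names, and the way a PIP-style URL is reduced to a claimed username are all assumptions made for the example:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-ins for the IdP's stored state; the paper's
# simulation keeps these in a SQL database.
USERS = {"bob": "s3cret"}                 # username -> password
SESSION_COOKIES = {"cookie-123": "bob"}   # cookie value -> username

@dataclass
class AuthRequest:
    username: Optional[str] = None
    password: Optional[str] = None
    cookie: Optional[str] = None
    plugin_pip_url: Optional[str] = None  # e.g. pre-filled by a SeatBelt-like extension

def authenticate(req: AuthRequest) -> Optional[str]:
    """Return the authenticated username, or None, trying the three
    options of Section 2.3 in order."""
    # Option 1: username and password.
    if req.username is not None and USERS.get(req.username) == req.password:
        return req.username
    # Option 2: an existing browser cookie, if the user activated cookies.
    if req.cookie in SESSION_COOKIES:
        return SESSION_COOKIES[req.cookie]
    # Option 3: a PIP URL supplied by a browser extension; here we only
    # resolve the claimed identity, and the IdP would still confirm it.
    if req.plugin_pip_url:
        return req.plugin_pip_url.split(".")[0]
    return None
```

For example, `authenticate(AuthRequest(cookie="cookie-123"))` resolves to `"bob"` without a password prompt, mirroring the cookie-based second option.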
Figure 2: Trust-based model sequence diagram.

2.4. Mutual Attestation

Step 4 is the most critical part of our proposed OpenID trust-based Federated Identity Architecture. In our environment, we have assumed that the IdP can collude with the SP to link the user's alliance and collect the user's behaviors. We have also supposed that there is no Privacy Enhancing Technology (PET) deployed in our cloud environment. Using the Trusted Authority (TA) as the core, the user's browser, the Relying Party (RP) or Service Provider (SP), and the IdP must prove their identity through a mutual attestation process using their TPM-enabled platforms, verified by the TA. We have assumed that the Trusted Authority is trustworthy and must be sanctioned by the country's government or authorized by all participating parties in the Federated Identity Architecture. In this scenario, the TA is the attester that sends the challenge to the attestees (IdP, user, and SP) to check their integrity. The TA by default uses binary remote attestation to check integrity, but in this scenario it uses the
DAA technique for integrity checking.

2.5. Authorizing the User's Browser

If and only if the mutual attestation process has been successful, that is, the user and IdP have confidence in each other, will the IdP deliver a SAML token to the user's browser. Today, some IdPs, like Google, Facebook, and IBM, use SAML tokens and deliver their users' access rights through SAML tokens. For future work we may consider other mechanisms.

2.6. User's Browser Request

In this step, the user gains more confidence in the SP for getting a service based on the SAML token. The IdP sends a token encrypted with the user's public key, which shows that the IdP is legitimate and verified by a trusted authority. The user decrypts the token with his/her private key. Finally, the user uses the token to get more services from the RP or SP. It is important to note that the TA in our proposed architecture will need to deal with Virtual Machines (VMs). It will be associated with different services requested by the client. Hence, the TA needs to attest each Virtual TPM
(VTPM) that is associated with each VM on the client machine. Finally, this paper highlights the use of OpenID, trusted computing, and federated identity, which together provide for secure authentication, authorization, and identity federation. Trusted computing is very flexible with regard to its use in a cloud environment, allowing a service to be provided securely and reliably. In comparison with the other frameworks in the related work, our proposed model has more strength because of better security management, shifted risk management, private remote attestation, and efficient binding of identity assertions.

3. Simulation of Trusted OpenID

As mentioned before, the overall flow of OpenID authorization works as follows: sign up for an OpenID Consumer Key, perform discovery, get a pre-approved request token, and continue with the OpenID flow. In addition to this flow, in order to evaluate the process, we identify the vulnerabilities of the OpenID flow to better understand the behavior of the proposed model in the real world. In this section we turn our attention to simulating the proposed algorithm and design, developed in C#. Our system, referred to as Trust OpenID,
is composed of different components, each with its own tasks. (Figure 3, the trust-based model flow chart, covers initiation with the user-supplied identifier, validation of that identifier at the RP, redirection to the IdP via the user's browser, TPM secret-key checks against the TA and IdP databases, and the authentication request.) For now, based on our scope, we run our system on a virtualization system, but in future work we will explain that our system has the ability to be located in the cloud environment and integrated with VMware, Microsoft Azure, or XenServer. Our system will observe the data flow of OpenID and will request, discover, approve, and authorize the user and providers. In this simulation, we first fetch all the vendors and process the requests and discovery between the user's browser and the service and identity providers. At the same time, we observe and monitor the data exchange between objects and subjects. As illustrated in Figure 4, the main page contains a sign-in form, a registration form, the simulation, and exit. This simulation should cover the communication type, generating
signature, initiation and discovery, establishing the association, requesting authentication, responding to the authentication request, and verifying the assertion. The first step in using OpenID is to set up an account with one of the many providers on the web. The registration form lets us register just once to get and use an OpenID account. Next, in the sign-in form, after getting an OpenID account, the user can use his or her OpenID account to get the SP's service through the IdP account.

Figure 4: Main form of simulation.
Figure 5: Registration form.
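The one-time registration step above — the IdP stores the user's details plus a TPM value and issues an OpenID URL, mirroring the paper's dbo.reginfo table — can be sketched as follows. This is an illustrative Python sketch, not the paper's C# code; the store, field names, and URL scheme are assumptions:

```python
# In-memory stand-in for the IdP's registration database (dbo.reginfo in
# the paper). All names and the URL scheme are illustrative.
idp_db = {}  # username -> registration record

def register(username: str, password: str, idp_domain: str, tpm_value: str) -> str:
    """Register a user once with the IdP and issue an OpenID URL."""
    if username in idp_db:
        raise ValueError("already registered")  # registration happens once
    openid_url = f"{username}.{idp_domain}"     # e.g. iqbalqazi.usmopenid.com
    idp_db[username] = {
        "password": password,
        "tpm": tpm_value,        # saved alongside the account, per Section 3
        "openid_url": openid_url,
    }
    return openid_url
```

A second call with the same username raises, matching the "register just once" behavior of the registration form.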
If the user is thinking that OpenID sounds a lot like Facebook Connect: with that credential he or she can use the services of other sites which accept Facebook Connect, as it allows users to sign into websites using their Facebook credentials. In order for an OpenID Connect Client to utilize OpenID services for a user, the Client needs to register with the OpenID Provider to acquire a Client ID and shared secret. Figure 5 describes how a new Client can register with the provider, and how a Client already in possession of a Client ID can retrieve updated registration information. The Client Registration Endpoint may be co-resident with the token endpoint as an optimization in some deployments. This study assumes that UTM, UPM, UM, MMU, UITM, and USM are identity providers. Figure 6 shows the IdP's database, which uses dbo.OpenID to save data. The user sends a request encoded as a POST, with the Name, Last Name, ID Number, Email, Username, Password, and
    Identity provider parametersadded to the HTTP request to the IDP. Also Figure 6 illustrates the registration information database which is containing registration information and TPM (VTPM) and OpenID URL of the users. Figure 7 shows confirmation of the registration part and OpenID URL which has been issued by IDP. In this step with OpenID acceptance, the TPM (VTPM) also saves in the dbo.reginfo SQL database as shown in Figure 8. Also it shows the OpenID URL which assigned by the IDP provider. 3.1. Log on Using the User’s URL. Based on the proposed model as mentioned before, the first step is logged on in service provider by OpenID URL. OpenID allows us to sign into websites using a single identifier in the form of a URL. This study assumed that service providers are Skydrive, Dropbox, and MSN which accepted our IDPs. First in RP combo user choose the RP and after that user should enter the OpenID URL or PIP. Figure 9 simulates that user assigned to Skydrive by iqbalqazi.usmopenid.com. 3.2. Redirection for Client Authentication. In this step, the SP redirects user to IDP to prove his or her identity and get appropriate token. For this reason, after getting the RP’s
request, the sign-in form will appear. Figure 10 shows this step. 3.3. Client Authentication with IDP. In this step, based on Figure 10, the user proves his or her identity. As mentioned before, the user has three options to prove identity to the IDP, depending on the context of the interaction. In this simulation the user is asked to enter a username and password. 3.4. Mutual Attestation. Checking the user's integrity is the aim of this step. First, the IDP checks the username and password against dbo.reginfo. Next, the TA checks integrity based on the TPM (VTPM). In this study it is assumed that the TA, after attestation, holds all the TPM (VTPM) values in its database. The code in dbo.TA represents the TPM (VTPM) of all vendors and users, as shown in Figure 11. This step is the most important part and the one most targeted by attackers. After checking the username and password, the simulation checks the TPM (VTPM) when the Check button is clicked. If the username, password, and TPM (VTPM) checks all succeed, the Continue button is enabled; otherwise the Phishing page appears. Figures 12 and 13 show this process.
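The decision logic of this mutual-attestation step can be sketched as follows. This is an illustrative reconstruction, not the simulation's actual code: the function and table names (authenticate, reg_db, ta_db) and the sample values are assumptions.

```python
# Hypothetical sketch of Section 3.4: the IDP first checks the
# username/password against its registration table (dbo.reginfo in
# the text), then the Trust Authority compares the reported
# TPM (vTPM) identifier with its own records (dbo.TA in the text).

def authenticate(username, password, reported_tpm_id, reg_db, ta_db):
    record = reg_db.get(username)
    if record is None or record["password"] != password:
        return "phishing_page"      # credential check failed
    if ta_db.get(username) != reported_tpm_id:
        return "phishing_page"      # TA attestation of the TPM failed
    return "continue"               # both checks passed; Continue enabled

reg_db = {"iqbalqazi": {"password": "s3cret"}}   # made-up sample data
ta_db = {"iqbalqazi": "TPM-AA01"}
print(authenticate("iqbalqazi", "s3cret", "TPM-AA01", reg_db, ta_db))  # continue
print(authenticate("iqbalqazi", "s3cret", "TPM-XXXX", reg_db, ta_db))  # phishing_page
```

The point of the second check is that a correct password alone is not enough: a party whose platform identity fails the TA's TPM (vTPM) comparison is treated exactly like a phishing attempt.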
3.5. Authorizing the User's Browser. This step proceeds if and only if the previous process has been successful, which means the user and the IDP have confidence in each other. Based on OpenID privacy settings, the user can determine how long the SP accepts his or her approved identity; Figure 14 illustrates that the user can choose Allow forever, Allow once, or Deny. The IDP then delivers a SAML token to the RP via the user's browser. 3.6. User's Browser Request. Figure 15 illustrates the confidence between the user and the RP; that is, the user obtains a service based on the SAML token. The Scientific World Journal. Figure 6: IDP database.
Figure 7: Confirmation of registration. In this part we explained the trust-based model, sequence diagram, and flow chart to mitigate identity theft in an OpenID single sign-on environment. The next part presents the evaluation and testing of this proposed model. 4. Test and Evaluation. In this part, the simulation and design of the proposed solution are discussed in detail. Objectives, subjects, and their attributes are three considerations of each system. The details of the subjects and objects are discussed, the process is then explained, and finally the Trusted OpenID is analyzed through three test approaches. The design phase, architecture, flowchart, and flow diagram of the proposed model were shown and discussed in Section 3. The results of the proposed design must then be analyzed to show that they fulfill the
objectives of the project, which is to use trusted computing against identity theft in a cloud identity environment, especially in OpenID authentication. Confidentiality analysis, simulation analysis, and security analysis are the three methods used to evaluate and test the proposed model. 4.1. Confidentiality Analysis. A key element in establishing a computer system is the security model, which is used to express the security policy. Availability, integrity, and confidentiality are the three most important concerns in the security area; confidentiality is the aim of this research, and in this part we explain how our proposal tries to strengthen the confidentiality of authentication in a cloud environment by using trusted computing. Several security models are used to achieve security in enterprises. For example, Biba and Bell-LaPadula (BLP) are the two most popular security models: Biba formalizes integrity in recognized mathematical terms, while BLP is the classic model focused on confidentiality. In this study BLP is used as the basic theory for the confidentiality proof of multilevel-security Trusted OpenID and as a way of evaluating the proposed model, focusing on the confidentiality of the Trusted OpenID mechanism. Also, in adapting the BLP model to
Figure 8: Registration database. Figure 9: Simulation of RP website. OpenID, we focused on the subjects, objects, and the data exchange of users' applications to enhance the confidentiality and security of the Trusted OpenID mechanism while preserving user convenience. 4.1.1. BLP Model. Military affairs are a classic application of the BLP model, which the designers of computer security have
utilized and applied to contribute to computer security. While this model has many advantages, it also has some disadvantages, such as forgetting access history [15]. Figure 10: IDP sign on. Security level and access matrix are the two main parts of the BLP model, which define the access permissions. The security policy allows information to flow upwards, from a low security level to a high security level, and prevents information from flowing downwards, from a high security level to a low security level. 4.1.2. Terminology. Identifier (I): an Identifier is either an "http" or "https" URI, commonly referred to as a "URL" within this document, or an
XRI (XRI Syntax 2.0). This document defines various kinds of Identifiers, designed for use in different contexts. Figure 11: TA database. Figure 12: TPM checking. User-Agent (UA): the end user's Web browser, which implements HTTP/1.1; User-Agents will primarily be common Web browsers. Relying Party (RP): a Web application that wants proof that the end user controls an Identifier. OpenID Provider (IDP): an OpenID Authentication server on which a Relying Party relies for an assertion that the end user controls an Identifier.
Figure 13: Pass TPM checking. Figure 14: User authorization. IDP Endpoint URL (IEU): the URL which accepts OpenID Authentication protocol messages, obtained by performing discovery on the User-Supplied Identifier; this value must be an absolute HTTP or HTTPS URL. IDP Identifier (II): an Identifier for an OpenID Provider. User-Supplied Identifier (USI): an Identifier that was presented by the end user to the Relying Party, or selected by the user
at the OpenID Provider. Figure 15: RP accepts user's credential. During the initiation phase of the protocol, an end user may enter either their own Identifier or an IDP Identifier. If an IDP Identifier is used, the IDP may then assist the end user in selecting an Identifier to share with the Relying Party. Claimed Identifier (CI): an Identifier that the end user claims to own; the overall aim of the protocol is to verify this claim. IDP-Local Identifier (ILI): an alternate Identifier for an end user that is local to a particular IDP and thus not necessarily under the end user's control, for example, http://eqh-ball.pip.verisignlabs.com. Generate (G): an identifier which has been generated by the TPM or vTPM. Challenge (CH): the TA is the attester that sends the challenge to the attestees (IDP, user, and RP) to check their integrity.
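The discovery step described below begins by normalizing the User-Supplied Identifier. A rough sketch of that normalization, following the rules outlined in OpenID Authentication 2.0 (simplified, with illustrative function names), might look like this:

```python
# Simplified USI normalization: XRIs are recognized by a leading
# global context symbol (after stripping any "xri://" prefix);
# anything else is treated as a URL, adding a scheme if missing and
# dropping any fragment. Real discovery additionally follows
# redirects and resolves XRDS documents, which is omitted here.

def normalize_usi(usi):
    usi = usi.strip()
    if usi.startswith("xri://"):
        usi = usi[len("xri://"):]
    if usi[:1] in ("=", "@", "+", "$", "!"):
        return ("xri", usi)                    # XRI identifier
    if not usi.startswith(("http://", "https://")):
        usi = "http://" + usi
    return ("url", usi.split("#", 1)[0])       # URL identifier, no fragment

print(normalize_usi("iqbalqazi.usmopenid.com"))
# → ('url', 'http://iqbalqazi.usmopenid.com')
```

For example, the bare identifier from the simulation above becomes an absolute HTTP URL on which the RP can then perform discovery.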
As mentioned before, the TA by default uses binary remote attestation to check the parties' integrity, but in this scenario it uses the DAA technique for integrity checking. Trusted Authority (TA): an entity that verifies the mutual attestation process using the TPM- (vTPM-) enabled platforms. Figure 16 shows all the objects and subjects of the proposed model and their place in the final OpenID authentication architecture. These subjects and objects help the end user prove that he or she owns a URL or XRI through a series of communications between the user and the website to which he or she is authenticating. The user submits the URI/XRI identifier to the OpenID Relying Party through the User-Agent's browser. 4.1.3. Protocol Overview. Based on OpenID authentication [22], in the first step the end user initiates authentication by presenting a USI to the RP via their UA. To initiate OpenID Authentication, the RP should present the end user with a form that has a field for entering a USI. Next, after normalizing the USI, the RP performs discovery on the USI and establishes the IEU that the end user uses for authentication. Upon successful completion of discovery, the RP
will have the IEU and protocol version; if the end user did not enter an II, the CI and ILI will also be present. In this scenario it is assumed that the user entered the II, so the CI and ILI are omitted. The RP and the IDP establish an association based on their TPM (VTPM). The IDP uses the association to sign subsequent messages, and the RP uses it to verify those messages; OpenID Authentication supports two signature algorithms, HMAC-SHA256 with a 256-bit key and HMAC-SHA1 with a 160-bit key. Once the RP has successfully performed discovery and created an association with the discovered IDP, it can send an authentication request to the IDP to obtain an assertion. An authentication request is an indirect request. In step three, the RP redirects the end user's UA to the IDP with an OpenID Authentication request. This is indirect communication, in which messages are passed through the User-Agent; it can be initiated by either the Relying Party or the IDP. Indirect communication is used for authentication requests and authentication responses. There are two methods for indirect communication: HTTP redirection and HTML form submission.
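The association-based signing mentioned above can be illustrated with OpenID 2.0's key-value encoding, in which each signed field contributes a "key:value" line and the HMAC is computed over the concatenation using the association's MAC key; the RP verifies an assertion by recomputing the same value. The parameter values and MAC key below are made-up examples, and the helper name is an assumption.

```python
import base64
import hashlib
import hmac

# Sketch of signing under an HMAC-SHA256 association: build the
# key-value token string from the fields listed in openid.signed,
# then MAC it with the shared association key and base64-encode.

def sign(params, signed_fields, mac_key):
    token = "".join(f"{k}:{params['openid.' + k]}\n" for k in signed_fields)
    digest = hmac.new(mac_key, token.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

params = {
    "openid.mode": "id_res",
    "openid.identity": "http://iqbalqazi.usmopenid.com/",
    "openid.return_to": "https://rp.example/return",
}
mac_key = b"\x00" * 32                  # example 256-bit association key
sig = sign(params, ["mode", "identity", "return_to"], mac_key)
print(sig)   # the RP recomputes this value and compares it to openid.sig
```

Because both ends hold the same MAC key, any tampering with a signed field in transit makes the RP's recomputed value differ from openid.sig, and the assertion is rejected.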
When an authentication request comes from the UA via indirect communication, the IDP should determine that an authorized end user wishes to complete the authentication; if so, the IDP should send a positive assertion to the RP. Next, in step four, the IDP establishes whether the end user is authorized to perform OpenID Authentication and wishes to do so. In step five, the TA checks the trust objectives based on the parties' TPM (VTPM). The IDP then redirects the end user's UA back to the RP with either an assertion that authentication is approved or a message that authentication failed. This response includes openid.ns, openid.mode, openid.op_endpoint, openid.claimed_id, openid.identity,
openid.return_to, openid.response_nonce, openid.invalidate_handle, openid.assoc_handle, openid.signed, and openid.sig. Figure 16: Objects and subjects of the proposed model. If the IDP is unable to identify the end user, or the end user does not or cannot approve the authentication request, the IDP should send a negative assertion to the RP as an indirect response. In step six, the last step, the RP verifies the information received from the IDP, including checking the return URL, verifying the discovered information, checking the nonce, and verifying the signature by using the TPM (VTPM) association established with the IDP. 4.1.4. Phishing Attack. A special type of Phishing attack is
one where the RP is a rogue party acting as a fake RP. The RP would perform discovery on the USI and, instead of redirecting the UA to the IDP, would proxy the IDP through itself. This would allow the RP to capture the credentials that the end user provides to the IDP. The TA prevents this sort of attack by checking the integrity of the RP and IDP. 4.1.5. Definition of System State. Our proposed model is a state machine model, and its state is expressed by Figures 16, 17, and 18. It consists of subjects, objects, access attributes, an access matrix, subject functions, and object functions. The subjects of the proposed model are Challenge (CH), Generate TPM (VTPM) Identifier (G), User-Supplied Identifier (USI), IDP Identifier (II), IDP Endpoint URL (IEU), Identifier (I), Claimed Identifier (CI), and IDP-Local Identifier (ILI). The objects of the proposed model are Relying Party (RP), OpenID Provider (IDP), User-Agent (UA), Trust Authority (TA), Trusted Platform Module (TPM), and Virtual TPM (vTPM). The access attributes are read, write, read-and-write, and execute. Figure 17: Access class matrix and relationship between objects and subjects. The access matrix is an access
matrix, where each member represents the access authority of a subject to an object. The proposed model is a state machine V, where each member of V is a state v. (1) v ∈ V, where v is an ordered quadruple. (2) v = (b, M, f, H), where (3) b ⊆ (S × O × A), (4) S is the set of subjects, (5) O is the set of objects, (6) A = [r, w, rw, e] is the set of access attributes, (7) M is the access matrix, where Mij ⊆ A signifies the access authority of Si to Oj, and (8) f ∈ F is the access class function, denoted f = (fs, fo).
The proof below reduces to state 4, as shown in Figure 1: if state 4 is secure, all the states will be secure, and based on Section 4.1.4 this step is also secure, so the system reaches the secure state.
(19) Σ(R, D, W, z0) ⊂ X × Y × Z. (20) (x, y, z) ∈ Σ(R, D, W, z0) (21) if and only if (xt, yt, zt, zt−1) ∈ W for each t ∈ T, where z0 is the initial state. Based on the above definition, Σ(R, D, W, z0) is secure if and only if all states of the system, that is, (z0, z1, ..., zn), are secure states. The BLP model has several axioms (properties). Figure 18: Security level, subject, and object of the proposed model.
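Definitions (19)-(21) amount to an inductive check: the system is secure if and only if the initial state and every state reached from it under the rule relation W are themselves secure. A toy sketch (the names mirror the text's notation, and the rule and predicate are illustrative, not part of the paper's simulation):

```python
# The system is secure iff z0 and every state produced by applying
# the rule relation W (here a function R x V -> D x V) is secure.

def system_is_secure(z0, requests, W, is_secure_state):
    state = z0
    if not is_secure_state(state):
        return False
    for req in requests:
        decision, state = W(req, state)   # decision in {yes, no, error, question}
        if not is_secure_state(state):
            return False
    return True

# Toy instance: states are integers, each request advances the state,
# and (as in the text) every state is assumed secure except state 4.
W = lambda req, z: ("yes", z + 1)
print(system_is_secure(0, range(3), W, lambda z: z != 4))   # True
print(system_is_secure(0, range(5), W, lambda z: z != 4))   # False
```

This is exactly the reduction used in the text: showing that the one doubtful state is secure suffices to conclude that the whole state sequence is secure.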
(9) fs is the security level function of subjects, which includes the confidentiality level c1(s) and the category level c4(s). Figure 17 shows the security levels in BLP and the relation between subjects and objects. (10) fo signifies the security level function of objects. Figure 18 shows the relation among all the subjects, objects, security functions, and security levels of the proposed model. As shown in that figure, the confidentiality of the TPM (VTPM) is the highest in the state machine and that of the User-Agent is the lowest; therefore, (11) the confidentiality levels are c1(TPM), c2(TA), c3(IDP), c4(RP), and c5(UA). (12) H signifies the hierarchy existing in the proposed model, as shown in Figure 18. (13) W: R × V → D × V represents the rules of the proposed model, where D × V is the system response together with the next state, R is the set of requests, and D is the set of decisions, namely yes, no, error, and question. In this study question is important because a response equal to question means that the current rule cannot handle the request. (14) R = [R1, R2, ..., Rn], where R is the list of data exchanged between objects. (15) W(ω) ⊆ R × D × V × V. (16) (Rk, Dm, v∗, v) ∈ W(ω) (17) if and only if Dm ≠ question and there exists a unique ω, 1 ≤ ω ≤ n; this means that the current rule is valid and that the subject and object are also valid, because an object verifies the TPM (VTPM) of the other object (the attestee) through the request (challenge) for integrity checking. (18) Consequently, the result is (Dm, v∗) = ρω(Rk, v), which shows that for every request in R there is a unique valid response. We should prove that each state of the proposed model is secure; it has been assumed that each state is secure except state 4, as discussed above. These axioms can be used to limit and restrict state transformation: if every state of the system is secure, then the system is secure. In this study the simple-security property is adopted. 4.1.6. Simple-Security Property (SSP). (22) v = (b, M, f, H) (23) satisfies the SSP if and only if, (24) for all s ∈ S, (25) o ∈ O ⇒ [(o ∈ (b: s, r)) ⇒ (fs(s) ≥ fo(o))], (26) that is, c1(s) ≥ c2(o) and c3(s) ⊇ c4(o); for example, (27) c1(I) ≥ c2(TPM) and (28) c1(IEU) ≥ c2(RP). Based on Figure 18 and the SSP, every object of the proposed model in the defined security levels can write to a lower level, and every subject can read at its own or a lower level but cannot read at a higher level. The star property, discretionary security, and the compatibility property are further properties that can be used to limit and restrict state transformation; they will be used in future work. 4.2. Simulation Environment and Evaluation. In the software development lifecycle, testing and evaluation is the final step in developing the software. In fact, the testing process verifies whether the system has achieved its aim, objectives, and determined scope. Therefore, the proposed model is evaluated to show whether it attains the stated objectives, and to do this we need to conduct a test. Based on the objectives, the aim of this study is to mitigate identity theft and Phishing attacks. Thus, a fraudulent
RP will be inputted to the system, which then analyzes the dataset based on the username and TPM (VTPM) conditions discussed in Section 3. Figures 19, 20, and 21 show this evaluation. First, the user goes to the fake service provider, http://www.skkydrive.com/. Next, after the username check passes, the TPM (VTPM) is checked. After checking Skkydrive's TPM (VTPM), the TA sends a Phishing message via the user's browser and reports the false integrity of the fake RP. Figure 19: Phishing Relying Party website. Figure 20: Checking identity based on TPM.
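Returning to the simple-security property of Section 4.1.6, the no-read-up check over the five security levels of Figure 18 can be sketched as follows; the numeric ranks are illustrative stand-ins for the confidentiality levels c1 through c5, not values from the paper:

```python
# Highest confidentiality at the TPM, lowest at the User-Agent,
# following Figure 18. SSP: a subject may read an object only if
# the subject's level dominates the object's level (no read-up).

LEVEL = {"TPM": 5, "TA": 4, "IDP": 3, "RP": 2, "UA": 1}

def satisfies_ssp(accesses):
    """accesses: iterable of (subject_level, object_level, attribute)."""
    return all(not (attr in ("r", "rw") and s < o) for s, o, attr in accesses)

print(satisfies_ssp([(LEVEL["UA"], LEVEL["TPM"], "r")]))    # False: read-up
print(satisfies_ssp([(LEVEL["TA"], LEVEL["RP"], "r"),
                     (LEVEL["RP"], LEVEL["TA"], "w")]))     # True: read-down, write-up
```

In this framing, a low-level party such as the User-Agent can never read TPM-level secrets, which is the confidentiality guarantee the BLP adoption is meant to capture.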
4.3. Security Analysis. In this section, we examine the strengths and weaknesses of the proposed solution in terms of security and consider possible improvements. 4.3.1. Mutual Authentication. Mutual authentication, also called two-way authentication, is a process or technology in which both entities in a communications link authenticate each other. In our proposed model, the IDP authenticates a User-Agent by asking the user to input the username and password. As the TA uses the TPM (VTPM) to approve the IDP, which is known only to the user, other users cannot guess the correct TPM (VTPM) because of its characteristics. Meanwhile, the user authenticates the RP website and deals with it by presenting a USI to the RP. We assume that an attacker can send the UA to a fake IDP; in that case, the TA reports to the end user that he or she is on a Phishing website. The absence of a TA will cause OpenID authentication to fail. 4.3.2. Compromising the TA. The strength of the proposed model lies in the TPM (vTPM), which is a secure vault for
certificates, and in the fact that its components are difficult to compromise. Figure 21: TA reports phishing message. The attacker is assumed to be able to call TPM (vTPM) commands without bounds and without knowing the TPM (vTPM) root key, expecting to obtain or replace the user key. The analysis goal in the TPM (vTPM) study is to guarantee the corresponding properties of the IDP, the user, and the RP execution, and their integrity [21, 23]. Also, the overall aim of the proposed model is verification through the TA. Therefore, to compromise the solution, an attacker must at least know the TPM content and then target the components accordingly. An essential axiom is that a TPM (vTPM) is bound to one and only one platform, which is what this study uses to check integrity. In [24], Black Hat showed how one TPM could be physically compromised to gain access to the secrets stored inside, but Microsoft maintained that using a TPM is still an effective means to help protect sensitive information and accordingly takes advantage of a TPM. The attack shown requires physical possession of the PC and requires someone with specialized equipment,
intimate knowledge of semiconductor design, and advanced skills. While this attack is certainly interesting, these methods are difficult to duplicate and, as such, pose a very low risk to our proposed model. Furthermore, the IDP asks the user for his or her credential to gain security assurance. As a result, an attacker must not only be able to retrieve the appropriate secret from the TPM (vTPM), but also find the user credential (username and password). If the credential is sufficiently complex, this poses a hard, if not infeasible, problem to solve in order to obtain the key required for a Phishing attack in an OpenID environment. 4.3.3. Authentication in an Untrustworthy Environment. Sometimes users have to sign into their web accounts in an untrustworthy environment, for example, accessing a credit card account using a public internet terminal at a university, a shared computer, or a pervasive environment. Our solution is also applicable to such cases. Here, the user must sign into his or her OpenID account via the trusted device and then simply follow the OpenID login instructions. Our proposed model requires
checking the TPM for each authentication, regardless of cookies, attesting the integrity of the trusted device. Because of the use of the TPM and a trusted device, the user can read the authentication information from the device and then log into the website on the untrustworthy device. Khiabani et al. [21] described that pervasive systems are weaving themselves into our daily life, making it possible to collect user information invisibly, in an unobtrusive manner, even by unknown parties, so OpenID security would be a major issue in these environments. The huge number of interactions between users and pervasive devices necessitates a comprehensive trust model which unifies different trust factors, such as context, recommendation, and history, to calculate the trust level of each party precisely. Trusted computing enables effective solutions to verify the trustworthiness of computing platforms in an untrustworthy environment.
4.3.4. Insider Attack. A weak client password or a server secret key stored on the server side is vulnerable to any insider who has access to the server. Thus, in the event that this information is exposed, the insider is able to impersonate either party. The strength of our proposed protocol is that the TPM key is an essential part of the Trusted OpenID protocol, and the TA stores the TPM database with its TPM key, which cannot be accessed by attackers. Therefore, our scheme can prevent an insider from stealing sensitive authentication information. 4.3.5. The Man-in-the-Middle Attack. In the man-in-the-middle (MITM) attack, a malicious user located between two communicating devices can monitor or record all messages exchanged between the two devices. Suppose an attacker tries to launch an MITM attack on a user and a website, and that the attacker can monitor all messages sent to or received by the user. The idea of making conventional Phishing, pharming, and MITM attacks useless applies in particular to private users, who usually are not connected to a well-configured network. Furthermore, private users often administer their computers by
themselves. Using public key infrastructures (PKI), stronger mutual authentication such as secret keys and passwords, latency examination, second-channel verification, and one-time passwords are some of the approaches that have been released to prevent MITM attacks in the network area. Mat Nor et al. [16] described that many security measures have been implemented to prevent MITM attacks, such as the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols; adversaries have responded with a new variant of the MITM attack, known as the Man-in-the-Browser (MitB) attack, which tries to manipulate the information between a user and a browser and is much harder to detect due to the nature of the attack. The trust relationship between interacting platforms has become a major element in increasing user confidence when dealing with Internet transactions, especially in online banking and electronic commerce. Therefore, in our proposed model, in order to ensure the validity of the integrity measurement from the genuine TPM (vTPM), the Attestation Identity Key (AIK) is used to sign the integrity measurement. The AIK is an asymmetric key linked to the unique Endorsement Key (EK) certified by its manufacturer, which
can identify the TPM identity and play the Certificate Authority role against MITM attacks. 5. Conclusion. In this research we first conducted a thorough literature review to become familiar with cloud computing, cloud architecture, federated identity, SAML, OAuth, OpenID, trusted computing, and the problems of authentication in cloud computing. Next, it was established that the Phishing attack, identity theft, the MITM attack, and the DNS attack are common attacks and vulnerabilities in the cloud. The related work emphasized research proposing defenses against identity theft and Phishing attacks in the federated identity environment. The proposed algorithm, which combines trusted computing with OpenID and adopts trust in the federated area, was then explained. The TPM (vTPM) is an essential part of trusted computing, and in this research the TPM (vTPM) is also the main defense against identity theft. The proposed model has six steps based on the OpenID data exchange flow. For implementation, Visual Studio and SQL are suitable tools to simulate the proposed model. The last part of the project tested and evaluated it with real-world datasets. In this part, based on the statistics and the security model, we
proposed BLP and adapted it for the Trusted OpenID model. The main goal of this research was to achieve the predefined security objectives based on the problem statement. Besides, the simulation method was evaluated against the Phishing attack, and the strengths and weaknesses of the proposed solution were assessed in terms of security, with possible improvements considered. Conflict of Interests. The authors declare that there is no conflict of interests regarding the publication of this paper. Acknowledgments. The authors would like to express their greatest appreciation to the Ministry of Higher Education (MOHE) Malaysia, through the Prototype Development Research Grant Scheme (R.K130000.7338.4L622), for financial support of the research done at the Advanced Informatics School (AIS), Universiti Teknologi Malaysia (UTM). References. [1] P. Mell and T. Grance, "The NIST definition of cloud
computing (draft)," NIST Special Publication, vol. 800, p. 145, 2011. [2] A. Earls, "Gartner takes on cloud identity management," 2012, http://searchsoa.techtarget.com/tip/Gartner-takes-on-cloud-identity-management. [3] U. F. Rodriguez, M. Laurent-Maknavicius, and J. Incera-Dieguez, Federated Identity Architectures, 2006. [4] I. M. Abbadi and A. Martin, "Trust in the cloud," Information Security Technical Report, vol. 16, no. 3-4, pp. 108–114, 2011. [5] A. Carmignani, Identity Federation Using SAML and WebSphere
Software, 2010. [6] TCG, "Trusted computing," 2012, http://www.trustedcomputinggroup.org/trusted computing. [7] L. Yan, C. Rong, and G. Zhao, "Strengthen cloud computing security with federal identity management using hierarchical identity-based cryptography," Cloud Computing, pp. 167–177, 2009. [8] S.-T. Sun, E. Pospisil, I. Muslukhov, N. Dindar, K. Hawkey, and K. Beznosov, "What makes users refuse web single sign-on?: an empirical investigation of OpenID," in Proceedings of the 7th Symposium on Usable Privacy and Security (SOUPS '11), p. 4, July 2011. [9] S. Wang, An Analysis of Web Single Sign-on, 2011. [10] Hodges, Johansson, and Morgan, Towards Kerberizing Web
Identity and Services, Kerberos Consortium, 2008. [11] A. Armando, R. Carbone, L. Compagna, J. Cuellar, and L. Tobarra, "Formal analysis of SAML 2.0 web browser single sign-on: breaking the SAML-based single sign-on for Google Apps," in Proceedings of the 6th ACM Workshop on Formal Methods in Security Engineering (FMSE '08), pp. 1–9, October 2008. [12] K. Wang and Q. Shao, "Analysis of cloud computing and information security," in Proceedings of the 2nd International Conference on Frontiers of Manufacturing and Design Science (ICFMD '11), pp. 3810–3813, Taichung, Taiwan, December 2011. [13] P. Madsen, Y. Koga, and K. Takahashi, "Federated identity management for protecting users from ID theft," in Proceedings of the Workshop on Digital Identity Management, pp. 77–83, 2005. [14] J. H. You and M. S. Jun, "A mechanism to prevent RP phishing in OpenID system," in Proceedings of the IEEE/ACIS 9th International Conference on Computer and Information Science (ICIS
'10), pp. 876–880, 2010. [15] X. Ding and J. Wei, "A scheme for confidentiality protection of OpenID authentication mechanism," in Proceedings of the International Conference on Computational Intelligence and Security (CIS '10), pp. 310–314, December 2010. [16] F. B. Mat Nor, K. Abd Jalil, and J.-L. Ab Manan, "Remote user authentication scheme with hardware-based attestation," Communications in Computer and Information Science, vol. 180, no. 2, pp. 437–447, 2011. [17] S.-T. Sun, Y. Boshmaf, K. Hawkey, and K. Beznosov, "A billion keys, but few locks: the crisis of web single sign-on," in Proceedings of the New Security Paradigms Workshop (NSPW '10), pp. 61–71, September 2010. [18] F. B. Mat Nor, K. Abd Jalil, and J.-L. Ab Manan, "Mitigating man-in-the-browser attacks with hardware-based authentication scheme," International Journal of Cyber-Security and Digital Forensics, vol. 1, no. 3, p. 6, 2012.
[19] C. Latze and U. Ultes-Nitsche, "Stronger authentication in e-commerce: how to protect even naïve users against phishing, pharming, and MITM attacks," in Proceedings of the International Association of Science and Technology for Development (IASTED '07), pp. 111–116, October 2007. [20] P. Urien, "An OpenID provider based on SSL smart cards," in Proceedings of the 7th IEEE Consumer Communications and Networking Conference (CCNC '10), pp. 1–2, Las Vegas, Nev, USA, 2010. [21] H. Khiabani, J.-L. A. Manan, and Z. M. Sidek, "A study of trust & privacy models in pervasive computing approach to trusted computing platforms," in Proceedings of the International Conference for Technical Postgraduates (TECHPOS '09), pp. 1–5, December 2009. [22] B. Ferg, OpenID Authentication 2.0 - Final, OpenID Community, 2007.
[23] M. Donovan and E. Visnyak, "Seeding the cloud with trust: real world trusted multi-tenancy use cases emerge," 2011, http://www.ittoday.info/Articles/Trust/Trust.htm. [24] P. Cooke, "Black Hat TPM Hack and BitLocker," 2010, http://blogs.windows.com/windows/b/windowssecurity/archive/2010/02/10/black-hat-tpm-hack-and-bitlocker.aspx. Eghbal Ghazizadeh, Mazdak Zamani, Jamalul-lail Ab Manan, and Mojtaba Alizadeh.