2. Who are you going to listen to for the next 69 minutes?
– Work at Symantec
– Security Researcher and Developer
– I work primarily on SSDLC implementation, but not limited to it:
– Web Application Vulnerability Assessments / Pen-Tests,
– Secure Code Reviews,
– Architecture Risk Assessments,
– Threat Modeling,
– Secured Software Architecture,
– Training,
– Mobile-security assessments,
– Threat telemetry- maintenance & automation,
– Remediation Consulting,
– Security Automation,
– DevOps: Security in the Build
– Java, Python, JS, Bash, and PHP
– “Consultant”
– You can reach me @
– Twitter - @deepamkanjani
– mailto:deepamkanjani (at) gmail (dot) com
null/OWASP/G4H meet - August 2017
3.
No matter how much care you take during development of any software, security
issues creep in.
4. What this talk is not about
– Learning In-Depth Code Reviews or Threat Modeling
– Getting into details of how a particular language or an architecture can lead to security issues
– To help you confirm an exploit of an issue
– Improve your code review process
– Ground-Breaking Research or a New Tool
– Learning how to fix issues.
– Answering Questions (if any)
6. Why should we talk about it?
– Code is the only advantage organizations have over the hackers, and they need to utilize this fact in a planned way.
– Relying only on penetration testing is definitely not a good idea.
– When you have the code, use the code!
7. 6 Bubbles of Code Review Observations
– Tribal Knowledge
– Configuration Errors
– Stupid Mistakes
– Learning Opportunities and Re-Design
– Functional Leaks
– System Integration Miss (Overlook)
Ref: Independent Research and Excella Results
8. 6 Drops of Code Review Observations
9. Mechanics of Code Reviews, Simplified
– Identify the objectives of the review
– Identify areas/components of interest (points of interest)
– Review the code
10. So how can you go about it?
– Identify what we are missing from a SECURITY standpoint.
– Automate what can be automated, so that you can concentrate on manual checks.
11. See if you see…
string query = "SELECT * FROM items WHERE username = '" +
userName + "' AND password = '" + password.Text + "'";
$command = 'ls -l /home/' . $userName;
system($command);
char buf[24];
printf("Please enter your name\n");
gets(buf);
$username = $_GET['username'];
echo '<div class="header">Welcome, ' . $username . '</div>';
BankAccount account = null;
Account = new BankAccount();
return account;
12. See if you see…
SELECT * FROM users WHERE username = 'Administrator' AND
password = 'secret'; DELETE FROM users; --';
ls -l /home/; rm -rf /
char buf[24];
printf("Please enter your name\n");
gets("\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh"
);
$username = $_GET['username'];
echo '<div class="header">Welcome, <script
language="Javascript">alert("You have been attacked!");</script>
</div>';
BankAccount account = null;
Account = new BankAccount();
return account;
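The deck deliberately leaves the fixes out of scope, but for contrast, here is a hedged Python sketch of the safe counterparts to the injection patterns above: a parameterized SQL query, process execution with an argument list instead of a shell string, and output encoding before echoing user input into HTML. The table, payload, and paths are illustrative, using only stdlib modules (`sqlite3`, `subprocess`, `html`).

```python
import html
import sqlite3
import subprocess

# Parameterized query: the driver binds user input as data, never as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "secret"))
user_name = "Administrator'; DELETE FROM users; --"
rows = conn.execute(
    "SELECT * FROM users WHERE username = ?", (user_name,)
).fetchall()
print(rows)  # []  (the payload is treated as a plain string, nothing deleted)

# Argument-list process execution: no shell ever parses the user input.
listing = subprocess.run(
    ["ls", "-l", "/home/" + user_name], capture_output=True, text=True
)

# Context-aware output encoding before echoing user input into HTML.
username = '<script>alert("attacked")</script>'
print('<div class="header">Welcome, ' + html.escape(username) + "</div>")
```

The null-return `BankAccount` snippet is a different class of bug (a logic error a compiler warning or code review should catch), which is why it appears unchanged on both slides.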
13. In general there are 2 approaches
– Control Flow Analysis:
– The reviewer walks through the logical conditions in the code.
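A toy example of what a control flow review walks through: an authorization function whose fall-through branch silently grants access. The roles and function names are hypothetical, chosen only to make the branch structure visible.

```python
def can_delete(user, record):
    """Buggy: unknown roles fall through to the permissive default."""
    if user["role"] == "admin":
        return True
    if user["role"] == "auditor":
        return False
    return True  # a control flow review should flag this default-allow path

def can_delete_fixed(user, record):
    """Default-deny: only an explicitly allowed role may delete."""
    return user["role"] == "admin"

guest = {"role": "guest"}
print(can_delete(guest, None))        # True: the bug
print(can_delete_fixed(guest, None))  # False: default deny
```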
14. In general there are 2 approaches
– Data Flow Analysis:
– Data flow analysis is the mechanism used to trace data from the points of input to the points of output.
– This will help you find bugs associated with poor input handling.
15. In general there are 2 approaches: then where did the third come from?
– Taint Analysis:
– Taint analysis attempts to identify variables that have been 'tainted' with user-controllable input and traces them to possibly vulnerable functions, also known as 'sinks'.
– If a tainted variable gets passed to a sink without first being sanitized, it is flagged as a vulnerability.
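The source-to-sink idea can be sketched with a deliberately naive runtime version: mark values from an untrusted source as tainted, clear the mark on sanitization, and refuse tainted values at the sink. Real taint analyzers work statically on the program's intermediate representation; all names here are illustrative.

```python
class Tainted(str):
    """A string subclass marking user-controlled data."""

def source(raw):
    # Everything arriving from the user starts out tainted.
    return Tainted(raw)

def sanitize(value):
    # Sanitization returns a plain (untainted) string.
    return str(value).replace("'", "''")

def sink(fragment):
    # The sink refuses tainted data; a static analyzer would flag it instead.
    if isinstance(fragment, Tainted):
        raise ValueError("tainted data reached a sink")
    return "SELECT * FROM users WHERE name = '" + fragment + "'"

user_input = source("O'Brien")
try:
    sink(user_input)               # flagged: tainted value at the sink
except ValueError as err:
    print(err)
print(sink(sanitize(user_input)))  # passes once sanitized
```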
16. There is another one.
– Lexical Analysis: the process converts source code syntax into 'tokens' of information, in an attempt to abstract the source code and make it easier to manipulate.
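Python's stdlib `tokenize` module illustrates the idea: source text becomes a token stream, and a lexical rule can then match on token kinds and values rather than raw text. The banned-function list is an assumption for the example.

```python
import io
import tokenize

code = "gets(buf)"
tokens = [
    (tokenize.tok_name[tok.type], tok.string)
    for tok in tokenize.generate_tokens(io.StringIO(code).readline)
]
print(tokens)  # NAME/OP tokens instead of characters

# A lexical checker flags NAME tokens matching a banned-function list,
# so `gets (buf)` or odd spacing no longer defeats the match.
banned = {"gets", "strcpy"}
flagged = [s for kind, s in tokens if kind == "NAME" and s in banned]
print(flagged)  # ['gets']
```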
18. A Deeper Look into the Code…
Accepting User Input:
request.form
request.querystring
request.url
request.httpmethod
request.headers
request.cookies
TextBox.Text
HiddenField.Value
Accepting User Input [Others]:
InputStream
request.accepttypes
request.browser
request.files
request.item
request.certificate
request.rawurl
request.servervariables
request.urlreferrer
request.useragent
request.userlanguages
request.IsSecureConnection
request.TotalBytes
request.BinaryRead
recordSet
19. Identify what we are missing
– “The inspection of code to identify security weaknesses”
– “A systematic approach to finding security vulnerabilities”
– Code reviews: effectiveness of security controls, exercise all code paths, all instances of a vulnerability, find design flaws, learn remediation
20.
Ref: https://www.slideshare.net/skoussa/simplified-security-code-review-process
21. Strengths
– Scalability
– Code-oriented bugs, a.k.a. mal-coded problems like Buffer Overflows and SQL Injection, can be reported with higher confidence
– All instances of a particular vulnerability can be discovered (in most cases)
– Easier RCAs: Root Cause Analysis (source to sink)
– Uncommon security flaws
– Discovery of usage for existing security controls, like global blacklists
22. Weaknesses
– Several security vulnerabilities are very difficult to find
automatically, such as authentication problems, access control
issues, insecure use of cryptography, etc.
– High numbers of false positives from tools.
– Cannot discover most configuration issues, as configuration is not bundled with the code.
– Difficult to 'prove' that an identified security issue is an actual vulnerability.
– Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.
– Limitations: false positives and false negatives.
23. Which brings us to Threat Modeling 101
25. Threat Modeling
– The main aim of threat modeling is to identify the important
assets/functionalities of the application and to protect them.
26. Terms
– Asset. A resource of value, such as the data in a database or on the file system; a system resource.
– Threat. A potential occurrence, malicious or otherwise, that might damage or compromise your assets.
– Vulnerability. A weakness in the system that makes a threat possible; in other words, it aids the attacker in exploiting a particular threat.
– Attack (or exploit). An action taken by someone or something that harms an asset. This could be someone following through on a threat or exploiting a vulnerability.
– Countermeasure. A safeguard that addresses a threat and mitigates risk.
27. STRIDE
– A threat categorization such as STRIDE is
useful in the identification of threats by
classifying attacker goals such as:
– Spoofing
– Tampering
– Repudiation
– Information Disclosure
– Denial of Service
– Elevation of Privilege.
28. Security Controls
– S: Session Management
– A: Authentication
– D: Data/Input Validation
– S: Secure Code Environment
– C: Cookie Management
– A: Authorization
– L: Logging/Auditing
– E: Error Handling/Exception Handling
– C: Cryptography
37. DREAD and the Generic Risk Model
– Damage: how big would the damage be if the attack succeeded?
– Reproducibility: how easy is it to get the attack to work again?
– Exploitability: how much time, effort, and expertise is needed to exploit the threat?
– Affected Users: if the threat were exploited, what percentage of users would be affected?
– Discoverability: how easy is it for an attacker to discover this threat?
– Generic Risk Model: Risk = Likelihood x Impact
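A sketch of how the two models above turn into numbers. One common convention (and it does vary by team) rates each DREAD factor 0-10 and averages them; the generic model is the slide's Risk = Likelihood x Impact. The scales and example ratings here are illustrative assumptions.

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average of the five DREAD factors, each rated 0-10 by convention."""
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(0 <= f <= 10 for f in factors):
        raise ValueError("each factor must be rated 0-10")
    return sum(factors) / len(factors)

def risk(likelihood, impact):
    """Generic risk model from the slide: Risk = Likelihood x Impact."""
    return likelihood * impact

# Hypothetical rating for the SQL injection shown earlier in the deck.
score = dread_score(damage=8, reproducibility=10, exploitability=7,
                    affected_users=10, discoverability=10)
print(score)  # 9.0
```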
38. Countermeasure Identification: STRIDE
STRIDE Threat & Mitigation Techniques List
– Spoofing Identity: appropriate authentication; protect secret data; don't store secrets
– Tampering with Data: appropriate authorization; hashes; MACs; digital signatures; tamper-resistant protocols
– Repudiation: digital signatures; timestamps; audit trails
– Information Disclosure: authorization; privacy-enhanced protocols; encryption; protect secrets; don't store secrets
– Denial of Service: appropriate authentication; appropriate authorization; filtering; throttling; quality of service
– Elevation of Privilege: run with least privilege
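MACs appear above as a tampering mitigation; a minimal sketch using Python's stdlib `hmac` module shows the mechanism. Key management is deliberately simplified here: the hard-coded key is for illustration only and is exactly the "don't store secrets" problem the Spoofing row warns about.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key-not-for-production"  # illustrative only

def tag(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Constant-time comparison so verification doesn't leak timing info."""
    return hmac.compare_digest(tag(message), received_tag)

original = b"transfer $10 to alice"
t = tag(original)
print(verify(original, t))                      # True: untampered
print(verify(b"transfer $9999 to mallory", t))  # False: tampering detected
```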
39. Categorize
– Non-mitigated threats
– Partially mitigated threats
– Fully mitigated threats
40. Mitigation Strategies
– Do nothing: for example, hoping for the best
– Inform about the risk: for example, warning user population
about the risk
– Mitigate the risk: for example, by putting countermeasures in
place
– Accept the risk: for example, after evaluating the impact of the
exploitation (business impact)
– Transfer the risk: for example, through contractual agreements
and insurance
– Terminate the risk: for example, shut down, turn off, unplug or decommission the asset
47. Then, what's next? Where is the strategic path?
– Model – Security – DevOps:
– Automate what can be automated
– Perform validation exercises, like secure development reviews
– Model