KEVIN BACKHOUSE
With the increasing awareness and adoption of DevSecOps, organisations are beginning to fully understand the crucial role security plays, integrating it into every part of the development and deployment process. Processes such as vulnerability disclosures and bug bounty programs, red team exercises, pen-testing initiatives, and static and dynamic code analysis are putting security front and center. These initiatives are proving to be an incredible source of previously unknown vulnerabilities, and fixes are generally implemented and deployed quickly. However, this response is often not quite enough.
In software development, we frequently see the same logical coding mistakes being made repeatedly over the course of a project’s lifetime, and sometimes across multiple projects. Sometimes there are a number of simultaneously active instances of these mistakes, and sometimes there’s only ever one active instance at a time, but it keeps reappearing. When these mistakes lead to security vulnerabilities, the consequences can be severe.
With each vulnerability discovered or reported, if the root cause was a bug in the code, we’re presented with an opportunity to investigate how often this mistake is repeated, whether there are any other unknown vulnerabilities as a result, and implement a process to prevent it reappearing. In this talk, I’ll be introducing Variant Analysis, a process for doing just this, and discuss how it can be integrated into your development and security operations. I’ll also be sharing real-world stories of what has happened when variant analysis was neglected, as well as stories of when it’s saved the day.
2011 CodeEngn Conference 05
DBI stands for Dynamic Binary Instrumentation. It is a technique for injecting arbitrary code, for a special purpose, into a running process or program. With it, one can handle dynamically generated code, locate specific code, and analyze running processes. It is mainly used in computer architecture research and in program and thread analysis. This talk introduces DBI through the concept of taint analysis, the various tools and how to use them, simple examples, and analysis of recent vulnerabilities.
http://codeengn.com/conference/05
Using static code analysis tools to detect and fix the identified issues is very important for improving the quality and security of the code base.
CodeChecker (https://github.com/Ericsson/codechecker) is an open-source analyzer tooling, defect database and viewer extension for the Clang Static Analyzer and Clang-Tidy.
It provides a number of additional features:
- Good visualization of problems in the code
- Overview of results for the whole product
- Filtering
- Cross translation unit (CTU) analysis and statistical checker support
- Suppression handling
- And many others...
These features simplify the follow-up of analysis results and make it more efficient.
The video demonstrates an overview of the features and capabilities of CodeChecker, along with a description of, and recommendations for, how to introduce new analysis tools.
Recording of the demo: https://youtu.be/sQ2Qj0kHoRY published in C++ Dublin User group https://www.youtube.com/channel/UCZ4UNE_1IMUFfAhcdq7CMOg/
Useful links:
open source project: https://github.com/Ericsson/codechecker
http://codechecker-demo.eastus.cloudapp.azure.com/login.html#
demo/demo
https://codechecker.readthedocs.io/en/latest/
http://clang-analyzer.llvm.org/available_checks.html
http://clang.llvm.org/extra/clang-tidy/checks/list.html
Other related videos about the Clang Static Analyzer and CodeChecker that go into more depth on how the Clang Static Analyzer works:
Clang Static Analysis - Meeting C++ 2016 Gabor Horvath
https://www.youtube.com/watch?v=UcxF6CVueDM
CppCon 2016: Gabor Horvath "Make Friends with the Clang Static Analysis Tools"
https://www.youtube.com/watch?v=AQF6hjLKsnM
Temperature sensor with an LED matrix display (Arduino controlled) - TechLeap
The basic idea was to build a simple, cheap, manually calibrated temperature-sensing circuit, then display the temperature in real time on an 8x8 LED matrix.
Approaches and techniques for statically finding a multitude of issues in source code have been developed in the past. A core property of these approaches is that they are usually targeted towards finding only a very specific kind of issue and that the effort to develop such an analysis is significant. This strictly limits the number of kinds of issues that can be detected.
In this paper, we discuss a generic approach based on the detection of infeasible paths in code that can discover a wide range of code smells ranging from useless code that hinders comprehension to real bugs. Code issues are identified by calculating the difference between the control-flow graph that contains all technically possible edges and the corresponding graph recorded while performing a more precise analysis using abstract interpretation.
We have evaluated the approach using the Java Development Kit as well as the Qualitas Corpus (a curated collection of over 100 Java Applications) and were able to find thousands of issues across a wide range of categories.
Design and implementation of single bit error correction linear block code sy... - TELKOMNIKA JOURNAL
Linear block code (LBC) is an error detection and correction code that is widely used in communication systems. In this paper a special type of LBC, the Hamming code, was implemented and debugged on an FPGA kit, with the ISE integrated software environment used for simulation and for testing the results of the hardware system. The implemented system can correct single-bit errors and detect two-bit errors. The data segment length was chosen to give high reliability and to balance processing speed against the hardware resources available. An adaptive input-data length was considered: up to 248 bits of information can be handled on a Spartan 3E500 with a maximum slice utilization of 43%. The FPGA input/output data buses were customized to meet the requirements, with at most 34% of I/O resources used. The overall hardware design can be considered to give an optimum hardware size for the achievable information rate.
Miranda NG Project to Get the "Wild Pointers" Award (Part 1) - Andrey Karpov
I recently got to the Miranda NG project and checked it with the PVS-Studio code analyzer. And I'm afraid this is the worst project in regard to memory- and pointer-handling issues I've ever seen. Although I didn't study the analysis results too thoroughly, there were still so many errors that I had to split the material into two articles. The first is devoted to pointers and the second to everything else. Enjoy reading, and don't forget your popcorn.
We all make mistakes while programming and spend a lot of time fixing them.
One of the methods which allows for quick detection of defects is source code static analysis.
An air purifier and a network stack? Time for a test! Semihalf Barcamp 13/06/2018 - Semihalf
In this talk we leave aside the quality of the air filtration and focus instead on methods for testing network protocols using the TTCN-3 language. We examine what data our home devices send out into the world and how control over them can be taken.
The following code is an implementation of the producer consumer pro.pdf - marketing413921
The following code is an implementation of the producer-consumer problem using a software locking mechanism. You are required to debug the code to achieve the following tasks:
Task 1: Identifying the critical section
Task 2: Identify the software locks and replace them with a simplified mutex lock and unlock.
HINT: The code provided relies heavily on the in and out pointers of the buffer. You should
make the code run on a single count variable.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#define MAXSIZE 100
#define ITERATIONS 1000
int buffer[MAXSIZE]; // buffer
int nextp, nextc; // temporary storage
int count = 0;
void printfunction(void *ptr)
{
    int count = *(int *) ptr;
    if (count == 0)
    {
        printf("All items produced are consumed by the consumer\n");
    }
    else
    {
        for (int i = 0; i <= count; i = i + 1)
        {
            printf("%d, \t", buffer[i]);
        }
        printf("\n");
    }
}
void *producer(void *ptr)
{
    int item, flag = 0;
    int in = *(int *) ptr;
    do
    {
        item = (rand() % 7) % 10;
        flag = flag + 1;
        nextp = item;
        buffer[in] = nextp;
        in = ((in + 1) % MAXSIZE);
        while (count <= MAXSIZE)
        {
            count = count + 1;
            printf("Count = %d - incremented at producer\n", count);
        }
    } while (flag <= ITERATIONS);
    pthread_exit(NULL);
}
void *consumer(void *ptr)
{
    int item, flag = ITERATIONS;
    int out = *(int *) ptr;
    do
    {
        while (count > 0)
        {
            nextc = buffer[out];
            out = (out + 1) % MAXSIZE;
            printf("\tCount = %d - decremented at consumer\n", count);
            count = count - 1;
            flag = flag - 1;
        }
        if (count <= 0)
        {
            printf("consumer made to wait...faster than producer.\n");
        }
    } while (flag >= 0);
    pthread_exit(NULL);
}
int main(void)
{
    int in = 0, out = 0; // pointers
    pthread_t pro, con;
    // Spawn threads
    int rc1 = pthread_create(&pro, NULL, producer, &count);
    int rc2 = pthread_create(&con, NULL, consumer, &count);
    if (rc1)
    {
        printf("ERROR; return code from pthread_create() is %d\n", rc1);
        exit(-1);
    }
    if (rc2)
    {
        printf("ERROR; return code from pthread_create() is %d\n", rc2);
        exit(-1);
    }
    // Wait for the threads to finish
    // Otherwise main might run to the end
    // and kill the entire process when it exits.
    pthread_join(pro, NULL);
    pthread_join(con, NULL);
    printfunction(&count);
}
Solution
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#define MAXSIZE 100
#define ITERATIONS 1000
int buffer[MAXSIZE]; // buffer
int nextp, nextc; // temporary storage
int count = 0;
void printfunction(void *ptr)
{
    int count = *(int *) ptr;
    if (count == 0)
    {
        printf("All items produced are consumed by the consumer\n");
    }
    else
    {
        for (int i = 0; i <= count; i = i + 1)
        {
            printf("%d, \t", buffer[i]);
        }
        printf("\n");
    }
}
void *producer(void *ptr)
{
    int item, flag = 0;
    int in = *(int *) ptr;
    do
    {
        item = (rand() % 7) % 10;
        flag = flag + 1;
        nextp = item;
        buffer[in] = nextp;
        in = ((in + 1) % MAXSIZE);
        while (count <= MAXSIZE)
        {
            count = count + 1;
            printf("Count = %d - incremented at producer\n", count);
        }
    } while (flag <= ITERATIONS);
    pthread_exit(NULL);
}
void *consumer(void *ptr)
{
    int item, flag = ITERATIONS;
    int out = *(int *) ptr;
    do
    {
        while (count > 0)
        {
            nextc = buffer[out];
            out = (out + 1) % MAXSIZE;
            printf("\tCount = %d - decreme.
Please help with the below 3 questions, the python script is at the.pdf - support58
Please help with the three questions below; the Python script is at the bottom. I cannot get it to work correctly, so please indicate where the error is. Thanks.
Question-01: Approximately how much longer does it take to do a round-trip ping from/to a
remote machine than from/to localhost? (Note, answers may vary if you are doing the
experiment from your home or from the CS building itself and whether the destination is in
North America or some other continent).
Question-02: Currently, the program calculates the round-trip time for each packet and prints it
out individually. Modify this to correspond to the way the standard ping program works. You
will need to report the minimum, maximum, and average RTTs at the end of all pings from the
client. In addition, calculate the packet loss rate (in percentage).
Question-03: Your program can only detect timeouts in receiving ICMP echo responses. Modify
the Pinger program to parse the ICMP response error codes and display the corresponding error
results to the user. Examples of ICMP response error codes are 0: Destination Network
Unreachable, 1: Destination Host Unreachable.
In this lab, you will gain a better understanding of the Internet Control Message Protocol (ICMP). You will learn to implement a Ping application using ICMP request and reply messages. Ping is a computer network application used to test whether a particular host is reachable across an IP network. It is also used to self-test the network interface card of the computer or as a latency test. It works by sending ICMP echo request packets to the target host and listening for ICMP echo reply packets. The "echo reply" is sometimes called a pong. Ping measures the round-trip time, records packet loss, and prints a statistical summary of the echo reply packets received (the minimum, maximum, and mean of the round-trip times and, in some versions, the standard deviation of the mean).
Your task is to develop your own Ping application in Python. Your application will use ICMP
but, in order to keep it simple, will not exactly follow the official specification in RFC 1739.
Note that you will only need to write the client side of the program, as the functionality needed
on the server side is built into almost all operating systems. You should complete the Ping
application so that it sends ping requests to a specified host separated by approximately one
second. Each message contains a payload of data that includes a timestamp. After sending each
packet, the application waits up to one second to receive a reply. If one second goes by without a
reply from the server, then the client assumes that either the ping packet or the pong packet was
lost in the network (or that the server is down).
This lab requires you to compose new Python code. A skeleton framework is given; you will need to fill in the blanks.
This lab will require you to build and/or decode a packed binary array of data that is specified by
the ICMP protocol. To assist you, the ICMP protocol speci.
Optimization in the world of 64-bit errors - PVS-Studio
In the previous blog post I promised to tell you why it is difficult to demonstrate 64-bit errors with simple examples. We spoke about operator[], and I said that in simple cases even incorrect code might work.
DevSecCon London 2019: Workshop: Cloud Agnostic Security Testing with Scout S... - DevSecCon
Xavier Garceau-Aranda
Senior Security Consultant at NCC Group
With the steady rise of cloud adoption, a number of organizations find themselves splitting their resources between multiple cloud providers. While the readiness to deal with security in cloud native environments has been improving, the multi-cloud paradigm poses new challenges.
The workshop will aim to familiarize attendees with Scout Suite (https://github.com/nccgroup/ScoutSuite), a key component of NCC Group’s cloud agnostic approach to security assurance.
Scout Suite is an open source multi-cloud security-auditing tool, which enables security posture assessment of cloud environments. Using the APIs exposed by cloud providers, Scout Suite gathers configuration data for manual inspection and highlights risk areas. Rather than poring over dozens of pages in the web consoles, Scout Suite provides a clear view of the attack surface automatically.
The following cloud providers are currently supported:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- Oracle Cloud Infrastructure
- Alibaba Cloud
During the workshop, attendees will leverage Scout Suite to assess a number of cloud environments designed to simulate typical flaws. We will display how the tool can be leveraged to quickly identify and help with remediation of security misconfigurations.
DevSecCon London 2019: Are Open Source Developers Security's New Front Line? - DevSecCon
Mitun Zavery
Senior Engineer at Sonatype
Bad actors have recognized the power of open source and are now beginning to create their own attack opportunities. This new form of assault, where OSS project credentials are compromised and malicious code is intentionally injected into open source libraries, allows hackers to poison the well. In this session, Mitun will explain how both security and developers must work together to stop this trend. Or, risk losing the entire open source ecosystem.
Analyze, and detail, the events leading to today’s “all-out” attack on the OSS industry
Define what the future of open source looks like in today’s new normal
Outline how developers can step into the role of security, to protect themselves, and the millions of people depending on them
DevSecCon London 2019: How to Secure OpenShift Environments and What Happens ... - DevSecCon
Jan Harrie
Security Analyst at ERNW GmbH
OpenShift by Red Hat is one of the major Platform as a Service (PaaS) solutions on the market. It is used to automatically deploy Kubernetes clusters and provides useful extensions for cluster management mixed with some magic under the hood.
Instantiating a Kubernetes cluster is often a crucial step in setting up a modern application stack. But be aware: a lot of configuration parameters await you, and several misconfigurations can occur that may lead to a compromise of the cluster. Privileged containers, tainting of masters and executing workloads on them, missing role-based access controls, and misconfigured Service Accounts are all part of the problem.
In this talk, I will explain which configuration parameters of an OpenShift environment are critical to ensure the overall security of the deployed Kubernetes clusters. Implications of misconfigurations will be demonstrated during live demos. Finally, recommendations for a secure configuration are provided.
DevSecCon London 2019: A Kernel of Truth: Intrusion Detection and Attestation... - DevSecCon
Matt Carroll
Infrastructure Security Engineer at Yelp
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernel's firehose-like audit subsystem.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will also tell pidtree-bcc's road from hackathon project to production system and how focus on demonstrating business value early on allowed the organization to give us buy-in to build and deploy a brand new project from scratch.
DevSecCon Seattle 2019: Containerizing IT Security Knowledge - DevSecCon
Kristóf Tóth
Software Engineer at Avatao
The world is getting eaten alive by software. At this point, almost nothing can be done without interacting with some sort of software system. Not even buying your groceries.
As we keep dumping out huge piles of code like there is no tomorrow, our far from perfect systems keep getting worse and worse from a security standpoint.
What could possibly go wrong?
We believe that education is the missing link.
As appsec is still a curiosity topic at top universities, freshly graduated engineers simply have no clue. And how could they?
The number of programmers keeps on doubling every few years and generations of software professionals are stuck without a proper background in ITSec.
As this trend continues, our responsibility to do something about this is on the rise.
In hopes of fighting this trend, we, at Avatao, have decided to share some of our dreams with the community.
Our Tutorial Framework allows you to easily create interactive learning environments running inside Docker containers.
These environments are capable of automatically guiding users through a set of topics by allowing them to interact with real software through a simple web browser.
Users can attack web services, write code to fix them, or use a terminal to deploy websites by creating and pushing git tags.
Nothing here is a mock-up: Every software component is real.
In this talk, I am going to demonstrate the capabilities of the framework, talk about the technology behind it and explore some use cases for it.
During the session we will open source the framework with the hope of creating a better, secure future together.
DevSecCon Seattle 2019: Decentralized Authorization - Implementing Fine Grain...
Sitaraman Lakshminarayanan
Sr Security Architect at Pure Storage
Authorization has two components: policy definition and policy enforcement. Traditionally both were centralized, and we spent all our time integrating products, whether built or bought, with a centralized access management system. This typically increased the cycle time to change any access policy, or forced changes to software and deployments to fit one particular authorization model. When that didn't fit, we would end up with multiple authorization enforcement points written in different languages, with or without adherence to standards such as XACML.
Imagine building a few, or a few hundred, products, services or microservices, and having to centrally manage every possible access policy. That is definitely not a scalable solution in the fast-moving CI/CD world.
Now imagine a way for every developer or product to externalize its authorization, so that authorization enforcement can be modified in a consistent manner. Imagine developers writing their own implementation of how authorization should be enforced for their environment. Remember, there is no one-size-fits-all authorization policy: a policy that works in your environment may not work in mine, for any number of reasons, from risk management to the type of business applications.
Open Policy Agent (OPA) provides a consistent way to write authorization logic and expose it as a REST API. Applications can easily integrate with OPA and can also write their own authorization logic. Whether you are shipping products to customers or integrating a product or service into your environment, how powerful would it be to enforce your own authorization rules instead of changing your business process for who can gain access to which features?
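As a concrete illustration (the package, rule and attribute names below are our own invention, not from the talk), a minimal OPA policy in Rego might look like this; a service would evaluate it with a POST to OPA's `/v1/data/authz/allow` endpoint, passing the request context as `input`:

```rego
package authz

# Deny by default; each service ships its own version of these rules.
default allow = false

# Admins may do anything.
allow {
    input.user.role == "admin"
}

# Viewers may perform read-only requests.
allow {
    input.method == "GET"
    input.user.role == "viewer"
}
```

Because each service owns and versions its own policy, teams can change enforcement rules without touching application code or a central access management system.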
In this talk we will explore the benefits of decentralized authorization and how to use Open Policy Agent to achieve it. We will take a closer look at a few applications and integrations, whether REST APIs and microservices or Kubernetes, to control authorization policies such as who can deploy and what can be deployed. We will also look at how to build integration tests to check our authorization policies.
DevSecCon Seattle 2019: Fully Automated production deployments with HIPAA/HIT...
Matt Lavin
Software Architect at LifeOmic
It's possible to have rapid feature delivery and happy developers without sacrificing high security and compliance. At LifeOmic, we've built an automated change management system that allows production deployments without slow human approval. We maintain HIPAA and HITRUST compliance while still allowing continuous delivery. I'll show how to collect data from BitBucket, Jenkins, and security scan tools to ensure that the approved processes have been followed.
You'll hear how fast production approval incentivizes developers to follow good practices, and become advocates for following the process instead of pushing against it. Automating process checks as a gate to deployments is a great framework for promoting the behavior you want in your organization. Don't give up on rapid feature delivery just because you work in a regulated industry.
DevSecCon Singapore 2019: Four years of reflection: How (not) to secure Web A...
Julian Berton
Preventing a company from becoming the newest data breach statistic can be a daunting prospect. In a company that employs hundreds of engineers pushing code to production daily, it often feels like everything is on fire, and the holy grail of producing a security-inspired product is but a dim light growing further and further away. The same feeling is true for security-aware engineers who are pushed to develop products quickly but also expected to consider quality assurance, operations, security and the reliability of their application or service.
To help reduce the bleeding and build more security aware applications at scale, a balance of firefighting, preventative initiatives, automation and "JIT" education is required. So strap yourself in while we take you on a journey through 4 years of security successes and epic failures:
* Automation - Implementing a secure-by-default build system (Buildkite) that makes detecting vulnerable dependencies (Snyk), storing secrets (AWS Secrets Manager) and scanning Docker containers an effortless process.
* Prevention - Eradicate several classes of bugs by selecting secure architectural patterns and using automated scripts to detect operational misconfigurations like dangling DNS entries, open S3 buckets, secrets checked into source code and repositories that have been made accidentally public.
* "JIT" Education - Changing a company's security culture with RFCs for security standards, security-integrated PIRs via bug bounty program reports, and visibility through security maturity frameworks (BSIMM).
DevSecCon Singapore 2019: crypto jacking: An evolving threat for cloud contai...
Rahul Kumar & Rupali Dash
In the current era of blockchain technology, mining cryptocurrency is one of the biggest hits. The talk covers how attackers use insecure containers to mine cryptocurrency and earn million-dollar profits. Cryptojacking activity surged to its peak in December 2017, when more than 8 million cryptojacking events were blocked by intrusion detection companies. While there has been a slight fall in activity in 2018, it is still at an elevated level, with total cryptojacking events blocked in July 2018 totalling just under 5 million.
The talk will cover how mining is carried out using browsers as well as cloud containers. We will also discuss how cloud providers like Amazon, Azure and Google detect such activities, and how minor misconfigurations lead to million-dollar currency mining. The talk will also cover how third-party security providers like Symantec and Zscaler, and other intrusion detection systems, have configured signatures to block such attacks, as well as, from a sec-ops perspective, what configuration checks should be done to prevent and detect them. It will also cover case studies and attack scenarios of mining Monero, and the huge financial losses caused by these attacks.
DevSecCon Singapore 2019: Can "dev", "sec" and "ops" really coexist in the wi...
Trinh Tran & Dennis Stötzel
Are you trying to stay secure while developing and running a bunch of services and applications every day? So are we and it’s a huge pain in the… pipeline. We have been juggling these aspects while working with one of the biggest insurance companies in the world.
In this talk, we will share our experiences of the last three years: Trinh, as a software engineer in Vietnam and Dennis, as a security engineer in Germany. We will present our experiences of making "dev", "sec" and "ops" coexist – without sparing any dirty details. Our goal has always been fast delivery and secure applications using pipelines, containers, orchestration, and the cloud. Let us explain which of these goals we have met and which remain goals, where we messed up and where we found glory.
We will cover the following topics in our talk:
* Evolution of our project, from beginning with four engineers running in one office, to expanding to fifty engineers coming from three continents and different backgrounds,
* Development, delivery and security as a requirement in an agile project,
* The good, the bad and the ugly in technology, architecture and infrastructure.
Sanoop Thomas & Samandeep Singh
Burp Suite is the de facto proxy application for web security testers. This hands-on workshop will explore the different capabilities of the Burp proxy application, and dive into the extensions and tooling options to perform improved application security test cases.
The workshop will start with a quick overview of Burp usage, different settings, features and some commonly useful extensions, and then explore its extension APIs in depth so you can build your own custom extensions. We will provide a suitable development environment for the Java and Python platforms. This will be a hands-on workshop in which participants will learn how to automate different application security test scenarios and build Burp extensions with the help of templates.
DevSecCon Singapore 2019: Embracing Security - A changing DevOps landscape
Cameron Townshend
Today’s pace of innovation and need to out “innovate” competitors can often cause developers to bypass key portions of Gene Kim’s Three Ways of DevOps - specifically to never pass a known defect downstream and emphasize performance of the entire system.
As we embrace movements like CI, CD and DevOps to cut down on release cycles and innovate faster, we as developers must also embrace the reality that the risk landscape is too complex to leave "security" to just those with security in their title. Traditional methods do not cut it anymore – it's time for DevSecOps.
Instinctively, we understand how critical this is. In Sonatype’s recent 2018 DevSecOps Community report, where 2,076 IT professionals were surveyed, 48% of respondents admitted that developers know application security is important, but they don’t have the time to spend on it.
Done properly, DevSecOps practices shouldn't interrupt the DevOps pipeline but instead aid it, preventing costly rebuilds and build breaks down the road. By creating automated governance and compliance guardrails that are embedded early and throughout the software development lifecycle, developers have transparent access to digital guardrails integrated within our native tools — an approach that ensures security is being built in without slowing us down. These instant feedback loops detailing good or bad components have been shown to increase developer productivity by as much as 48%.
Over time, this approach ensures developers procure the best components from the best suppliers, while continuously tracking components across the entire lifecycle.
Attendees of this session will walk away with:
Real-world examples of how large and small companies are implementing DevSecOps practices in their own delivery pipelines, and increasing developer awareness to risks
Key insights from 2,076 of their peers who participated in the 2018 DevSecOps community report - including where most mature DevOps practices are focusing their security efforts
A walkthrough of how security principles have been embedded in a CI/CD pipeline, and which implementation standards are beginning to follow suit
DevSecCon Singapore 2019: Web Services aren’t as secure as we think
Tilak T
Web services are taking over the world, and REST frameworks are accelerating this development because of their ease and flexibility. Developers often build REST-based applications because they are exciting to work with, but they forget about security, which leads to compromised and exploited applications. For instance, in recent security tests against web services that my team executed, we found that vulnerabilities like insecure deserialization, XML external entities, server-side template injection and authorization flaws are quite prevalent. I have found some simple steps that engineering teams can take towards finding and fixing such vulnerabilities in web services. This talk offers a holistic perspective on finding and fixing some uncommon flaws, replete with anecdotes and examples of secure and insecure code. I will also delve into automating SAST and DAST tools using Robot Framework to identify such flaws in web services.
DevSecCon Singapore 2019: An attacker's view of Serverless and GraphQL apps S...
Sharath Kumar Ramadas
Serverless Technology (Functions as a Service) is fast becoming the next "big thing" in the world of distributed applications. Organizations are investing a great deal of resources in this technology as a force-multiplier, cost-saver and ops-simplification cure-all. Especially with widespread support from cloud vendors, this technology is going to only become more influential. However, like everything else, Serverless apps are subject to a wide variety of attack possibilities, ranging from attacks against access control tech like JWTs, to NoSQL Injection, to exploits against the apps themselves (deserialization, etc) escalating privileges to other cloud components.
On the other hand, GraphQL (an API query language) is the natural companion to serverless apps, where traditional REST APIs are replaced with GraphQL to provide greater flexibility, richer query parameterization and speed. GraphQL is slowly obviating the need to develop REST APIs at all. Combined with serverless tech and reactive front-end frameworks, GraphQL is very powerful for distributed apps. However, GraphQL can be abused with a variety of attacks, including but not limited to injection attacks, nested resource exhaustion attacks and authorization flaws, among others.
This talk presents a red-team perspective of the various ways in which testers can discover and exploit serverless and/or GraphQL-driven applications to compromise sensitive information, and gain a deeper foothold into database services, IAM services and other cloud components. The talk will include demos of practical attacks and attack possibilities against serverless and GraphQL applications. The author will release an intentionally vulnerable serverless and GraphQL app at the end of the talk for the benefit of the audience and the security community at large.
DevSecCon Singapore 2019: The journey of digital transformation through DevSe...
Nadira Bajrei
IT Continuous Improvement and Knowledge Management at Bank Mandiri Tbk
We all know that the banking industry is highly regulated. But due to recent changing factors, we had to trigger something we call transformation. Two of the most important reasons why we need transformation are, firstly, digital disruption, a wave our industry is hard-pressed to follow, and secondly, evolving customer expectations and a competitive environment, which are impacting the way organisations deliver value. We need a new way of working to help us stay relevant in the market.
This session will focus on our journey, as one of the biggest banks in Indonesia, through a digital transformation to DevOps while maintaining security compliance requirements. I will elaborate on the main reasons why we need transformation, our journey roadmap, the step-by-step adoption of the CALMS values in our organisation, and how we faced challenges from internal and external sides.
DevSecCon Singapore 2019: Preventative Security for Kubernetes
Liz Rice
The latest Kubernetes version provides many security-related enhancements and controls, but it is far from secure by default. Kubernetes is a complex orchestration platform with many different implementations across multi-cloud and hybrid environments. Configuring it to comply with security best practices and specific security requirements takes time and expertise that most organizations don't possess.
Aqua’s open source tools arm Kubernetes administrators and developers with an easy way to identify weaknesses in their deployments so that they can address those issues before they are exploited by attackers.
During this presentation, we’ll review how these open source tools offer preventive security for Kubernetes:
kube-bench: checks a Kubernetes cluster against 100+ checks documented in the CIS Kubernetes Benchmark.
kube-hunter: conducts penetration tests against Kubernetes clusters, hunting for exploitable vulnerabilities and misconfigurations, both from outside the cluster and from inside it (running as a pod).
DevSecCon London 2018: Is your supply chain your Achilles' heel
COLIN DOMONEY
The advent of DevOps and large scale automation of software construction and delivery has elevated the software supply chain – and its underpinning delivery pipeline – to mission critical status in any modern enterprise. The increased velocity of modern pipelines and the removal of manual checks and balances has meant that modern pipelines are potential single points of failure in the delivery of secure software.
Automotive and consumer electronics industries have long understood the need for both provenance (understanding the origin of materials) and veracity (ensuring the integrity of their manufacturing processes) in their supply chains; this presentation will address threats to software supply chains and practical approaches to reducing the fragility of your supply chain. Several examples of software supply chain failures will be presented and deconstructed to understand the typical failure modes.
At the most elementary level, many pipelines are poorly constructed, with low levels of repeatability and poor test coverage; in other organisations there is a lack of governance over the supply chain, allowing careless or willingly negligent actors to subvert or bypass controls or testing within the pipeline. There is also no standard mechanism to ensure a 'chain of custody' within a pipeline, due to the lack of a common interchange format between tools or a standard manner of representing the steps within a pipeline build process.
This presentation will cover approaches (using 'people and process') for enforcing governance within a supply chain, describing best practices used in large-scale AppSec programmes. Several emerging technology initiatives will be presented: Google's Grafeas is a means to ensure vulnerability information is represented in a uniform manner across all steps of a pipeline process, while in-toto is a project to formally enforce the integrity of a pipeline process. A reference secure pipeline will be presented demonstrating both tools working in concert, along with standard open source and commercial AppSec tools.
Finally, the pipeline itself may become the Achilles' heel of an organisation – many pipelines are not sufficiently hardened and are themselves open to attack through vulnerable components and their extensible nature, often combined with very wide-open permissions. Guidance will be given on hardening typical pipelines, and a fully secured ephemeral Jenkins pipeline will be demonstrated.
Benefits of this Session: The attendee will gain an increased awareness of the pivotal importance of the software supply chain, and gain an understanding of some common failure modes and weaknesses. Most importantly the attendee will come away with practical guidance on enforcing higher levels of governance on their supply chain without reducing delivery velocity, as well as how to harden the pipeline infrastructure itself.
DevSecCon London 2018: Get rid of these TLS certificates
Paweł Krawczyk
Most network services and daemons now offer TLS transport protection, and managing certificates and TLS configuration for a farm of servers may consume more resources than the actual configuration of those services. What if you could get rid of all this complexity and replace it with a single transport protection protocol, securing all of the traffic between your servers transparently, with a single centralized key and configuration management? This will be the story of a successful implementation of the IPsec protocols, largely and undeservedly forgotten for that purpose, to secure a farm of production cloud servers, with configuration centrally managed with Ansible.
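The centralized management the abstract describes could be sketched roughly as follows. This is our own hypothetical Ansible fragment (host group, template names and the choice of strongSwan are assumptions, not details from the talk): one playbook pushes a shared IPsec policy and pre-shared keys to every server in the farm.

```yaml
# Hypothetical sketch: push one IPsec configuration to all farm servers.
- hosts: farm
  become: true
  tasks:
    - name: Install an IPsec implementation
      ansible.builtin.package:
        name: strongswan
        state: present

    - name: Deploy the shared IPsec policy
      ansible.builtin.template:
        src: ipsec.conf.j2
        dest: /etc/ipsec.conf
        mode: "0644"
      notify: reload ipsec

    - name: Deploy pre-shared keys
      ansible.builtin.template:
        src: ipsec.secrets.j2
        dest: /etc/ipsec.secrets
        mode: "0600"
      notify: reload ipsec

  handlers:
    - name: reload ipsec
      ansible.builtin.command: ipsec reload
```

With the policy templated centrally, adding a server to the mesh is an inventory change rather than a per-service TLS certificate rollout.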
PETKO D. PETKOV
Thanks to the DevSecOps philosophy, a growing number of organisations around the world are ensuring their businesses are set up with security in mind from the get-go. DevSecOps is taking the world by storm. This talk is about how to introduce DevSecOps in your organisation with ready-made, zero-cost, open source templates accessible to everyone. The talk will introduce the OpenDevSecOps project and show many practical examples of how to easily deploy security testing infrastructure on top of existing and well-established development tools.
DevSecCon London 2018: Variant Analysis – A critical step in handling vulnerabilities
1. LONDON 18-19 OCT 2018
Variant Analysis – A critical step in handling vulnerabilities
Kevin Backhouse
Sam Lanning
2. LONDON 18-19 OCT 2018
Variant Analysis: who is it for?
• Organizations that develop their own software
• The software is security or safety critical
• Primary use case: incident response
7. LONDON 18-19 OCT 2018
S2-008 Johannes Dahse, Bruce Phillips
S2-032 / CVE-2016-3081 Nike Zheng
S2-033 / CVE-2016-3087 Alvaro Munoz
S2-037 / CVE-2016-4438 Chao Jack PKAV_香草, Shinsaku Nomura
S2-045 / CVE-2017-5638 Nike Zheng
S2-046 / CVE-2017-5638 Chris Frohoff, Nike Zheng, Alvaro Munoz
S2-057 / CVE-2018-11776 Man Yue Mo
My colleague!
Apache Struts 2 OGNL injections
8. LONDON 18-19 OCT 2018
Apple packet-mangler (CVE-2017-13904, CVE-2018-4249)
9. LONDON 18-19 OCT 2018
packet-mangler.c (macOS 10.13)
• Two bugs
• Infinite loop
• Stack buffer overflow
• Both remotely triggerable (if packet-mangler is enabled)
10. LONDON 18-19 OCT 2018
while (tcp_optlen) {
    if (tcp_opt_buf[i] == 0x1) {
        PKT_MNGLR_LOG(LOG_INFO, "Skipping NOP\n");
        tcp_optlen--;
        i++;
        continue;
    } else if ((tcp_opt_buf[i] != 0) && (tcp_opt_buf[i] != TCP_OPT_MULTIPATH_TCP)) {
        PKT_MNGLR_LOG(LOG_INFO, "Skipping option %x\n", tcp_opt_buf[i]);
        tcp_optlen -= tcp_opt_buf[i+1];
        i += tcp_opt_buf[i+1];
        continue;
    } else if (tcp_opt_buf[i] == TCP_OPT_MULTIPATH_TCP) {
        int j = 0;
        int mptcpoptlen = tcp_opt_buf[i+1];
        …
        for (; j < mptcpoptlen; j++) {
            if (p_pkt_mnglr->proto_action_mask &
                PKT_MNGLR_TCP_ACT_NOP_MPTCP) {
                tcp_opt_buf[i+j] = 0x1;
            }
        }
        tcp_optlen -= mptcpoptlen;
        i += mptcpoptlen;
    } else {
        tcp_optlen--;
        i++;
    }
}
packet-mangler.c
macOS 10.13
1. Attacker controlled
2. Could be any value from -128 to 127
Out of bounds write if mptcpoptlen is large
Loops until tcp_optlen == 0
1. Grows if mptcpoptlen < 0
2. Goes negative if mptcpoptlen > tcp_optlen
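The no-progress failure mode in the annotations above can be reproduced in a small, self-contained model of the loop. This is an illustrative sketch with invented names (`walk_options`), not Apple's code; it models only the zero-length case, and uses a step budget to detect that the loop is stuck:

```c
#include <stddef.h>

/* Toy model (not Apple's code) of the option-walking loop from slide 10.
 * Returns the iteration count once tcp_optlen reaches zero or below, or -1
 * if the loop makes no progress within max_steps (i.e. it would spin forever). */
static int walk_options(const unsigned char *opts, int tcp_optlen, int max_steps)
{
    int i = 0;
    int steps = 0;
    while (tcp_optlen > 0) {
        if (++steps > max_steps)
            return -1;                 /* no progress: infinite loop in the model */
        if (opts[i] == 0x1) {          /* NOP option: one byte long */
            tcp_optlen--;
            i++;
        } else {
            int len = opts[i + 1];     /* attacker-controlled length byte */
            tcp_optlen -= len;         /* len == 0 => tcp_optlen never shrinks */
            i += len;
        }
    }
    return steps;
}
```

A well-formed option list terminates in a few steps, whereas an option with a zero length byte never decrements `tcp_optlen`, which is exactly the condition the kernel loop failed to guard against.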
11. LONDON 18-19 OCT 2018
while (tcp_optlen > 0) {
    if (tcp_opt_buf[i] == 0x1) {
        PKT_MNGLR_LOG(LOG_INFO, "Skipping NOP\n");
        tcp_optlen--;
        i++;
        continue;
    } else if ((tcp_opt_buf[i] != 0) && (tcp_opt_buf[i] != TCP_OPT_MULTIPATH_TCP)) {
        PKT_MNGLR_LOG(LOG_INFO, "Skipping option %x\n", tcp_opt_buf[i]);
        tcp_optlen -= tcp_opt_buf[i+1];
        i += tcp_opt_buf[i+1];
        continue;
    } else if (tcp_opt_buf[i] == TCP_OPT_MULTIPATH_TCP) {
        int j = 0;
        unsigned char mptcpoptlen = tcp_opt_buf[i+1];
        ...
        for (; j < mptcpoptlen && j < tcp_optlen; j++) {
            if (p_pkt_mnglr->proto_action_mask &
                PKT_MNGLR_TCP_ACT_NOP_MPTCP) {
                tcp_opt_buf[i+j] = 0x1;
            }
        }
        tcp_optlen -= mptcpoptlen;
        i += mptcpoptlen;
    } else {
        tcp_optlen--;
        i++;
    }
}
packet-mangler.c
macOS 10.13.2
1. Attacker controlled
2. Could be zero
Don’t allow negative values
Cannot be negative
Bounds check
12. LONDON 18-19 OCT 2018
while (tcp_optlen > 0) {
    if (tcp_opt_buf[i] == 0x1) {
        PKT_MNGLR_LOG(LOG_INFO, "Skipping NOP\n");
        tcp_optlen--;
        i++;
        continue;
    } else if ((tcp_opt_buf[i] != 0) && (tcp_opt_buf[i] != TCP_OPT_MULTIPATH_TCP)) {
        PKT_MNGLR_LOG(LOG_INFO, "Skipping option %x\n", tcp_opt_buf[i]);
        /* Minimum TCP option size is 2 */
        if (tcp_opt_buf[i+1] < 2) {
            PKT_MNGLR_LOG(LOG_ERR, "Received suspicious TCP option");
            goto drop_it;
        }
        tcp_optlen -= tcp_opt_buf[i+1];
        i += tcp_opt_buf[i+1];
        continue;
    } else if (tcp_opt_buf[i] == TCP_OPT_MULTIPATH_TCP) {
        int j = 0;
        unsigned char mptcpoptlen = tcp_opt_buf[i+1];
        ...
        for (; j < mptcpoptlen && j < tcp_optlen; j++) {
            if (p_pkt_mnglr->proto_action_mask &
                PKT_MNGLR_TCP_ACT_NOP_MPTCP) {
                tcp_opt_buf[i+j] = 0x1;
            }
        }
packet-mangler.c
macOS 10.13.5
bounds check
13. LONDON 18-19 OCT 2018
packet-mangler.c (macOS 10.13)
• Two bugs
• Infinite loop
• Stack buffer overflow
• Both remotely triggerable (if packet-mangler is enabled)
14. LONDON 18-19 OCT 2018
int i = 0;
tcp_optlen = (tcp.th_off << 2) - sizeof(struct tcphdr);
PKT_MNGLR_LOG(LOG_INFO, "Packet from F5 is TCP\n");
PKT_MNGLR_LOG(LOG_INFO, "Optlen: %d\n", tcp_optlen);
orig_tcp_optlen = tcp_optlen;
if (orig_tcp_optlen) {
    error = mbuf_copydata(*data, offset+sizeof(struct tcphdr), orig_tcp_optlen, tcp_opt_buf);
    if (error) {
        PKT_MNGLR_LOG(LOG_ERR, "Failed to copy tcp options");
        goto input_done;
    }
}
while (tcp_optlen > 0) {
    if (tcp_opt_buf[i] == 0x1) {
        ...
packet-mangler.c
macOS 10.13.2
User controlled (could be zero)
Could be negative
Implicit cast to size_t could overflow negatively.
Unlimited amount of user-controlled data gets copied to the stack
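The conversion hazard in those annotations can be shown in isolation. The sketch below uses hypothetical helper names and hard-codes the 20-byte TCP header size; it mimics the arithmetic only, not the kernel code. A `th_off` below 5 yields a negative option length, which becomes an enormous `size_t` when passed to a copy routine:

```c
#include <stddef.h>

/* Hypothetical model of the length computation from slide 14: th_off is a
 * 4-bit TCP field (0..15), so (th_off << 2) is 0..60, and subtracting the
 * 20-byte header size goes negative whenever th_off < 5. */
static int option_length(unsigned th_off)
{
    return (int)(th_off << 2) - 20;    /* 20 == sizeof(struct tcphdr) */
}

/* What a copy routine with a size_t length parameter (as mbuf_copydata has)
 * would actually receive: the negative int converts to a huge unsigned value. */
static size_t copy_length(int optlen)
{
    return (size_t)optlen;
}
```

This is why the 10.13.5 fix adds an explicit range check on `off` before computing `tcp_optlen`, rather than relying on the copy routine to behave sensibly.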
15. LONDON 18-19 OCT 2018
int i = 0, off;
off = (tcp.th_off << 2);
if (off < (int) sizeof(struct tcphdr) || off > ip_pld_len) {
    PKT_MNGLR_LOG(LOG_ERR, "TCP header offset is wrong: %d", off);
    goto drop_it;
}
tcp_optlen = off - sizeof(struct tcphdr);
PKT_MNGLR_LOG(LOG_INFO, "Packet from F5 is TCP\n");
PKT_MNGLR_LOG(LOG_INFO, "Optlen: %d\n", tcp_optlen);
orig_tcp_optlen = tcp_optlen;
if (orig_tcp_optlen) {
    error = mbuf_copydata(*data, offset+sizeof(struct tcphdr), orig_tcp_optlen, tcp_opt_buf);
    if (error) {
        PKT_MNGLR_LOG(LOG_ERR, "Failed to copy tcp options: error %d offset %d optlen %d", error, offset, orig_tcp_optlen);
        goto input_done;
    }
}
while (tcp_optlen > 0) {
    if (tcp_opt_buf[i] == 0x1) {
        ...
packet-mangler.c
macOS 10.13.5
bounds check
16. LONDON 18-19 OCT 2018
packet-mangler summary
• Multiple bugs found in 55 lines of code
• It took multiple tries to fix all the bugs:
• My initial PoC did not trigger all the bugs
• Apple only fixed the symptoms of the PoC
17. LONDON 18-19 OCT 2018
Reasons why bugs are rarely unique
1. Badly tested area of the codebase
2. Flawed design makes the code bug prone
3. Confusing API leads to errors
4. Bug duplication due to copy/paste
5. The responsible developer made similar mistakes elsewhere
[Diagram: relative numbers of vulns vs. bugs, co-located vs. scattered bugs]
Kev’s rule of thumb:
#bugs > 100 * #vulns
19. LONDON 18-19 OCT 2018
Techniques for discovering variants
1. Add a regression test
2. Code review:
• Thorough code review of the affected function/module
3. Add unit tests
• Check code coverage results
4. Fuzz testing
• Throwing random inputs at it might uncover other issues
• Use the known issue as a starting point
5. Check other code written by this developer
6. Search the code for similar patterns
20. LONDON 18-19 OCT 2018
Example of a dangerous coding pattern
librelp (rsyslog) CVE-2018-1000140
while(!bFoundPositiveMatch) { /* loop broken below */
    …
    iAllNames += snprintf(allNames+iAllNames, sizeof(allNames)-iAllNames,
                          "DNSname: %s; ", szAltName);
    …
}
output is fed back into size argument
21. LONDON 18-19 OCT 2018
Code as data
• Import source code into a database
• Write queries to find patterns
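As a toy illustration of the idea (nothing like the real query engine; the function name and matching strategy are invented), the "database" can be as simple as an array of source lines and the "query" a substring match that selects occurrences of the snprintf feedback pattern from the previous slide:

```c
#include <string.h>

/* Toy "code as data" query: the database is an array of source lines and
 * the query selects lines where a counter is incremented by the result of
 * snprintf -- the dangerous pattern from the librelp example. A real engine
 * would parse the code and reason about data flow, not match substrings. */
static int query_snprintf_feedback(const char *lines[], int nlines,
                                   const char *matches[], int maxmatches)
{
    int found = 0;
    for (int i = 0; i < nlines && found < maxmatches; i++) {
        if (strstr(lines[i], "+= snprintf(") != NULL)
            matches[found++] = lines[i];
    }
    return found;
}
```

A real system parses the code into a proper database and lets queries reason about syntax and data flow, so it can find the pattern regardless of spacing, variable names, or intermediate assignments.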
22. LONDON 18-19 OCT 2018
Michael Fanning: “A Microsoft DevSecOps Static Application Security Testing (SAST) Exercise”
https://blogs.msdn.microsoft.com/devops/2018/08/21/microsoft-devsecops-static-application-security-testing-sast-exercise/
23. LONDON 18-19 OCT 2018
kev@semmle.com @kevin_backhouse
sam@semmle.com @samlanning
lgtm.com
Editor's Notes
Hi, my name is Kevin Backhouse. I am a security researcher at Semmle, focusing on C and C++ applications.
The proposal and abstract for this talk were originally written by my colleague Sam Lanning. But he is unfortunately double-booked today, so he asked me if I could give the talk instead.
Just to briefly explain my background. I am relatively new to the security field. I have only been doing security research since approximately last summer. Before that, I was a developer. I have spent most of my career as a compiler engineer. So there are large areas of the security field and the DevOps field that I know very little or even nothing about. So for this talk, I am just going to stick to the thing I do know something about: how to find and fix bugs in software.
Ok, so who is this talk aimed at?
I am going to talk about finding bugs in software.
More specifically, I am going to talk about finding bugs in software that you wrote.
For example, this is not about figuring out whether people in your organization are running ancient versions of Internet Explorer that contain known vulnerabilities.
This is about finding and fixing vulnerabilities or safety issues that were created by your own developers.
Additionally, we are mainly talking here about software that is either security or safety critical.
For example, if the software is any way exposed to the public internet or other potentially attacker-controlled input.
Or if the software is in some way safety critical. For example, you are developing software for self-driving cars, or something like that.
Before I start talking about variant analysis in general, I am going to start with a few examples. I want to show that there have been a lot of high profile cases in which the same bugs have kept reappearing over and over again. The goal of variant analysis is to try to solve that.
This first example is something that I saw on twitter very recently. If you follow the trail of hyperlinks, what you see is a pretty incredible sequence of events.
If we follow the link from that tweet, we end up here. This is the bug tracker where Google Project Zero post the vulnerabilities that they have found.
You can see immediately from this comment that this is not the first bug that he has found in Ghostscript. He was reviewing the fix for a bug that he had reported previously and discovered that they hadn’t fixed it properly.
Something else that I just want to quickly highlight here is this comment about the 90 day disclosure deadline. Google Project Zero are pretty strict about this. If you don’t fix the vulnerability within 90 days, that’s just tough luck: they’re going to publish anyway. And this is pretty standard practice. Most security researchers don’t have an automated system like this that will automatically publish after a fixed period of time, but it is certainly common practice to set a reasonable deadline. So if you are on the receiving end of a bug report like this then there is usually time pressure involved. That’s a topic that I will return to later.
Ok, so let’s follow the trail to issue 1690. And yet again, we see a comment referring to a previous bug!
And if we click the link, the trail of misery continues! It just keeps on going.
I am going to stop here, but I think you get the picture.
Here’s another example. OGNL is a scripting language that is used in Apache Struts 2. It’s only supposed to be used inside the application, but over the years researchers have found numerous ways to pass OGNL into Struts by connecting to Struts with a specially crafted URL. You can see that the first of these bugs was found in 2012 and they have kept popping up over the years. The most recent one was found by my colleague Mo, a few months ago.
This example is a bug that I found myself, so I am going to go into a bit more technical detail on this one. What I want to do is show you a very specific example of what the bugs were and how they weren’t fixed properly.
First though, I am going to show this video, which shows what the effect of the bug was.
So there were actually two distinct bugs in the same piece of code. I discovered the infinite loop bug first and the stack buffer overflow a little bit later. So I am going to explain the infinite loop bug first and move onto the buffer overflow later.
So returning to the slide that I showed you earlier, I said that there were two bugs. I have shown you the infinite loop bug, but what about the stack buffer overflow?
What can we learn from the packet-mangler bugs? There were multiple bugs in a 55 line section of code. And it took Apple more than one attempt to fix it properly.
Personally, I learned that I cannot assume that the developers will see everything that I see. Spell it out.
Bugs are rarely unique. These are some of the reasons why.
Maybe this is a low quality section of the codebase? It might have been written in a hurry with low quality standards. Has it been tested properly?
Sometimes the design of the software is fundamentally flawed and the developers are playing a game of whack-a-mole. I know from my own past experience working on large software projects that this is not an unusual scenario. A well-known and important example of this is Java deserialization. That was a bad design decision made a long time ago that is very difficult to reverse due to backwards compatibility issues.
Sometimes an API is non-intuitive or has some subtle gotchas which can cause even very diligent developers to make mistakes. One example of this was the recent Zip Slip vulnerability, where you might use a library to unzip a file. And you might have no idea that this could expose you to path traversal vulnerabilities (where someone adds ../ to a filename in the zip archive). Another example is the snprintf overflow gotcha that I have written a blog post about. The return value of snprintf is quite non-intuitive which can lead to buffer overflow vulnerabilities in certain situations.
Code gets copied all the time. And it isn’t necessarily bad practice. If you want to know how to implement something, then you go to look for examples of how other people have done similar things. Sometimes you might find an example elsewhere in the same codebase and other times you might find an example on a website like Stack Overflow. And it’s natural to assume that the person who wrote the code that you are copying knew what they were doing. So if that person did make a mistake then that mistake can easily start spreading to other parts of the codebase.
Some developers just aren’t very careful. And if they have introduced a vulnerability here, then there’s a good chance that they have introduced similar vulnerabilities in other parts of the code that they have worked on. I think it’s also worth mentioning that organizations often unintentionally encourage sloppy coding because it’s much easier to measure the quantity of code that someone produces than the quality. So somebody who quickly churns out a lot of new features is much more likely to be held up as a “rockstar coder” and the fact that the quality of their work is low might go unnoticed.
I have a theory that for every vulnerability in the code there are at least 100 regular bugs. I want to emphasize that I don’t have any statistical evidence for this. It is purely anecdotal, based on my experience of hunting for vulnerabilities. As a security researcher, I don’t care about bugs. I only care about exploitable bugs. So when I see a potential bug in some code, I ignore it unless I think I can write an exploit for it. And I estimate that I pass over approximately 99 out of every 100 bug candidates that I look at.
So if you are on the receiving end of a vulnerability report, I think you have to assume the opposite: that the security researcher ignored 99 other possible bugs before they found the one that they sent to you.
So a bug was found. Maybe it was found by your testing team, or maybe it was reported to you by a customer. Or more seriously, maybe there was an accident or a security breach. What now?
Obviously the first step is to diagnose what went wrong and find and fix the bug.
But, as we just saw, that’s not enough. Most bugs are not unique. There’s a good chance that a similar mistake was made elsewhere in the code. So we need to search for variants. And bear in mind that there’s time pressure involved here because you only have a limited amount of time before you have to disclose the vulnerability. So you have to find as many variants as you can before you hit that deadline.
Of course the final step is to make sure that similar bugs don’t happen again.
Here are some ways to discover variants. Step zero is the obvious first response, but it is very unlikely to find any new variants! That’s why you need to do the other stuff too.
Code review. This also seems obvious, but as the packet-mangler example showed, I don’t think Apple did it.
Unit tests: obvious. I recommend that you use a code coverage tool to check that the new tests give you really good coverage on the affected file/module. Code that hasn’t been tested usually doesn’t work.
Fuzz testing. This basically means hitting the code with randomly generated input. It isn’t always easy to do, but you have a known issue to start from, so you might be able to generate some random variations of it to check for other issues.
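As a rough illustration of that idea, the sketch below mutates a known seed input one byte per round and replays it. All the names here are hypothetical, and `parse_options` is a stand-in for whatever code is actually under test; in a real setup you would call the real parser and let crashes or sanitizer reports flag new issues:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the code under test; a real harness would call
 * the actual parser and rely on crashes / sanitizer reports to find bugs. */
static int parse_options(const unsigned char *buf, size_t len)
{
    size_t i = 0;
    while (i + 1 < len) {
        unsigned char optlen = buf[i + 1];
        if (optlen < 2)
            return -1;                  /* reject suspicious option lengths */
        i += optlen;
    }
    return 0;
}

/* Mutational fuzzing sketch: start from a known input (e.g. the packet that
 * triggered the original bug) and flip one random byte per round.
 * Returns how many mutants the parser rejected. */
static int fuzz_from_seed(const unsigned char *seed, size_t len, int rounds)
{
    unsigned char *buf = malloc(len);
    int rejected = 0;
    for (int r = 0; r < rounds; r++) {
        memcpy(buf, seed, len);
        buf[(size_t)rand() % len] ^= (unsigned char)(rand() & 0xff);
        if (parse_options(buf, len) != 0)
            rejected++;
    }
    free(buf);
    return rejected;
}
```

Coverage-guided fuzzers automate exactly this feedback loop, keeping mutants that reach new code paths as fresh seeds.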
The techniques up to number 3 are mainly good for finding the co-located bugs. But we also need to look for variants elsewhere in the codebase.
One thing that we can do is number 4. But that doesn’t help if the bugs were caused by copy/paste or something like a confusing API. So we also need to search the codebase for similar patterns.
So what do I mean when I say that we should look for similar code patterns?
If we were talking about the packet mangler vulnerability that I showed you before then one of the key features is that the loop doesn’t obviously terminate because it uses the -= operator to update the counter and it’s far from obvious that the counter is decremented by the correct amount. Another thing to look for is code that handles the “tcphdr” type because that almost certainly means that it’s handling untrusted data.
This code snippet shows a different example. This is a snippet of code from the librelp library which is used by rsyslog, a widely used logging tool on Linux.
Everyone knows that snprintf is the safe version of sprintf. It stops you from getting buffer overflows, right?
You pass the size of the buffer in as the second argument and snprintf will never write off the end of the buffer, even if the string doesn’t fit.
This piece of code is writing multiple strings into a buffer. So on each iteration it updates the number of bytes that it has written so far, so that it can pass the correct size argument to snprintf.
So this code looks pretty sensible. What could be wrong with it?
The thing about snprintf is that its return value isn’t what you would probably expect. If the string was too big for the buffer, then it returns the number of bytes that it would have written, if the buffer had been big enough.
So this means that iAllNames can become bigger than the size of the buffer.
And then the real problem happens on the next iteration when you get a negative integer overflow in the calculation of the size argument. The size argument is unsigned, so it wraps and becomes huge, which means that an attacker can write an almost unlimited number of bytes to the buffer. The other thing that is really bad is that by controlling the length of the penultimate string, you can choose where you want the buffer overflow to go. It doesn’t have to go immediately after the end of the buffer. This means that you have a lot of control over which bytes you overwrite. It also means that you can skip over the stack canary, so you can easily bypass the stack protector mitigation.
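The mechanics of that first step can be shown in a minimal sketch. This is not the librelp code and the function name is invented; it just isolates the feedback of snprintf's return value into the running counter:

```c
#include <stdio.h>

/* Minimal demonstration of the snprintf gotcha (names invented, not librelp):
 * snprintf never writes past the buffer, but it RETURNS the number of bytes
 * it WOULD have written had the buffer been big enough. Feeding that value
 * back into the size calculation lets the counter run past the buffer. */
static size_t append_name(char *buf, size_t bufsize, size_t used, const char *name)
{
    int n = snprintf(buf + used, bufsize - used, "DNSname: %s; ", name);
    return used + (size_t)n;   /* the bug: n can exceed bufsize - used */
}
```

After one oversized string, the counter exceeds the buffer size, so the unsigned subtraction `bufsize - used` on the next iteration wraps to a huge value and snprintf is no longer constrained at all.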
Ok, so what’s the pattern that we want to look for here. There are three key components to it:
call to snprintf
format string contains a %s
the output of snprintf is used to calculate the size argument on the next iteration
So those are the key components of the pattern. How do we search for it?
The idea is to treat code as data. You import all of your source code into a database, and then you can use queries to search it.
When I say it like that, it probably sounds a bit far-fetched, but this concept of “code as data” is the basis of all our technology at Semmle. But I am not going to talk about this too much because I don’t want this to start sounding like a vendor pitch. So the main point is that I didn’t just make this up! It really works and you can read more about it after the presentation if you are interested.
So what we can do is write queries that look for dangerous patterns so that you can make sure that you have fixed all the problems, not just the one that you already know about.
This diagram is a more sophisticated version of the diagram with 4 boxes that I showed you earlier.
You can see this diagram on a blog post written by Michael Fanning, who works at Microsoft. It’s well worth a read and I recommend that you check it out.
Microsoft have a lot of experience dealing with security incidents. So they have spent a lot of time honing their process. And they are one of the pioneers of this concept of variant analysis.