Our C# expert Eric Lippert provides his take on the psychology of C# analysis, including the business case for C#, developer characteristics and analysis tools.
These slides provide the high-level results of our comparison of FxCop and the Coverity platform. We used a third-party codebase of approx. 100k lines of code and analyzed it using FxCop from Visual Studio 2013 and Coverity 6.6. Perhaps most surprising is how the two solutions (both static analysis tools for C# that aim to improve quality and security) are so different and yet so complementary.
These slides quickly illustrate how you can successfully adopt Agile to improve your development efforts. In addition to discussing how and why teams are interested in Agile, it covers some of the challenges of adopting it and suggestions for ensuring success.
Few developers pay attention to security, in spite of the unstoppable tide of security defects in code. Big money is being spent by governments to buy bugs, and exploits have become a new class of weapon in the arsenal of militaries around the world. It is high time that developers pay attention. In these slides, Coverity CTO & co-founder Andy Chou presents a model for how developers can begin to think about security, including some of the most common types of weaknesses that are still plaguing our applications. For each weakness, a concrete code example helps illustrate the bug and what to do about it. From there, he goes up a level and discuss why developers need to begin to "own security" and change the culture from within in order to make a dent in the security problems we face.
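To give a flavor of the weakness-plus-fix pairing the talk uses, here is a hypothetical example (not taken from the slides) of a classic path-traversal weakness in Java and one way to close it; the base directory and file names are invented for illustration:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathCheck {
    static final Path BASE = Paths.get("/srv/files");

    // Vulnerable: a user-supplied name containing "../" escapes BASE.
    public static Path unsafeResolve(String userName) {
        return BASE.resolve(userName);
    }

    // Fixed: normalize the resolved path and verify it stays under BASE.
    public static Path safeResolve(String userName) {
        Path p = BASE.resolve(userName).normalize();
        if (!p.startsWith(BASE)) {
            throw new IllegalArgumentException("path traversal attempt: " + userName);
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(unsafeResolve("../../etc/passwd")); // escapes /srv/files
        System.out.println(safeResolve("report.txt"));         // stays inside
    }
}
```

The key step is checking the *normalized* path, since `resolve` alone happily follows `..` segments.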
OSS Java Analysis - What You Might Be Missing - Coverity
We think FindBugs is a great tool for finding coding style and best practice types of issues, so we conducted a little experiment a few months ago. We analyzed Jenkins core code with both FindBugs and Coverity. These slides provide a high-level summary of our results.
One of the best experiences you can have as a developer is when you run your continuous delivery pipeline and one of the tests fails because it has found a bug. At that point you can see that, thanks to your tests, you are producing less buggy software. This should be the normal case in greenfield projects, but it is unlikely to happen when working with legacy code that is largely untested.
When you work in a small, co-located team, many engineering practices and approaches are relatively easy to use and adapt. In a large project with many teams working on the same product, this task is not so simple. I want to share an experience report on implementing the code review practice in a big product development team (more than 150 people, 10+ feature teams). In this talk we will review which approaches work in such a setup and which don't, what tools and additional practices are needed to support code review and make it more effective, what difficulties and blockers you will probably see in real-life cases, and what useful metrics this practice can produce.
We know that code reviews are a Good Thing. We probably have our own personal lists of things we look for in the code we review, while also fearing what others might say about our code. How do we ensure that code reviews are actually benefiting the team and the application? How do we decide who does the reviews? What does "done" look like?
In this talk, Trisha will identify some best practices to follow. She'll talk about what's really important in a code review, and set out some guidelines to follow in order to maximise the value of the code review and minimise the pain.
Actor Concurrency Bugs: A Comprehensive Study on Symptoms, Root Causes, API U... - Raffi Khatchadourian
Actor concurrency is becoming increasingly important in the development of real-world software systems. Although actor concurrency may be less susceptible to some multithreaded concurrency bugs, such as low-level data races and deadlocks, it comes with its own bugs that may be different. However, the fundamental characteristics of actor concurrency bugs, including their symptoms, root causes, API usages, examples, and differences when they come from different sources are still largely unknown. Actor software development can significantly benefit from a comprehensive qualitative and quantitative understanding of these characteristics, which is the focus of this work, to foster better API documentation, development practices, testing, debugging, repairing, and verification frameworks. To conduct this study, we take the following major steps. First, we construct a set of 186 real-world Akka actor bugs from Stack Overflow and GitHub via manual analysis of 3,924
Stack Overflow questions, answers, and comments and 3,315 GitHub commits, messages, original and modified code snippets, issues, and pull requests. Second, we manually study these actor bugs and their fixes to understand and classify their symptoms, root causes, and API usages. Third, we study the differences between the commonalities and distributions of symptoms, root causes, and API usages of our Stack Overflow and GitHub actor bugs. Fourth, we discuss real-world examples of our actor bugs with these symptoms and root causes. Finally, we investigate the relation of our findings with those of previous work and discuss their implications. A few findings of our study are: (1) symptoms of our actor bugs can be classified into five categories, with Error as the most common symptom and Incorrect Exceptions as the least common, (2) root causes of our actor bugs can be classified into ten categories, with Logic as the most common root cause and Untyped Communication as the least common, (3) a small number of Akka API packages are responsible for most of API usages by our actor bugs, and (4) our Stack Overflow and GitHub actor bugs can differ significantly in commonalities and distributions of their symptoms, root causes, and API usages. While some of our findings agree with those of previous work, others sharply contrast.
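To make one common symptom concrete, here is an illustrative hand-rolled mailbox in plain Java (not Akka, and not an example from the study) showing how an unhandled message type can vanish silently, leaving a sender waiting for a reply that never comes:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniActor {
    // A hand-rolled "actor": messages go into a mailbox and are
    // processed one at a time, never concurrently.
    private final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();
    public final BlockingQueue<Object> replies = new LinkedBlockingQueue<>();

    public void tell(Object msg) { mailbox.add(msg); }

    // Processes one message. Bug pattern: only String is handled, so an
    // Integer message is dropped silently -- no error, no reply, and a
    // sender awaiting a response waits forever.
    public void runOnce() {
        Object msg = mailbox.poll();
        if (msg instanceof String) {
            replies.add("echo:" + msg);
        }
    }

    public static void main(String[] args) {
        MiniActor a = new MiniActor();
        a.tell(42);        // unhandled type: vanishes without a trace
        a.tell("ping");
        a.runOnce();
        a.runOnce();
        System.out.println(a.replies.poll()); // only "ping" is answered
    }
}
```

Real actor frameworks route such messages to a dead-letter channel, but the failure mode for the sender is the same: silence rather than an exception.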
TMPA-2015: Towards a Usable Defect Prediction Tool: Crossbreeding Machine Lea... - Iosif Itkin
Towards a Usable Defect Prediction Tool: Crossbreeding Machine Learning and Heuristics
Vladimir Kovalenko, Galina Alperovich, JetBrains
12 - 14 November 2015
Tools and Methods of Program Analysis in St. Petersburg
Systematic Evaluation of the Unsoundness of Call Graph Algorithms for Java - Michael Reif
This talk was held at the SOAP'18 workshop on static program analysis.
The talk presents our test project to assess the unsoundness of built-in call graph implementations.
Proactive Empirical Assessment of New Language Feature Adoption via Automated... - Raffi Khatchadourian
Programming languages and platforms improve over time, sometimes resulting in new language features that offer many benefits. However, despite these benefits, developers may not always be willing to adopt them in their projects for various reasons. In this paper, we describe an empirical study where we assess the adoption of a particular new language feature. Studying how developers use (or do not use) new language features is important in programming language research and engineering because it gives designers insight into the usability of the language to create meaningful programs in that language. This knowledge, in turn, can drive future innovations in the area. Here, we explore Java 8 default methods, which allow interfaces to contain (instance) method implementations.
Default methods can ease interface evolution, make certain ubiquitous design patterns redundant, and improve both modularity and maintainability. A focus of this work is to discover, through a scientific approach and a novel technique, situations where developers found these constructs useful and where they did not, and the reasons for each. Although several studies center around assessing new language features, to the best of our knowledge, this kind of construct has not been previously considered.
Despite their benefits, we found that developers did not adopt default methods in all situations. Our study consisted of submitting pull requests introducing the language feature to 19 real-world, open source Java projects without altering original program semantics. This novel assessment technique is proactive in that the adoption was driven by an automatic refactoring approach rather than waiting for developers to discover and integrate the feature themselves. In this way, we set forth best practices and patterns of using the language feature effectively earlier rather than later and are able to possibly guide (near) future language evolution. We foresee this technique to be useful in assessing other new language features, design patterns, and other programming idioms.
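For readers unfamiliar with the feature itself, a minimal sketch of a Java 8 default method (the names here are invented for illustration, not drawn from the studied projects):

```java
// An interface gains a new method without breaking existing implementors.
interface Greeter {
    String name();

    // Java 8 default method: implementors inherit this body for free.
    default String greet() {
        return "Hello, " + name() + "!";
    }
}

// Written before greet() existed; still compiles unchanged.
class EnglishGreeter implements Greeter {
    public String name() { return "World"; }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        System.out.println(new EnglishGreeter().greet()); // Hello, World!
    }
}
```

This is the evolution benefit the paper refers to: `greet()` can be added to `Greeter` after the fact without touching or recompiling `EnglishGreeter`.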
Myths In Software Engineering: Does complex code mean there will be more bugs? We have analyzed a number of bug databases (including Eclipse, Mozilla, and various Microsoft projects) and come to surprising conclusions.
Data Generation with PROSPECT: a Probability Specification Tool - Ivan Ruchkin
Presented at the Winter Simulation Conference 2021.
Abstract: Stochastic simulations of complex systems often rely on sampling dependent discrete random variables. Currently, their users are limited in expressing their intention about how these variables are distributed and related to each other over time. This limitation leads the users to program complex and error-prone sampling algorithms. This paper introduces a way to specify, declaratively and precisely, a temporal distribution over discrete variables. Our tool PROSPECT infers and samples this distribution by solving a system of polynomial equations. The evaluation on three simulation scenarios shows that the declarative specifications are easier to write, 3x more succinct than imperative sampling programs, and are processed correctly by PROSPECT.
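The kind of imperative sampling program the paper aims to replace can be sketched in a few lines; the variables and probabilities below are invented for illustration and are much simpler than the temporal specifications PROSPECT handles:

```java
import java.util.Random;

public class DependentSampling {
    // Imperative sampling of two dependent binary variables:
    // P(X=1) = 0.5, P(Y=1 | X=1) = 0.9, P(Y=1 | X=0) = 0.2.
    // The dependence structure is buried in control flow -- exactly the
    // kind of hand-written logic that becomes error-prone at scale.
    public static int[] sample(Random rng) {
        int x = rng.nextDouble() < 0.5 ? 1 : 0;
        double pY = (x == 1) ? 0.9 : 0.2;
        int y = rng.nextDouble() < pY ? 1 : 0;
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        long both = 0;
        int n = 100_000;
        for (int i = 0; i < n; i++) {
            int[] s = sample(rng);
            if (s[0] == 1 && s[1] == 1) both++;
        }
        // P(X=1, Y=1) = 0.5 * 0.9 = 0.45; the estimate should be close.
        System.out.println((double) both / n);
    }
}
```

A declarative specification would instead state the joint distribution directly and leave the sampling algorithm to the tool.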
A code review is basically a technical discussion which should lead to improvements in the code and/or sharing
knowledge in a team. As with any conversation, it should have substance and form.
What’s involved in a good code review? What kind of problems do we want to spot and address? Trisha Gee will talk
about things a reviewer may consider when looking at changes: what potential issues to look for; why certain
patterns may be harmful; and, of course, what NOT to look at.
But when it comes to commenting on someone’s work, it may be hard to find the right words to convey a useful message
without offending the authors - after all, this is something that they worked hard on. Maria Khalusova will share
some observations, thoughts and practical tricks on how to give and receive feedback without turning a code review
into a battlefield.
Google+ Profile PageRank: The Real AuthorRank? - SMX Advanced 2013 - Mark Traphagen
Everyone who uses Google Authorship wants to know: Is AuthorRank active yet? That is, is Google using Authorship data as a direct influence on search results yet? Most likely not. But... Authorship provides a significant benefit that most miss: it builds the PageRank authority of your profile, making it a ranking powerhouse.
(One correction from last slide: My Google+ guides are found at http://bit.ly/gplusguides - all lower case)
SEO Strategy and The Hummingbird Effect - Robin Leonard
Talk given at #SEMCON2013 on SEO Strategy and the impact caused by Hummingbird.
jQuery Mobile is the easiest way to go from web to mobile. It can be used for internet applications or serve as the UI for PhoneGap applications. Here is a fast-paced introduction to jQuery Mobile.
Authorship and Publisher are two features available through Google that allow us to connect a website with a particular business, and a page of content (website page, blog post, etc.) with a single author. Join us to learn the basics, how these two features help clients, and how it is integrated in our SEO packages.
The secret's out! The highest converting landing pages are built backwards. There's a formula for building effective landing pages and The Conversion Scientists are here to lay it out for you.
In this Unwebinar, Brian Massey and Joel Harvey:
- Show you how to build a landing page by starting at the end
- Teach you elements of any landing page
- Critique live landing pages from your fellow marketers
A selection of some of Renuglass refurbishing projects in South East Asia.
Recovered materials Anodised aluminium, powder coated aluminium, reflective glass.
OpenGL® is the only cross-platform graphics API that enables developers of software for PC, workstation, and supercomputing hardware to create high-performance, visually compelling graphics software applications, in markets such as CAD, content creation, energy, entertainment, game development, manufacturing, medical, and virtual reality.
This huge transformation for Visual Studio to enable the creation of any application is two-fold, on the server and on the client:
On the client side, Visual Studio 2015 provides a solution to create first-class applications for any device including iOS, Android and Windows.
On the server side, just like the rest of the Microsoft platform, Visual Studio is embracing Linux and provides a development environment for creating server applications that run on Linux.
We will also support major platforms in our ALM tooling – with features like cross-platform build and heterogeneous release management offered by TFS 2015 and Visual Studio Online.
What about “every developer”?
Last year, at our Connect() event, we made a significant announcement targeted at individual developers, such as students, start-ups, and small businesses.
With VS Community, eligible developers can use a full IDE, equivalent to the current VS Professional edition, for creating applications across the cloud and devices – for free!
But what about Enterprises?
With Visual Studio 2015, we are making it easier for enterprises to acquire and use Visual Studio, with a simpler model that will give developers working in organizations easier and more affordable access to Visual Studio. In this new model, we have introduced a new edition of Visual Studio called Visual Studio Enterprise.
Original blogpost http://bit.ly/UX3JKj Every year I have compiled a list of trends that marketers have to follow closely in the following year to beat the competition. Social media marketing trends for 2013 is all about mobile, personalization and location. Read on to find out how marketing will change in the near future.
Capability Building for Cyber Defense: Software Walkthrough and Screening - Maven Logix
Dr. Fahim Arif, Director R&D at MCS, principal investigator, and GHQ-authorized consultant for Nexsource Pak (Pvt) Ltd, discussed building cyber defense capability at the Data Protection and Cyber Security event hosted recently by Maven Logix. In his session he gave the audience valuable information about the life cycle of a cyber threat, discussing what measures to take and how to take them by performing formal code reviews and code inspections. He discussed essential elements of code review, pair programming, and alternatives for treating and tackling cyber threats.
This presentation is a part of the COP2271C college level course taught at the Florida Polytechnic University located in Lakeland Florida. The purpose of this course is to introduce Freshmen students to both the process of software development and to the Python language.
The course is one semester in length and meets for 2 hours twice a week. The Instructor is Dr. Jim Anderson.
A video of Dr. Anderson using these slides is available on YouTube at:
https://youtu.be/KcFCcCsn6mM
A few slides on Robert Seacord's book, "Secure Coding in C/C++". While the McAfee template was used for the original presentation, the info from this presentation is public.
Developers spend up to 20% of their time writing repetitive code that machines could generate more reliably. This presentation explores the problem of duplicated source code that stems from manual implementation of patterns and reveals how to automate the boring side of programming and get a 19x ROI.
The presentation provides insight into:
- the problem of manual implementation of patterns, resulting in boilerplate code
- the cost of boilerplate for companies
- existing technologies for pattern automation
- the key reasons to consider pattern-aware compiler extensions
The white paper was written for CTOs, software architects and senior developers in software-driven organizations—specifically in financial, insurance, healthcare, energy and IT industries that typically write a lot of repetitive code.
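As one concrete illustration of pattern automation (a simple JDK-only stand-in, not the specific technology the white paper discusses), a dynamic proxy can generate the repetitive "log, call, log" wrapper once instead of hand-writing it around every method:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class LoggingProxy {
    public interface Calculator { int add(int a, int b); }

    public static final List<String> LOG = new ArrayList<>();

    // The logging pattern is written once here, instead of being
    // copy-pasted into every method of every logged class.
    public static Calculator withLogging(Calculator target) {
        InvocationHandler h = (proxy, method, args) -> {
            LOG.add("enter " + method.getName());
            Object result = method.invoke(target, args);
            LOG.add("exit " + method.getName());
            return result;
        };
        return (Calculator) Proxy.newProxyInstance(
                Calculator.class.getClassLoader(),
                new Class<?>[] { Calculator.class }, h);
    }

    public static void main(String[] args) {
        Calculator logged = withLogging((a, b) -> a + b);
        System.out.println(logged.add(2, 3)); // 5
        System.out.println(LOG);              // [enter add, exit add]
    }
}
```

Compiler extensions take the same idea further by generating such wrappers at build time, with no reflection cost at runtime.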
Basic concepts of systems/software analysis, design, and development: how software engineering and large projects are done and collaborated on, along with best practices and standards.
What happens when a company either doesn’t fully empower the Security team, or doesn’t have one at all? Stuff like Goto fail, Equifax, unsandboxed AVs, and infinite other buzz, or yet-to-be-buzzed, words describe failures to adequately protect customers or the services they rely on. Having a solid security team enables a company to set a bar, ensure security exists within the design, insert tooling at various stages of the process, and continuously iterate on the results. Working with the folks building the products to give them solutions instead of just problems allows one to scale, earn trust, and most importantly be effective and actually ship.
There’s a whole security industry out there, with folks wearing every which hat you can think of. They have influence and the ability to find a bug one day and disclose it the next, so companies must adapt both engineering practices and perspectives in order to ‘navigate the waters of reality’ and not just hope no one takes a look at their product. Having processes in place that reduce attack surface, automate testing, and set a minimum bar can reduce bugs, and therefore randomization for devs, and therefore the cost of patching, and can create a culture where security makes more sense as it demonstrably solves problems.
Nvidia is evolving in this space. Focused on the role of product security, I’ll go through the various components of a security team and how they each interact and complement each other, commodity and niche tooling as well as how relationships across organizations can give one an edge in this area. This talk balances the perspective of security engineers working within a large company with the independent nature of how things work in the industry.
Attendees will walk away with a breadth of knowledge, an inside view of the technical workings, tooling and intricacies of finding and fixing bugs and finding balance within a product-first world.
Code Smells and Other Malodorous Software Odors - Clint Edmonson
A code smell, also known as bad smell in computer programming code, refers to any symptom in the source code of a program that possibly indicates a deeper problem. Join us in this lively session where we will get a whiff of some aromas encountered in the field and how we can neutralize them.
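A tiny invented example of one such aroma, duplicated logic, and how extracting the shared rule neutralizes it (the pricing rules here are made up for illustration):

```java
public class SmellDemo {
    // Smell: the discount formula is copy-pasted, so a change to the
    // rule must be made (and remembered) in every copy.
    public static double memberPriceSmelly(double base) { return base - base * 0.10; }
    public static double seniorPriceSmelly(double base) { return base - base * 0.15; }

    // Neutralized: extract the shared rule; each price names its rate once,
    // and the behavior is unchanged.
    public static double discounted(double base, double rate) { return base * (1.0 - rate); }
    public static double memberPrice(double base) { return discounted(base, 0.10); }
    public static double seniorPrice(double base) { return discounted(base, 0.15); }

    public static void main(String[] args) {
        System.out.println(memberPrice(100.0)); // same result, one rule
        System.out.println(seniorPrice(100.0));
    }
}
```

The deeper problem the smell indicates is not the extra lines but the invisible coupling: two copies that must change together.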
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
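As a taste of how a calisthenics constraint can mechanically firm up a tactical pattern, here is an invented sketch of the "wrap all primitives" rule applied to a DDD-style value object (the domain and names are hypothetical):

```java
public class CalisthenicsDemo {
    // Instead of passing a raw long of cents around, money gets its own
    // type, so the "no negative amounts" rule lives with the data.
    public static final class Money {
        public final long cents;

        public Money(long cents) {
            if (cents < 0) throw new IllegalArgumentException("negative amount");
            this.cents = cents;
        }

        // Value-object behavior: operations return new instances.
        public Money add(Money other) { return new Money(this.cents + other.cents); }
    }

    public static void main(String[] args) {
        Money price = new Money(1250);
        Money shipping = new Money(499);
        System.out.println(price.add(shipping).cents); // 1749
    }
}
```

The constraint makes the value-object pattern hard to skip: once the primitive is wrapped, the invariant and the arithmetic have an obvious home.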
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
4. Intro
• Psychological factors in language design…
• … and compiler error messages…
• … and static analysis tools…
• … and funny pictures of cats.
5. Who is this guy?
• Compiler developer / language designer at Microsoft from
1996 through 2012
• Visual Basic, VBScript, JScript, VS Tools for Office, C# / Roslyn
• Static analysis architect for C# at Coverity since January
• I will use “we” totally inconsistently
• I have no formal background in static analysis
• I take an engineering rather than academic approach
10. The business case for C#
• Productive, successful professional developers who target
Microsoft platforms make those platforms more attractive
to Microsoft’s customers
• Original design goal was “a simple, modern, general-
purpose language”
• Any language with an 800 page specification is no longer
simple, but modern and general-purpose still apply
• Understanding developer psychology is key to achieving
wide adoption of any developer tool
11. Target C# Developer Characteristics
• Professionals, not amateurs
• Engineers, not hackers
• Programming experts, not line-of-business experts
• Pragmatists, not academics
• Skeptics, not true believers
• Conservatives, not radicals
13. Conservatism
• C# developers hate breaking changes imposed by tools
• Even trivial breaking changes are agonized over
• In 11 years and 6 releases C# has never added a new
reserved keyword
• New keywords are contextual so as to not be breaking
• This imposes considerable restrictions on new syntaxes
• For example, consider iterator blocks:
double yield = 123.4;
yield return yield;
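Expanded into a complete iterator, the slide's example compiles exactly as written; a minimal sketch (the `Demo` wrapper and method name are ours):

```csharp
using System;
using System.Collections.Generic;

static class Demo
{
    // "yield" has never been a reserved word: inside an iterator block it
    // works as a local variable name and, in "yield return", as a
    // contextual keyword at the same time.
    public static IEnumerable<double> Values()
    {
        double yield = 123.4;
        yield return yield;
    }

    public static void Main()
    {
        foreach (double v in Values())
            Console.WriteLine(v); // the single yielded value
    }
}
```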
14. Conservatism
• C# app developers also hate breaking their users
• Facilitating versionable components was a priority-one design goal
• Numerous seemingly-counterintuitive features actually mitigate
brittle-base-class failures:
class Base
{
public void M(int x) { }
}
class Derived : Base
{
public void M(double x) { }
}
...
derived.M(123); // Base.M or Derived.M?
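Run as a complete program, the call binds the "surprising" way by design; a minimal sketch of the rule (the return strings and the `Demo` wrapper are ours, added to make the binding observable):

```csharp
using System;

class Base
{
    public string M(int x) => "Base.M(int)";
}

class Derived : Base
{
    public string M(double x) => "Derived.M(double)";
}

static class Demo
{
    public static string Run()
    {
        Derived derived = new Derived();
        // Overload resolution prefers applicable candidates declared in the
        // most derived type, so Derived.M(double) wins even though
        // Base.M(int) matches the argument 123 exactly. If Base.M(int) is
        // added after Derived ships, existing call sites keep their old
        // meaning instead of silently re-binding -- a brittle-base-class
        // mitigation.
        return derived.M(123);
    }

    public static void Main() => Console.WriteLine(Run());
}
```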
16. Conservatism
C# 4.0 added dynamic dispatch to facilitate interoperability
with dynamic languages and “legacy” object models
• Enormous MVP community pushback
• I will use this feature correctly but my coworkers are
going to abuse it and then I’m going to have to fix their
god-awful hacked-up code
• Anything that makes the compiler less capable of finding
bugs is met with skepticism and resistance
• Completely redesigned based on early feedback
18. Error reporting psychology
• Dealing with correct code is literally the smallest problem
• “Roslyn” does syntactic analysis of broken code in the time
between keystrokes; semantic analysis takes a little longer
• Error messages need to be understandable, accurate, polite
and diagnostic rather than prescriptive
• Let’s take a look at some examples
20. Error reporting psychology
A params parameter must be the last
parameter in a formal parameter list
Is this saying:
• If there is a params parameter, it must be the last one? or
• The last parameter and only the last parameter must
always be a params parameter? Or
• The last parameter must be a params parameter; if others
are as well, that’s fine too?
The error is only clear if the feature is already understood
21. Error reporting psychology
Error messages must read the mind of a developer who
wrote broken code and figure out what they meant.
class C
{
public virtual static void M(){}
}
23. Error reporting psychology
Complex operator + (Complex x, Complex y) { ...
User-defined operator must be declared static and public
• This is an example of a prescriptive error done right
• The user absolutely positively has to do this to overload an operator
• Odds that they were not trying to overload an operator are low
25. Warnings are harder than errors
• Must infer the developer's erroneous thoughts
• Compiler must be fast
• This makes an opportunity for third-party tools
• Must be plausibly wrong
• A warning for code that no one would reasonably type is unhelpful
• Must be able to eliminate warning
• And ideally the warning should tell you how
• Must have low false positive rate
• Encouraging developers to change correct code is harmful
• We will return to this point later
26. What do C# developers want?
Rigidly defined areas of doubt and uncertainty
• Static type checking, type safety, memory safety…
• … that can be disabled if necessary.
• A compiler that infers developer intent…
• … with predictable behavior and understandable rules
• Actionable errors when inference fails…
• …rather than muddling on through and getting it wrong
28. C# was originally called SafeC
C# throws developers into the “Pit of Success”:
• Eliminate unimportant dangerous features entirely
• switch fall through
• Restrict dangerous features to clearly-marked unsafe code regions
• Eliminate implementation-defined behaviours
• x = ++x + x++; is well-defined in C# …
• …but still a bad idea.
• Define common undefined behaviours
• Accessing an array out of bounds causes an exception
• Mandate compiler warnings
There are numerous defects that the Coverity C/C++ analysis checkers
detect which are impossible, unlikely, or already warnings in C#.
Let’s look at a few dozen. Quickly. These are all defects found by Coverity
in C/C++ that are not worth checking in C#…
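The "well-defined behaviour" bullets above are easy to check; a small sketch (the method names are ours):

```csharp
using System;

static class Demo
{
    public static int WellDefined()
    {
        int x = 1;
        // C# evaluates operands strictly left to right, with side effects
        // applied in order: ++x yields 2 (x becomes 2), x++ yields 2
        // (x becomes 3), and the assignment then stores 2 + 2.
        // Well-defined -- but still a bad idea.
        x = ++x + x++;
        return x; // always 4, on every conforming implementation
    }

    public static bool OutOfBoundsThrows()
    {
        int[] a = new int[3];
        try
        {
            return a[5] == 99; // bounds-checked at runtime
        }
        catch (IndexOutOfRangeException)
        {
            return true; // defined behaviour: an exception, not corruption
        }
    }

    public static void Main() =>
        Console.WriteLine($"{WellDefined()} {OutOfBoundsThrows()}");
}
```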
29. C/C++ defects inapplicable to C#:
• Local read before assignment
• C# rejects programs that use uninitialized locals
• Uninitialized fields / arrays
• Fields and arrays are automatically zeroed out
• Treating a pointer to a variable as a pointer to an array
• Rare, must be marked as unsafe
• Buffer length arithmetic errors
• Strings and arrays know their lengths; checked at runtime
• Pointer/integer/char/bool/enum type errors
• Not inter-assignable in C# without explicit cast operators
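The first two bullets can be made concrete; a minimal sketch (the `Widget` type is ours, and the commented-out lines show what the compiler rejects):

```csharp
using System;

class Widget
{
    public int Count;   // fields are automatically zeroed
    public string Name; // reference fields start as null, never garbage
}

static class Demo
{
    public static int Run()
    {
        // int local;
        // Console.WriteLine(local); // error CS0165: use of unassigned local
        var w = new Widget();
        int[] a = new int[4]; // array elements are zeroed too
        return w.Count + a[3]; // always 0 + 0
    }

    public static void Main() => Console.WriteLine(Run());
}
```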
30. C/C++ defects inapplicable to C#:
• Failure to consistently check error return codes
• C# uses exceptions
• Accidental sign extension
• Either error or warning
• Implementation-defined side effect order
• Side effect order is well-defined
• Statement with no effect
• Is actually a parse-time error in C#
• Accidental use of ambiguous names
• C# requires that a simple name have a unique meaning in a block
31. C/C++ defects inapplicable to C#:
• sizeof mistakes
• C#’s sizeof operator only takes types
• Unintentional switch fall-through
• Is an error
• Unreachable code
• Is a warning
• Accidental assignment or comparison of variable to itself
• Yep, that’s a warning too
• Field never written or never read
• Man that’s a lot of warnings
• Missing return statement
• Is illegal
• malloc without free / free without malloc / allocator – deallocator mismatch / use after free
• Not needed in a garbage-collected language
• Dereferencing an address that lived longer than the storage it refers to
• References to variables may not be stored in long-term storage
• Accidental use of function pointer
• Method group expressions can only be used in strictly limited locations
• Overriding errors
• The language was designed to mitigate brittle base class failures by default
33. Defects common to C/C++ and C#
• Copy paste mistakes
• Expression contains variables but always
has the same result
• You checked for null here, you dereferenced
without checking there.
• Some infinite loops
• Dangling else and other indentation issues
• Array index out of bounds
• Integer overflow
• checked arithmetic is off by default
• Non-memory resource leaks
• Such as forgetting to close a file
• Stray semicolons
• Swapped arguments
• Unused return value
• Uncaught exception
• Missing or misordered critical sections
• Including non-atomic operations
inconsistently inside critical sections
• And many more!
And these are just a few that are
common to C and C#; there are
a whole host of defects specific
to C# programs that we could
find statically.
Let’s consider the psychological
aspects of static analysis tools
beyond the compiler.
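One bullet from the common-defects list, unchecked-by-default integer arithmetic, is easy to demonstrate (the method names are ours):

```csharp
using System;

static class Demo
{
    public static int Wraps()
    {
        int big = int.MaxValue;
        // Integer arithmetic is unchecked by default, so this wraps
        // silently to int.MinValue -- the overflow defect survives in C#
        // just as it would in C.
        return big + 1;
    }

    public static bool CheckedThrows()
    {
        int big = int.MaxValue;
        try
        {
            checked
            {
                big = big + 1; // throws OverflowException under checked
            }
            return false;
        }
        catch (OverflowException)
        {
            return true;
        }
    }

    public static void Main() =>
        Console.WriteLine($"{Wraps() == int.MinValue} {CheckedThrows()}");
}
```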
35. Developer Adoption is Key
• Soundness is explicitly a non-goal
• We don’t want to find all defects or even most defects
• We want every defect reported to be a customer-affecting bug
• Developers won’t adopt a product that they perceive as making
their jobs harder for no customer benefit
• Our business model requires adoption to drive renewals
• How do developers – who, remember, are using C# because they
like a statically-typed language – react to static analysis tools?
37. Developer psychology WRT analysis tools
• Egotistical
• I don’t need this tool for my code
• But my coworkers on the other hand…
• Clever management uses this trait to advantage
39. Developer psychology WRT analysis tools
• Skeptical, conservative, dismissive
• Resistant to change
• Quick to criticize “stupid” false positives
• The first five defects they see had better be true positives
41. Developer psychology WRT analysis tools
• “Busy” with, you know, “real work”
• Code annotations are unacceptable
• Analysis tool must adapt to customer’s build process
• Overnight analysis runs are acceptable – barely
43. Developer psychology WRT analysis tools
• Any change in what defects are reported on the same code
over time – a.k.a. “churn” – is the enemy
• Randomized analysis is right out, unfortunately
• Any improvement to our analysis heuristics can cause
unwanted churn
• We try to keep churn below 5% on every release
45. Developer psychology WRT analysis tools
• Responds well to perverse incentives
• Hard-to-understand defect reports are easy to ignore
• No downside to incorrectly triaging true positives as false positives
• Finding defects is hard; presenting evidence that prevents
incorrect classification as a false positive is harder
• Deep analysis with theorem provers can be worse than shallow
analysis with cheap heuristics.
• Presenting the result is insufficient; the developer must understand
the proof to fix the defect.
47. Displaying good defect messages
public void GetThing(Type type, bool includeFrobs)
{
bool isFrob = (type != null) &&
typeof(IFrob).IsAssignableFrom(type);
object instance = this.objects[this.name];
if (instance is IFrob && includeFrobs)
{ [...] }
else if (type.IsAssignableFrom(instance.GetType()))
{ [...] }
48. Displaying good defect messages
public void GetThing(Type type, bool includeFrobs)
{
Assuming type is null.
type != null evaluated to false.
bool isFrob = (type != null) &&
typeof(IFrob).IsAssignableFrom(type);
object instance = this.objects[this.name];
instance is IFrob evaluated to true.
includeFrobs evaluated to false.
if (instance is IFrob && includeFrobs)
{ [...] }
Dereference after null check:
dereferencing type while it is null.
else if (type.IsAssignableFrom(instance.GetType()))
{ [...] }
50. Management psychology
• The first time static analysis runs there may be thousands
of errors; typical rate is one defect per thousand LOC
• Academic answer: rank heuristics
• Pragmatic answer: ignore them all
• Simply ignore all defects in existing code
• Triage and fix defects in new code
• “Someday” get around to fixing defects in old code
• Why is this so popular?
• Old code is in the field. It works well enough. Risk is low.
• New code is unproven. It might work, or it might not. Risk is high.
52. Management psychology
• Management actually pays for the developer tools
• And typically has no idea how to use them effectively
• Middle management has perverse incentives too
• Time, cost and complexity are easily measured; quality is not
• “Never upgrade the static analysis tool before release”
• Worse tools are better; better tools are worse
53. Worse is better; better is worse
[Chart: known defects vs. time]
No tool improvements == Management gets bonus
54. Worse is better; better is worse
[Chart: known defects vs. time]
No tool improvements == Management gets bonus
Tool upgrades find more defects == Management gets no bonus
The fix rate is the same in these two graphs, but if the tool improves faster than the fix rate, no bonus.
55. Good news
If you have a well-engineered product that:
• makes good use of theoretical and pragmatic approaches,
• finds real-world, user-affecting defects, and
• takes developer and management psychology into account
Then you can make a positive difference
59. Conclusion
• Theoretical static analysis techniques are awesome; we can
and do use them in industry…
• … but doing all that math is actually only one small part of shipping
a static analysis product
• Understanding developer and management psychology is
necessary to ensure adoption of any developer tools
• C# was carefully designed to match a target developer mindset
• Coverity thinks about developer and manager psychology at every
stage in the analysis and overall product design
• Research into better ways to present defects would be awesome
60. More information
• Learn about Coverity at www.Coverity.com
• Read “A Few Billion Lines Of Code Later”
• Find me on Twitter at @ericlippert
• Or read my C# blog at www.EricLippert.com
• Or ask me about C# at www.StackOverflow.com