One premise underlies every argument about usability and security that has ever raged: "Secure software is doomed to be unusable, and usable software is doomed to be insecure." This talk will examine the faulty assumptions behind that belief, using the dual lenses of linguistics and formal language theory. We'll explore what makes software -- particularly software that developers use, e.g., APIs -- easy or difficult to use, how mismatches between what developers expect and what users expect lead to vulnerabilities, and how architects and developers can design and code for improved security and improved usability at the same time.
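The langsec idea behind this abstract — treat input as a formal language and fully recognize it before acting on it — can be sketched in a few lines. This is a minimal illustration under an invented `name:age` record format (the format and function names are hypothetical, not from the talk): a strict recognizer accepts only strings in the input language, while the lax parsing style many APIs encourage accepts almost anything.

```python
import re

# Strict recognizer for a hypothetical "name:age" record format.
# Langsec principle: accept input only if the WHOLE string matches the
# grammar, then hand validated, typed data to the rest of the program.
RECORD = re.compile(r"\A([A-Za-z]{1,32}):(\d{1,3})\Z")

def parse_record(raw: str):
    """Return (name, age) if raw is in the language, else None."""
    m = RECORD.match(raw)
    if m is None:
        return None          # reject: not in the language
    return m.group(1), int(m.group(2))

# The lax alternative: split and hope. Raises or mis-parses on
# hostile input instead of cleanly rejecting it.
def parse_record_lax(raw: str):
    name, age = raw.split(":", 1)
    return name, int(age)

print(parse_record("alice:30"))      # ('alice', 30)
print(parse_record("alice:30\nx"))   # None -- trailing data rejected
```

The `\A`/`\Z` anchors matter: `$` alone would accept a trailing newline, which is exactly the kind of expectation mismatch the abstract describes.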
Cats And Dogs Living Together: Langsec Is Also About Usability
1. Cats and Dogs Living Together:
Langsec Is Also About Usability
Meredith L. Patterson
SEC-T 2014
Stardate 68179.7
2. Forward Observer’s Log,
Science Vessel Beagle
“The worse your logic,
the more interesting the consequences
to which it gives rise.”
-- Bertrand Russell
3.
4.
5.
6. What is usability for devs?
• IDEs?
• Code completion?
• Developers’ main tools are libraries
• Nobody’s really studied what makes APIs
“good” or “bad” to use
7. “Sooner or later
you’re going to have to
stop throwing new functions
into that menu and clean it up.”
-- Jonathan Korman
8.
9.
10. The Prime Directive
“Whenever mankind interferes
with a less developed civilisation, no matter
how well intentioned that interference may be,
the results are invariably disastrous.”
-- Jean-Luc Picard
This is why we can’t get rid of PHP.
13. cf. Alter and Oppenheimer,
“Uniting the Tribes of Fluency to Form a Metacognitive Nation,” 2009
14.
15. Chunking
we’ll never
remember
this,
will we
nope
cf. George A. Miller, “The Magical Number Seven, Plus or Minus Two,” 1956
16. Semantics-First Design
• Every problem has a domain
• Every problem also has a range
– What are the effects of success?
– What are the effects of failure?
• Model how domain values map to range values
• Then invent domain-meaningful syntax to
describe the mappings
cf. Erwig and Walkingshaw,
“Semantics First! Rethinking the Language Design Process,” 2011
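As a sketch of the semantics-first approach, here is what modeling the range of certificate verification might look like before any API syntax is invented. The outcome names and policy functions are hypothetical, not any real library's API; the four outcomes match the ones discussed later in the notes:

```python
from enum import Enum, auto

# Hypothetical "semantics first" model: enumerate the range of
# certificate verification before choosing any API syntax.
class VerifyOutcome(Enum):
    INVALID_SIGNATURE = auto()   # fail closed, no question
    SELF_SIGNED = auto()         # valid signature, but outside the PKI
    UNKNOWN_ROOT = auto()        # valid chain to a root we don't have
    TRUSTED = auto()             # valid signature, trusted chain

def policy_fail_closed(outcome: VerifyOutcome) -> bool:
    """Caller-chosen mapping from range values to accept/reject."""
    return outcome is VerifyOutcome.TRUSTED

def policy_dev_mode(outcome: VerifyOutcome) -> bool:
    """A dev environment might deliberately accept self-signed certs."""
    return outcome in (VerifyOutcome.TRUSTED, VerifyOutcome.SELF_SIGNED)
```

The point is that the library reports what actually happened, and the developer chooses the policy, instead of the library collapsing four outcomes into one misleading boolean.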
17. cf. Georgiev et al, “The Most Dangerous Code in the World”, 2012
18. When a yes-or-no question isn’t
• CURLOPT_SSL_VERIFYHOST
– Sounds like a boolean, right?
– Nope! 2 = verify, 1 = “a CN exists”, and TRUE = 1
– “Future versions will stop returning an error for 1
and just treat 1 and 2 the same”
– 11 releases later, it’s still there
• But now I know it’s a valid cert, right?
– Only if CURLOPT_SSL_VERIFYPEER=TRUE too
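To make the trap concrete, here is a hypothetical Python model of the option's semantics (not real libcurl bindings). It shows why passing a boolean TRUE silently selects the weak mode:

```python
# Hypothetical model of the CURLOPT_SSL_VERIFYHOST trap: the option
# looks boolean but is actually a three-valued integer.
def verifyhost_semantics(value):
    # In C, TRUE is just 1, so a "boolean" True silently selects the
    # weak mode: only check that *some* Common Name exists in the cert.
    value = int(value)          # True coerces to 1, exactly as in C
    if value == 2:
        return "check CN matches hostname"
    if value == 1:
        return "check a CN exists (no hostname match!)"
    return "no host checks at all"
```

Calling `verifyhost_semantics(True)` yields the weak CN-exists mode, which is exactly the mistake the slide describes.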
20. Fine, I’ll use plain OpenSSL
• Great. Did you set SSL_VERIFY_PEER?
– And did you set a verify_callback with it?
• Either way, did you call
SSL_get_verify_result()?
• Gotta validate that host yourself, too
• GnuTLS is no better
– Returns negative values for some errors
– But 0 for others, like self-signed certs!
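A minimal sketch of the GnuTLS-style trap (hypothetical constants and functions, not the real API): some failures come back as negative return codes while others come back as zero plus a status flag, so checking only the sign of the return value proves nothing:

```python
# Hypothetical model of split error reporting: hard errors are negative
# return codes, but some unsafe outcomes are 0 plus a status flag.
SELF_SIGNED = 1 << 0

def naive_check(ret_code, status_flags):
    # The tempting one-liner: a self-signed cert slips through,
    # because its return code is 0.
    return ret_code >= 0

def careful_check(ret_code, status_flags):
    # Both channels must be consulted before declaring success.
    return ret_code >= 0 and status_flags == 0
```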
22. It Gets Better
• Some libraries have been around long
enough to watch their interfaces evolve
• C++ STL got a lot better in C++11
– They had to add move semantics to do it, but
threading is awesome now
– Confusing auto_ptr gone; shared_ptr and
unique_ptr do what they say on the tin
• But let’s talk about a security library.
23. You call this making it easy?
gpgme_ctx_t ctx;
gpgme_error_t err;
gpgme_data_t cipher, plain;
gpgme_engine_info_t engine;
[~20 lines of boilerplate]
err = gpgme_op_decrypt(ctx, cipher, plain);
if (err == GPG_ERR_NO_ERROR) {
[at least 8 more lines of boilerplate,
just to see what you decrypted]
}
...
Python has to be better, right?
24.
25. …maybe?
• ISConf GPG.py: wraps the gpg binary
• Very opinionated about:
– How keyrings are named
– Which options various operations use
• Leaves out a lot of functionality
– Want a detached signature? Too bad
“WHO PUTS UNITTESTS IN A TRY/EXCEPT BLOCK
WHICH CATCHES ALL EXCEPTIONS?!”
26. 2013: finally something usable
• All the command-line functionality!
• Public interface, no need to touch the rest
• Sanitizes untrusted inputs!
• kwargs for all the things!
• All in all, much more pythonic
• THANK YOU ISIS, WE LOVE YOU
27. “I believe that usability is a security concern;
systems that do not pay attention to the
human interaction factors involved
risk failing to provide security
by failing to attract users.”
-- Len Sassaman
28. Credits
• @skry
• Jonathan Korman
• The education panel at SLE2014, especially:
– Massimo Tisi
– Eric Walkingshaw and Martin Erwig
• The GIMP and G’MIC
• Paramount Pictures (and everyone at TrekCore)
• My sisters the elementary school teachers
Editor's Notes
Humans are really, really bad at reasoning about humans – including themselves. Even really experienced designers get surprised all the time by how users respond to the interfaces they develop. UX has become data-driven, because people fool themselves constantly about what they think they want, and only actual usage data can confirm whether the reasoning that drove your decisions was valid or flawed. Not really even confirm; more like hint.
“The street finds its uses for things – uses the makers never intended.” Tools that weren’t intended for contexts where security matters still end up getting used in life-or-death situations all the time; this might have been news when the Arab Spring broke out, but nobody has an excuse anymore.
But even with security-sensitive use cases popping up everywhere from Ukraine to Cupertino, we don’t have the luxury of A/B testing to empirically determine whether the tools we build provide the security properties we think they do. We have to get it right the first time.
There are certain arguments…
… that we keep having again and again and again….
… and I’m getting really tired of them. Security vs. usability is probably the *oldest*.
When people talk about usability in a development context, they’re usually talking about IDEs, code completion, and so on. Not here.
In an enterprise (har!) context, you often don’t get to pick what language to use, but you do have degrees of freedom about what libraries you use
Turns out there’s been next to no design research on what makes an API “good” or “bad” to use
Tons of effort on graphical interfaces, next to none on text interfaces. But some of these insights translate.
We talk about technical debt; Jonathan Korman talks about UX design debt. Sooner or later you’re going to have to stop adding methods to that API and refactor it into something people can remember how to use without having to look it up all the time.
We *can* talk about what makes *tools made from language* “easy” or “hard” to use
I’m not going to be able to speak decisively about that, because de gustibus non est disputandum
There has been very little research on this as well, but we can draw insights from cognitive science and its applications in education.
HOWEVER.
UX WITHOUT USER RESEARCH IS NOT UX (then click)
We need to do empirical research on iterating toward usability. Nadim Kobeissi started with usability, and has been iterating toward security since 2011, and *that’s actually working*. But it’s still dangerous. People in a hostile environment who have a risky tool they can use and a safe tool they can’t will use the risky tool every time. If you’re in Syria and your choice is between Facebook or not getting vital information to the people who need it, Facebook wins. I have friends here in Stockholm who used to give tech support to Syrian rebels. I say “used to” because one by one, the Syrians fell off the face of the net. We don’t know where they are or what happened to them. Let that sink in for a minute.
We have libraries that we know can provide security properties that people want and need, but as we’ll see, the design of those libraries often makes it really difficult to use them in a way that does provide those properties. We can test tools built with these libraries in non-hostile environments, and in a few minutes we’ll talk about how one team actually did. This implies we can also iterate toward library usability in a non-hostile environment, and we should be.
There’s a tendency among security practitioners to look down on people who don’t take security into account when they write code. When we come in and tell them “you need to be coding differently,” two things happen: they get upset, and they still get it wrong.
It’s hard to stay mindful of multiple concerns at the same time, even when those concerns are not inherently in conflict. And I don’t think that security and usability are inherently in conflict, but I do think that we as security practitioners need to take a step back and observe the choices that regular developers make – which concerns they work hard to satisfy and which ones they kick to the curb – and think about why they make the tradeoffs they do.
[click]
If people use something that’s terrible in most ways, it’s because that tool is doing something that other tools aren’t.
PHP is terrible for everything except:
Getting up and running quickly – there’s less to configure than any other language, drop a template in the right directory and you’re done. “Hello World” is literally a file containing the text “Hello World.”
Not leaking, since state that isn’t stored in a database or a memory cache is destroyed
Hilariously, this means PHP is more referentially transparent than other web languages, and violates REST principles less
In other words, PHP meets some of people’s concerns about how code is supposed to behave
It just violates nearly all of our concerns about how code is supposed to behave
Turns out, both of these matter a lot. When management cares about speed, “time to unblock” is your most important metric as a developer. And when there are enough other users who have gotten up and running, then gotten stuck the same place you have, someone has answered your question on StackOverflow.
If you can’t rely on it, it isn’t secure. But this goes farther than that; computation must be composable in order to get the right answer in the first place. If process 1 transforms A into B, and process 2 transforms B into C, then you can compose them sequentially into a system that transforms A into C – but only if the processes operate in that order. If process 2 goes to transform B into C and there isn’t a B there yet, but process 2 just grabs whatever data it sees and assumes that’s a B, all bets are off. And if a third-party adversary put that data there, process 2 and the entire environment it’s running on is in trouble.
If you have two computations that interact – one produces data that the other consumes, both write data to the same location, whatever – they are only composable if they don’t violate each other’s assumptions. Which is why in langsec we talk about boundaries of competence – those points of interaction where assumptions that must not be violated can be violated if one of the actors is malicious or even just sloppy.
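The composition argument in the notes above can be sketched in a few lines; the tagged tuples standing in for A, B, and C are purely illustrative:

```python
# Minimal sketch of sequential composition: each step recognises its
# input fully before processing it, instead of grabbing whatever data
# it sees and assuming the type.
def step1(value):
    assert value[0] == "A", "step1 only knows how to consume an A"
    return ("B", value[1] + 1)

def step2(value):
    assert value[0] == "B", "step2 only knows how to consume a B"
    return ("C", value[1] * 2)

def pipeline(value):
    # Composable only in this order: A -> B -> C.
    return step2(step1(value))
```

If `step2` skipped its check and consumed an A as if it were a B, the composed system would silently compute garbage; with the check, the boundary violation is caught at the point of interaction.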
Keeping track of a lot of assumptions is hard. The biggest problem that API designers face is how to enable developers to manage that state. And while we don’t have any silver bullets, we do know a few things about how brains work that can inform those design decisions. So let’s talk about those.
You’re probably already familiar with the concept of fluency with a language. If you’re fluent in a language, you find it easy to understand and express things in that language – it doesn’t take very much work. Processing fluency refers to how much work it takes to process information. It is subjective. There are several kinds of processing fluency; the ones we care about are:
Perceptual fluency – how easy is it to recognise a piece of information, especially based on what it looks like. When two pieces of information look too similar – like method names, or option names – perceptual fluency suffers.
Retrieval fluency – how easy it is to remember a piece of information. This is affected by several cognitive biases, particularly recency bias, which is your tendency to remember the most recent thing you encountered, and the availability heuristic, which is your tendency to stop thinking as soon as you remember that most recent thing. This quickly becomes a self-fulfilling prophecy: learn something the wrong way once, do it the wrong way until you force yourself to stop.
Decision fluency – how easy it is to make a decision. Having too many options, or options that are difficult to tell apart, makes it much harder to make decisions at all, much less the right one.
Recognition vocabulary: the words that you already know the meanings of and can recognise immediately. Also known as sight vocabulary.
In natural language, homonyms – words that are spelled the same, and sound the same, but have different meanings – cause confusion. This means it’s important to think about what you name things. (example: refactoring parser_project into ParserModel and PerParseContext. Sol suggests “ParseContext,” Milly says “no, that sounds like you’re parsing a context” – and she’s right. “Parse” is more available as a verb than as a noun, and we don’t want future maintainers to jump to the conclusion that it’s a verb here! So we fight the availability heuristic with a preposition, because “per” is almost always followed by a noun.)
Being too specific can also cause confusion. .NET: URLPathEncode – you’d think it would encode a URL, right? Only up until the ?, because everything after that is the query. But people tend to think of a full URL as the “path” to the resource it locates. Microsoft had to call this out specifically in their ASP.NET best practices because so many people opened themselves up to XSS by only encoding the path of the URL and not the query parameters.
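The UrlPathEncode pitfall can be reproduced with the Python standard library; `path_only_encode` is a hypothetical helper that mimics the behaviour described above, encoding only the path component and leaving the query untouched:

```python
from urllib.parse import urlsplit, urlunsplit, quote

# Hypothetical re-creation of the UrlPathEncode pitfall: encode the
# path component of a URL but leave the query string as-is.
def path_only_encode(url):
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=quote(parts.path)))

tainted = "https://example.com/a b?q=<script>"
# The space in the path gets encoded; the <script> in the query does not.
result = path_only_encode(tainted)
```

A developer who thinks of the whole URL as the "path" will assume `result` is safe to echo into a page, and the query string delivers the XSS payload untouched.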
People tend to remember things in groups, categorising them by the extent to which they’re interrelated. But if there are too many similar elements in a group, retrieval fluency suffers: unless you have some way to organise them into smaller and more closely related clusters, then nest those clusters into a hierarchical structure, you’ll have a hard time even remembering roughly how many elements there were. Effective working limit empirically seems to be somewhere between 5 and 9.
Implications for large namespaces should be obvious – and yes, this totally contradicts “flat is better than nested” from the Zen of Python. But even the Zen of Python thinks we should be using more namespaces.
OTOH, deep inheritance hierarchies create a related problem: sure everything’s organised, but the path back to the root is so long that it’s hard to remember which methods came from which superclass. And it’s worth studying whether implementing too many interfaces, or using too many mixins, creates problems.
“Write the man page first” – good advice, but not the same as “come up with the syntax first.”
Domain and range – you probably heard those in high school algebra class when you learned about functions. Same deal here.
So we still only have a rough idea of what makes a good API, but if we restrict “bad” to “fails to provide users with the security guarantees it promises,” there’s actually been some science on that.
2 years ago, a team from the University of Texas and Stanford did an exhaustive review of how applications and other libraries use SSL implementations like OpenSSL and GnuTLS, data-transport libraries like cURL and Apache HTTPClient, and language modules like Python’s httplib. Everybody was doing it wrong. Not just the little “everybodies” like people building storefronts on top of Drupal – although they were certainly vulnerable, given that of the 14 shopping-cart modules they looked at, only 2 had cert validation turned on, and they were both for Google Checkout, which doesn’t exist anymore. Google Wallet replaced it, and although it requires HTTPS to send things like credit card numbers around, there’s no way for Google Wallet to know that the connection hasn’t been MITMed.
No, by “everybody” we’re talking about things like the Amazon EC2 Java library. Android push notifications. Amazon Flexible Payments. Paypal. EVERYTHING.
CURLOPT_SSL_VERIFYHOST has to be set to 2 in order to check that the Common Name in the cert matches the server’s hostname. That’s all it verifies.
Version 7.28.1 introduced the “throw an error if CURLOPT_SSL_VERIFYHOST=1” behaviour in November 2012. 7.38.0 came out a week and a half ago. There have been 10 minor version bumps and a point release in almost 2 years, and the promised cleanup still hasn’t happened.
Worse, there’s another option that affects how CURLOPT_SSL_VERIFYHOST behaves, and it works differently depending on what SSL library you build cURL against. CURLOPT_SSL_VERIFYPEER actually is a boolean, and it defaults to TRUE, but if someone switches it off for whatever reason, cURL no longer checks that the cert is authentic – only that the names match. That’s with OpenSSL. Build against NSS and set VERIFYPEER to false, and cURL won’t even check that the hostname matches the Common Name. Your users will never know. Again with the confusing names: to your average dev, a “peer” is another client like yourself, and a “host” is a server. Yes, RFC 5246 (TLS) calls everyone peers. That doesn’t make it obvious.
Why would you ever want to only check that a host and CN match, and not check that the cert is authentic? Sure, in dev you might use a self-signed cert to get your code working before you pay good money for a cert; again, time to unblock is the crucial metric. But what’s blocking you is the libraries themselves, because they’ve established an invalid mapping between domain and range.
The domain consists of certs and CAs. Those are the inputs. The range – the set of possible outputs when you check the authenticity of a cert – has at least four possibilities. The cert can have an invalid signature, which means you’re done, fail closed. It can be valid but self-signed, which means you can’t authenticate it against the PKI. Similarly, it can have a valid signature that chains back to a root you don’t have, which actually happened when I went to pay my Belgian taxes online for the first time. Or it can have a valid signature and a valid trust chain back to the root.
Instead of reporting what actually happens so that devs can decide what subset of the range constitutes failure for their particular domain, and then whether to fail open or closed, libraries force devs to fail open in order to make any progress at all. Good luck remembering to switch that back to fail-closed.
Now you have even more problems. Instead of a library managing them for you badly, you have to manage them all yourself.
Per Georgiev et al, SSL_connect sets an error value if chain-of-trust verification fails, but if there is no callback, SSL_connect still succeeds if the error isn’t related to incorrect parsing. If you only check the return value, you don’t actually know what happened.
Lynx misunderstood GnuTLS so badly that although they checked the tls_status code, which is analogous to the error value that OpenSSL sets, both checks for GNUTLS_CERT_SIGNER_NOT_FOUND were only reached if tls_status was negative. 0 is not negative.
Providing too much choice is paralyzing. Inconsistent error reporting deludes people into thinking they’re safe when they aren’t.
Providing too little choice is frustrating. People will turn off all the security features they have to in order to get their work done, and who has time to go turn that back on?
If you want your library to be fully functional, make it express the circumstances that have resulted from its actions in a consistent manner, and let developers make their choice from there. And for crying out loud, stop hiding error codes behind what appear to be successful return values. People can only observe the principle of full recognition before processing if they know where to find what they have to recognise. If you make them look in more than one place, you’re setting them up for failure.
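One way to follow that advice is a single result object that reports everything the caller must recognise in one place; the field names here are illustrative, not any real library's API:

```python
from dataclasses import dataclass

# Hedged sketch of "full recognition before processing": one result
# object carrying every outcome the caller must recognise, instead of a
# success-looking return value plus a side-channel error code.
@dataclass
class TlsResult:
    handshake_ok: bool
    chain_trusted: bool
    hostname_matches: bool

    def safe(self) -> bool:
        # Everything a caller needs lives in one place; no second
        # function call to discover what really happened.
        return self.handshake_ok and self.chain_trusted and self.hostname_matches
```

The caller still decides the policy (fail open or closed), but never has to hunt for hidden error state after an apparently successful return.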
The changes they had to make to C++ to support these STL changes reach way down into the language. In order to support move constructors and move assignment, they had to add an entirely new kind of reference – a reference to the right-hand side of an assignment operation.
And they weren’t afraid to deprecate things! Not only auto_ptr; adding move semantics deprecates entire idioms that C++ users had become resigned to, like swapping a container with a temporary copy of itself to get rid of extra capacity or making heavyweight classes inherit from a useless but small base class to reduce overhead during temporary copy construction.
GnuPG Made Easy made a lot of the same usability mistakes that OpenSSL did. It has its own squirrelly I/O abstraction layer; you have to set up a “context” god-object first; it lumps disparate data types like keys and usernames together under a god-struct where some members are only valid for certain types.
Error handling is a little better, in that most functions return gpgme_error_t … but then all the operations you care about, like decryption or signing, have corresponding gpg_op_foo_result functions that return a gpgme_foo_result_t that does not contain the result. The actual result is in the context, but the only way to get it out of the context is to use the I/O abstraction layer functions that recapitulate the C stdio library, and if you do anything else to the context before retrieving the result_t, you can kiss that data goodbye. You didn’t need to know who that message was encrypted to, right?
But it was 2000 when Werner Koch designed this API. And it’s C. Python has to be better, right?
This example has been in pygpgme in almost exactly this form since revision 4, in January 2006. A lot of the boilerplate is hidden, but this is still the same idiom. It’s tightly coupled to the gpgme API, and not very pythonic. An older library, pyme, is just about as bad.
I end up having the same problems with build systems all the time – people write them assuming that the way they do it is the way every right-thinking person ought to do it.
[click]
But were they necessarily all that right-thinking? Who left that comment, and where?
Last year, Isis Lovecruft from the Tor Project went and rewrote GPG.py
The interface surface is actually smaller now thanks to the refactoring.
With many tasks, appearing to have accomplished something is just as good as having actually accomplished it. This is manifestly not the case for APIs. As a result, you need to not just make it easy to do what you want to do, but hard to use it wrong. APIs need to actively be difficult to use in ways that are dangerous to developers, because those dangers propagate out to end-users.
We’ve learned some hard lessons about what makes APIs difficult to use correctly. We’re getting better at making APIs easier to use correctly. Now we need to figure out how to make them hard to use wrong.