As part of the 2014 USENIX Release Engineering Summit West, I presented a talk about packaging software and what's wrong with current trends.
Here's the abstract:
Reliably distributing software is a notoriously difficult problem, and almost every operating system and programming language vendor has tried to solve it. This has led to a herd of packaging systems, almost none of which are cross-compatible; some manage system-level software, while others focus on extending their own language (often by trampling on system-level software). And like all competing standards, every packaging system comes with its own sharp corners, dull edges, and hidden idiosyncrasies to deal with along the path to packaging happiness. In an attempt to answer the question "How do I install this software and ensure that its dependencies are fulfilled?", some novel solutions have begun to see popular adoption. But a lot of these newer tools and techniques tread the same ground as their predecessors while overlooking the lessons that were learned along the way.
I'll talk about the state of native packaging systems on some popular platforms (Debian/Ubuntu, RHEL/CentOS/Fedora, and Mac OS X), packaging systems for popular languages (Ruby, Python, Perl, and Node) and the ways that developers are attempting to work around the limitations of these systems. I'll review the reasons that tools like curlbash, FPM, and omnibus packages have become popular by sharing lessons I've learned while working through these systems. While this will be an amusing presentation, I'll show how native packages can address the concerns that have pushed Release Engineers and Developers away. I will also talk about what native packaging systems can learn from the next generation of packaging tools.
The original abstract is available here:
https://www.usenix.org/conference/ures14west/summit-program/presentation/mckern
Steelcon 2015 - 0wning the Internet of Trash - infodox
My presentation slides from Steelcon 2015 on "Owning the Internet of Trash", a presentation on the exploitation of endemic vulnerabilities in the so-called "internet of things", with a focus on finding vulnerabilities in, exploiting, and gaining persistent access to routers and other such embedded devices.
This talk was recorded; a video will be linked soon. It went over some basics of analysing firmware, hardware, and the like to find bugs in things and hack the planet!
These are the slides accompanying the talk I gave at BSides Hannover 2015, discussing the reverse engineering and exploitation of numerous vulnerabilities in Icomera Moovmanage products, along with the post-exploitation of such, including the potential creation of a firmware rootkit.
BSides Edinburgh 2017 - TR-06FAIL and other CPE Configuration Disasters - infodox
This talk discusses vulnerabilities involving poor implementations of the TR-069 and TR-064 protocols, both on the CPE (Customer Premises Equipment) end and on the ISP's end of the whole flaming pile of crap - the ACS (Auto Configuration Server).
Topics discussed include the TR-064 "TR-06FAIL" command injection vulnerability exploited by the "Annie" Mirai variant in late 2016, with a nod to previously discovered issues such as Misfortune Cookie. The talk then goes on to discuss the total shit show that is the entire TR-XXX ecosystem of protocols, demonstrating that there is also a complete and total disregard for software security and proper development practices on the other end: it shows off a (zero-day at the time of the talk) exploit in the FreeACS implementation of an ACS server which allows for total remote compromise of the ACS, along with abusing it as a command-and-control system to hijack all of the CPE devices associated with it.
This is the first in a series of talks on the matter. There will be sequels. There will be more bugs. There will be ISP engineers working massive overtime. There will be tears, and blood, and whiskey.
Contributing to WordPress core - a primer - lessbloat
A few thoughts on getting started contributing code and designs to WordPress core. Geared towards designers and front-end developers who may not have a lot of experience with Trac, IRC, and running
Docker IR - incident response in a containerized, immutable, continually deploy... - Shakacon
Incident response is generally predicated on the ability to examine a system post-breach: pull memory dumps, file system artifacts, system logs, etc. But what happens when that system was part of a fleet of containers? How do you pull a memory dump from an ephemeral container? How do you do forensics when the container and the host that ran the container have been gone for days? Even assuming you catch an intrusion while it's ongoing, how do you respond effectively if you can't access the systems in question because they are read-only, with no SSH access? Coinbase has spent the last year attacking these challenges in an AWS-based, immutable, and fully containerized infrastructure that stores over a billion dollars of digital currency. Come see how we do it.
This presentation will sum up how to do tunnelling with different protocols, detailed from several perspectives. For example, companies are fighting hard to block exfiltration from their networks: they use HTTP(S) proxies, DLP, and IPS technologies to protect their data, but are they protected against tunnelling? There are many interesting questions to answer for users, abusers, companies, and malware researchers. Mitigation and bypass techniques will be shown during this presentation, which can be used to filter any tunnelling on your network or to bypass misconfigured filters.
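As a tiny illustration of the kind of covert channel the presentation covers, arbitrary bytes can be smuggled inside DNS-style hostnames by Base32-encoding them into labels. This is my own minimal sketch of the encoding step, not code from the talk; `example.com` is a placeholder domain:

```python
import base64

def encode_for_dns(data: bytes, domain: str, max_label: int = 63) -> str:
    """Pack arbitrary bytes into DNS-safe labels under a controlled domain.

    Base32 keeps the payload within the hostname character set; DNS limits
    each label to 63 characters, so the payload is chunked accordingly.
    """
    payload = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [payload[i:i + max_label] for i in range(0, len(payload), max_label)]
    return ".".join(labels + [domain])

def decode_from_dns(name: str, domain: str) -> bytes:
    """Reverse the encoding: strip the domain, rejoin labels, re-pad Base32."""
    payload = name[: -(len(domain) + 1)].replace(".", "").upper()
    payload += "=" * (-len(payload) % 8)  # restore Base32 padding
    return base64.b32decode(payload)
```

A defender's filter that only inspects payload contents would miss this entirely, which is why the talk's mitigation angle matters.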
PHP Conference Argentina 2013 - Independizate de tu departamento IT - Habilid... - Pablo Godel
A PHP/web programmer is not complete without server administration knowledge. When you look for a job, you will most likely run into a requirement for knowing how to configure a server (Linux, Apache, MySQL, and PHP). Your chances of landing that job are better if you know about servers.
Dirty Little Secrets They Didn't Teach You In Pentest Class v2 - Rob Fuller
This talk (hopefully) provides pentesters with some new tools and tricks. It is basically a continuation of last year's Dirty Little Secrets They Didn't Teach You in Pentest Class. Topics include: OSINT and APIs, certificate stealing, f**king with incident response teams, 10 ways to psexec, and more. Yes, mostly using Metasploit.
Presentation delivered by Darran Lofthouse, Principal Software Engineer, Red Hat & Kabir Khan, Principal Software Engineer, Red Hat, during London JBoss User Group event on the 21st of May 2014.
Daniel Stenberg explains HTTP/3 and QUIC at GOTO 10, January 22, 2019. This is the slideset, see https://daniel.haxx.se/blog/2019/01/23/http-3-talk-on-video/ for the video.
HTTP/3 is the designated name for the upcoming next version of the protocol, which is currently under development within the QUIC working group in the IETF.
HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead uses QUIC.
The talk covers why the new protocols are deemed necessary, how they work, how they change what is sent over the network, and what some of the coming deployment challenges will be.
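One concrete deployment detail worth knowing: a server typically advertises HTTP/3 support through the Alt-Svc response header (RFC 7838), which a client must parse before it can switch transports. This is my own illustrative sketch of such a parser, not code from the slides:

```python
def parse_alt_svc(header: str) -> dict:
    """Parse an Alt-Svc header value, e.g. 'h3=":443"; ma=86400, h2=":443"'.

    Returns a mapping of protocol-id -> (alternative authority, parameters).
    A real client would also honour 'clear' and the full quoting rules
    from RFC 7838; this sketch only handles the common well-formed case.
    """
    services = {}
    for entry in header.split(","):
        parts = [p.strip() for p in entry.split(";")]
        proto, _, authority = parts[0].partition("=")
        params = {}
        for param in parts[1:]:
            key, _, value = param.partition("=")
            params[key.strip()] = value.strip()
        services[proto.strip()] = (authority.strip('"'), params)
    return services

# A client seeing "h3" here may retry the origin over QUIC on UDP port 443.
advertised = parse_alt_svc('h3=":443"; ma=86400, h2=":443"')
```

The `ma` (max-age) parameter tells the client how long it may cache the advertisement, which is part of why HTTP/3 adoption is gradual per-connection rather than a hard cutover.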
As you will see in the video, there are a lot of questions from an interested and educated audience.
Daniel Stenberg is the founder and lead developer of the curl project. He has worked on HTTP implementations for over twenty years. He has been involved in the HTTPbis working group in the IETF for ten years and he worked with HTTP in Firefox for years before he left Mozilla. He participates in the QUIC working group and is the author of the widely read documents "HTTP2 explained" and "HTTP/3 explained".
Giving back with GitHub - Putting the Open Source back in iOS - Madhava Jay
My experience helping take an in-house Swift library and turn it into an open-source framework available on GitHub and package-management repositories like CocoaPods. Any questions or feedback appreciated: @madhavajay
A presentation covering some of the interesting things going on with PowerShell in the infosec community. I give a brief overview of what PowerShell is, then go over some interesting aspects of three different offensive PowerShell frameworks, and finally give a demo of how a local user can escalate to domain admin privileges using just these frameworks.
As part of the 2016 USENIX LISA conference, I presented a talk about technical hiring, and the disservices we do to both candidates and interviewers during the Standard Technical Interview.
Here's the abstract:
A strong team is more than a loosely affiliated assemblage of individuals or an echo chamber of like-minded people who speak as one multi-headed hive mind. Hiring new people for your strong team is probably one of the most challenging tasks your team will have to do. All too often strong technical teams use some variation of the "Standard Technical Interview." This self-propagating interview "process" seems to be designed to both wear out the team giving the interview and emotionally flatline any candidates subjected to it.
I believe that hiring is one of the most important contributions you will make to your organization. Hiring well should be about more than just getting Unicorn candidates to sign on the dotted line. After years of technical interviews with different types of organization, I’ve realized that most technical interviews suffer from focusing too deeply on problems the team had yesterday instead of the team they want to be tomorrow.
I want to talk about what happened when my team tried treating candidates like peers who already had the job instead of giving in to Repetition Compulsion and inflicting trial-by-combat on them just because that's how we were hired. Interviewers felt like they had a better grasp on their role in the process, candidates did not feel like there was some secret handshake or passphrase they were missing, and the company didn’t collapse even though nobody wrote pretend-code on a whiteboard!
(Credit for the amazing Thunderdome illustration in the title slide to Matthew Elliot)
Original abstract: https://www.usenix.org/conference/lisa16/conference-program/presentation/mckern
Named one of the top 10 coolest storage startups of 2014 by CRN, NAKIVO is delivering a new way for cloud providers, enterprises, and SMBs to protect their VMware environments more reliably, efficiently, and cost effectively. NAKIVO Backup & Replication is VMware-certified, purely agentless, and can be deployed on both Linux and Windows. Featuring a simple and intuitive Web UI, the product can back up and replicate VMware VMs onsite, offsite, and to private/public clouds. NAKIVO Backup & Replication supports live applications and databases and provides data deduplication, instant file recovery, instant Exchange object recovery, flash VM boot, and network acceleration.
Spacewalk (http://spacewalk.redhat.com/) has been deployed locally by the Fuqua School of Business to manage a Linux server plant of roughly 70 CentOS and Scientific Linux servers. Advantages include a GUI "scoreboard" of all servers, central management and scheduling of updates/reboots, server configurations, locally-built packages, and kickstart images. Compliance auditing, service monitoring, and event alerting are also available. Servers can be grouped in multiple and arbitrary ways to meet local needs. Authentication to the interface can be controlled via PAM, and several levels of authorization (roles) can be assigned. The product has enabled more consistent and secure management of scores of diverse servers by less than 1.5 FTE at Fuqua.
Package managers and Puppet - PuppetConf 2015 - ice799
This talk will begin by explaining what a package manager is and how package managers work, at a high level. Next, we'll look at the common pattern, seen all over the internet, of compiling software in a Puppet manifest and discuss why this is not ideal. This talk will conclude by showing how you can add package repositories to your infrastructure using Puppet and what settings are important for ensuring secure access to remote package repositories.
Puppet Camp LA 2015 talk covering: packages, package managers, puppet, and tips, tricks, and puppet modules for setting up secure package repositories.
This is *not* my presentation by any means. It is the one Isaac Schlueter gave at NodeConf. I had to upload it here because it was only available in .key format here: http://dl.dropbox.com/u/3685/presentations/nodeconf-npm/index.html
This talk aims to cover a breadth of topics about package management and Chef, starting with some fundamentals and continuing on to more advanced techniques and tips.
This talk will begin by explaining why packages and package management are fundamental tenets of managing infrastructure. We'll examine why the common practice of simply running "make install" in a Chef recipe is a bad idea and what users can do when they see recipes like this in the wild.
An extremely common problem with package management is misconfiguration of package repositories and client software. Most of the existing documentation available does not cover all of the configuration required to correctly setup and access package repositories securely and lots of configurations are simply copy-and-pasted from unreliable sources.
In order to combat some of this, the talk will continue by examining some common Chef resources for controlling package repositories, taking care to go over commonly misunderstood and misused options. We'll examine how to generate secure package repositories, what options must be set in Chef recipes to access repositories securely, and what bugs you may bump into in your infrastructure that may prevent you from securely accessing package repositories.
Finally, this talk will wrap up with some concluding tips, tricks, and thoughts about packaging and how to use it to carefully manage infrastructure.
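To make the "securely configured repository" point concrete outside of Chef, here is an illustrative check (my own sketch, not from the talk) that a one-line apt `sources.list` entry fetches over TLS and pins a signing keyring; the repository URL and keyring path are made up:

```python
def repo_entry_is_secure(line: str) -> bool:
    """Heuristic check of a one-line apt sources.list entry.

    Flags entries that fetch over plain http or that lack a pinned
    signed-by keyring, two of the most common misconfigurations.
    """
    parts = line.split()
    if not parts or parts[0] not in ("deb", "deb-src"):
        return False
    options = []
    if len(parts) > 1 and parts[1].startswith("["):
        # Options live in brackets: deb [arch=amd64 signed-by=/path.gpg] url ...
        closing = next(i for i, p in enumerate(parts) if p.endswith("]"))
        options = " ".join(parts[1:closing + 1]).strip("[]").split()
        url = parts[closing + 1]
    else:
        url = parts[1]
    uses_tls = url.startswith("https://")
    pinned_key = any(o.startswith("signed-by=") for o in options)
    return uses_tls and pinned_key
```

The same two properties (transport security and an explicit trust anchor) are what the Chef resources in the talk are ultimately configuring.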
https://youtu.be/-HJ7EZ85THU
DevOpsDays Baltimore 2017.
In high security environments, we are often behind proxies, firewalls or obnoxious corporate policies that disallow access to Github or RubyGems. What gives?! In this session, I will talk about what problems we need to solve to build and manage environments in an offline world and how infrastructure as code is at the heart of making it happen.
A story of how we went about packaging Perl and all of the dependencies that our project has.
Where we were before, the chosen path, and the end result.
The pitfalls, and a look at the pros and cons of the previous state of affairs versus those of the end result.
OSDC 2016 - Continuous Integration in Data Centers - Further 3 Years later by ... - NETWAYS
I gave a talk titled "Continuous Integration in data centers" at OSDC in 2013, presenting ways to realize continuous integration/delivery with Jenkins and related tools. Three years later we have gained new tools in our continuous delivery pipeline, including Docker, Gerrit, and Goss. Over the years we also had to deal with different problems caused by faster release cycles, a growing team, and gaining new projects. We therefore established code review in our pipeline, improved our test infrastructure, and invested in our infrastructure automation. In this talk I will discuss the lessons we learned over the last years, demonstrate how a proper continuous delivery pipeline can improve your life, and show how open source tools like Jenkins, Docker, and Gerrit can be leveraged for setting up such an environment.
Steelcon 2014 - Process Injection with Python - infodox
These are the slides accompanying the talk given by Darren Martyn at the Steelcon security conference in July 2014 about process injection using Python.
Covers using Python to manipulate processes by injecting code on x86, x86_64, and ARMv7 platforms, and writing a stager that automatically detects which platform it is running on and intelligently decides which shellcode to inject, and via which method.
The Proof of Concept code is available at https://github.com/infodox/steelcon-python-injection
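The "detect the platform, then pick the matching shellcode" step the abstract describes can be sketched roughly like so. This is an illustrative outline with placeholder bytes, not code from the PoC repository; the real payloads and injection methods live there:

```python
import platform

# Illustrative placeholders; a real stager would carry actual payload bytes
# per architecture, keyed by what platform.machine() reports.
SHELLCODE = {
    "i386": b"\x90" * 4,
    "x86_64": b"\x90" * 8,
    "armv7l": b"\x00" * 4,
}

def pick_shellcode(arch: str = "") -> bytes:
    """Select the payload matching the target CPU architecture.

    When no architecture is given, the host's own is detected via
    platform.machine(), which returns strings like 'x86_64' or 'armv7l'.
    An unrecognized architecture is a hard error rather than a wrong guess.
    """
    arch = arch or platform.machine()
    # Normalise the i386/i486/i586/i686 family to a single key.
    if arch.startswith("i") and arch.endswith("86"):
        arch = "i386"
    if arch not in SHELLCODE:
        raise RuntimeError(f"no payload for architecture {arch!r}")
    return SHELLCODE[arch]
```

Failing closed on unknown architectures matters here: injecting shellcode built for the wrong ISA would crash the target process rather than degrade gracefully.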
OSMC 2017 | Groovy, There is a Docker in my Dashing Pipeline by Kris Buytaert - NETWAYS
Dashing, or rather Smashing, is an awesome monitoring dashboard, but it's a PITA to deploy. This talk will document the efforts we went through to make the deployment of both Dashing and the dashboards fully automated. It will also show how we test these dashboards using Docker and how we build these pipelines with the Jenkins DSL.
Introduction to Docker at SF Peninsula Software Development Meetup @Guidewire - dotCloud
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
[HES2013] Virtually secure, analysis to remote root 0day on an industry leadi... - Hackito Ergo Sum
Today most networks present one “gateway” to the whole network – The SSL-VPN. A vector that is often overlooked and considered “secure”, we decided to take apart an industry leading SSL-VPN appliance and analyze it to bits to thoroughly understand how secure it really is. During this talk we will examine the internals of the F5 FirePass SSL-VPN Appliance. We discover that even though many security protections are in-place, the internals of the appliance hides interesting vulnerabilities we can exploit. Through processes ranging from reverse engineering to binary planting, we decrypt the file-system and begin examining the environment. As we go down the rabbit hole, our misconceptions about “security appliances” are revealed.
Using a combination of web vulnerabilities, format string vulnerabilities and a bunch of frustration, we manage to overcome the multiple limitations and protections presented by the appliance to gain a remote unauthenticated root shell. Due to the magnitude of this vulnerability and the potential for impact against dozens of fortune 500 companies, we contacted F5 and received one of the best vendor responses we’ve experienced – EVER!
https://www.hackitoergosum.org
CyanicLab, an offshore custom software development company based in Sweden,India, Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
How Recreation Management Software Can Streamline Your Operations.pptxwottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I didn't get rich from it but it did have 63K downloads (powered possible tens of thousands of websites).
10. Distributing software sucks
Shipping new platforms is so hard
Cross-platform packaging is so hard
Unpredictable user-space is so hard
Moving the packaged bits is so hard
12. Who among us knows this pain?
sad@roberto Downloads $ wget --quiet http://ftpmirror.gnu.org/gcc/gcc-4.9.1/gcc-4.9.1.tar.bz2
sad@roberto Downloads $ tar xjf gcc-4.9.1.tar.bz2
sad@roberto Downloads $ cd gcc-4.9.1/
sad@roberto Downloads $ ./configure
./configure: line 532: sed: command not found
./configure: line 1371: sed: command not found
./configure: line 1920: sed: command not found
./configure: line 2291: sed: command not found
configure: error: cannot run /bin/sh ./config.sub
./configure: line 361: sed: command not found
./configure: line 310: sort: command not found
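"Command not found" errors like the ones above usually mean the user's PATH is mangled, not that sed is actually missing. A minimal sanity check you could run before kicking off a long build (a sketch, not part of any build system; the tool list is just the usual suspects):

```shell
# Check that the core utilities ./configure shells out to are reachable
# before starting a long build (hypothetical pre-flight check).
for tool in sed sort grep awk; do
    command -v "$tool" >/dev/null || echo "missing from PATH: $tool"
done
echo "PATH check done"
```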
13. This was a problem because
the customer's time has value
17. Dependency management
calculon ~ # apt-get install cmake
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  cmake-data emacsen-common libarchive12 libnettle4 libxmlrpc-core-c3
The following NEW packages will be installed:
  cmake cmake-data emacsen-common libarchive12 libnettle4 libxmlrpc-core-c3
0 upgraded, 6 newly installed, 0 to remove and 51 not upgraded.
25. .rpm
• Managed by the recursively named
"RPM Package Manager" & yum
• A compressed cpio archive of binaries & text files
• Post-installation tasks are shell scripts
26. .deb
• Managed by dpkg & apt, the
"Advanced Package Tool"
• An ar archive containing two gzipped tarballs & a small text file
• Post-installation tasks are shell scripts
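You can see that ar-plus-tarballs structure firsthand by building a throwaway package with dpkg-deb and listing its members. Everything here (the hello-demo name, the paths) is made up for illustration, and newer dpkg versions may compress the inner tarballs with xz or zstd instead of gzip:

```shell
# Build a minimal throwaway .deb and list its ar members
# (hypothetical package "hello-demo"; requires dpkg-deb and ar).
mkdir -p pkg/DEBIAN pkg/usr/share/hello-demo
cat > pkg/DEBIAN/control <<'EOF'
Package: hello-demo
Version: 1.0
Architecture: all
Maintainer: Demo <demo@example.com>
Description: throwaway demo package
EOF
echo "hello" > pkg/usr/share/hello-demo/greeting
dpkg-deb --build pkg hello-demo_1.0_all.deb
ar t hello-demo_1.0_all.deb   # debian-binary, control.tar.*, data.tar.*
```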
27. Mac .pkg
• Used by Mac OS X, and often delivered
in a .dmg (disk image) or a .zip file
• xar compressed archive, containing a
binary file, two archives, and an XML
document
• post-installation tasks are still
shell scripts
28. About all those post-install
shell scripts
Maybe they're not that safe, but the
surface area of this problem is big.
That doesn't mean we needed "dash"
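For concreteness, a maintainer script is just a plain shell hook that the package manager invokes with an action argument. This is a hand-rolled sketch (the mydaemon name and echo are invented, not lifted from a real package), exercised the way dpkg would call it:

```shell
# Write and exercise a Debian-style postinst hook (hypothetical daemon name).
# dpkg invokes these hooks with an action argument such as "configure".
cat > postinst <<'EOF'
#!/bin/sh
set -e
case "$1" in
    configure)
        # real packages create users, reload init systems, etc. here
        echo "post-install: registering mydaemon"
        ;;
    abort-upgrade|abort-remove|abort-deconfigure)
        ;;
esac
EOF
chmod +x postinst
./postinst configure
```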
29. Ruby .gem, Python .egg,
and Node .npm
• These are library managers with
delusions of grandeur
• Reuses the "download, decompress,
configure, build, install" patterns,
which hasn't got much spam in it
• Constant compilation is a bummer
30. What about... ?
#realtalk
We only have 45 minutes, and I hope
you're going to have some questions for
me to evade
36. Full Disclosure
• Puppet Labs does use the curl|bash
technique as an option for our PE
agent installation
• If you don't trust your own Puppet
Master, who do you trust?
• (ALL THE COOL KIDS WERE DOING IT)
38. curl | bash often assumes
• There is no air-gap
• Every request is a safe & sane request
• That HTTPS is good enough
39. curl | bash often forgets
• Broadband coverage is <100%
• Mirrors exist
• HTTPS secures transport, not content
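Since HTTPS only protects the pipe, one mitigation is to verify the content itself: download to disk, check a checksum, then run. A minimal sketch, using a local stand-in file instead of a real download, and computing the "expected" hash in place of one the vendor would publish out-of-band:

```shell
# Download-then-verify instead of curl | bash. The "download" is a local
# stand-in file; in practice the sha256 comes from the vendor, out-of-band.
printf 'echo hello from installer\n' > installer.sh
expected=$(sha256sum installer.sh | cut -d' ' -f1)   # pretend this was published
echo "${expected}  installer.sh" | sha256sum -c - && sh installer.sh
```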
40. curl | bash totally ignores
• The benefits of reusability
• The fragility of shell scripts
• The fragility of shells
41. Security is hard
• RVM recently introduced hand-rolled
GPG signing*
• Thread had 48 comments within a
week, almost universally about the
implementation
• Broke semver, automation, and hearts
* https://github.com/wayneeseguin/rvm/issues/3105
43. Isn't that from Chef?
• Sure, but so is Test Kitchen
• Builds packages while still controlling
the entire dependency stack
• Lots of love from users with
complicated dependency stacks
44. Omnibus is one way to skin
the entire cat
• Abstracts (instead of removes)
dependency management
• Only builds packages for the platform
it's installed on
• You're going to want to know Ruby
46. Effing Package Managers
•General purpose swiss-army knife of
package building
•Works around a lot of the shortcomings
of existing package managers
•Jordan Sissel is a SAINT (Shout out to
#hugops!)
47. "Common packaging patterns, a
distaste for existing packaging
practices, and some hate-driven
development yielded FPM! Add
some amazing contributions in
code, bugs, features, and support
from the community and boom we
have modern FPM."
Jordan Sissel
My inbox, Oct 10 2014
48. Effing FPM
• Swiss army knives are rarely the best
tool for a given job
• General purpose in this case means a
lot (~150ish) of command line flags
• Still infinitely better than curl | bash
50. RPM Packaging can
be tough
• RPM Spec files are weird
• Kind-of M4, kind of Shell, all obtuse
• Oh, and kind-of Make; only kind-of
• Sort-of competing RPM standards
51. Deb Packaging can feel
like penance
• "debian/" directories are outright
hostile to man & beast alike
• Debian "Helpers" usually don't
• dpatch can use unified diffs (sane) or
shell scripts (what?!)
52. Conflation of purpose
• Some library managers try to install
executables, e.g. gem, pip, npm
• Remember when I said "delusions of
grandeur"?
(Google Image Search was kind of
useless here)
53. But really, I just have a
hypothesis!
• Developers love solving new problems
• Sometimes they confuse their
problems for the customer's problems
• Maybe packaging isn't a solved
problem yet, but it's close
55. Sometimes the only choices you have
are bad ones; but you still have
to choose.
56. TL;DR: this problem is
(mostly) solved
Stop writing new installers
from scratch
Give your customers the best
packages possible
Don't forget Pareto
(any number of 80/20 rules)
57. Thank you
You're wonderful. Thank you for letting
me rant at you for as long as you did.
mckern@puppetlabs.com
@the_mckern