Monitoring as an entry point for collaboration - Julien Pivotto
In recent years we have been building complex stacks made from many components, all of them backed by multiple teams. This talk will present how you can use monitoring to look at the business side and have everyone looking at the same dashboards, making cooperation a reality.
Data analytics in the cloud with Jupyter notebooks - Graham Dumpleton
Jupyter Notebooks provide an interactive computational environment in which you can combine Python code, rich text, mathematics, plots and rich media. They provide a convenient way for data analysts to explore, capture and share their research.
Numerous options exist for working with Jupyter Notebooks, including running a Jupyter Notebook instance locally or by using a Jupyter Notebook hosting service.
This talk will provide a quick tour of some of the more well known options available for running Jupyter Notebooks. It will then look at custom options for hosting Jupyter Notebooks yourself using public or private cloud infrastructure.
An in-depth look at how you can run Jupyter Notebooks in OpenShift will be presented. This will cover how you can directly deploy a Jupyter Notebook server image, as well as how you can use Source-to-Image (S2I) to create a custom application for your requirements by combining an existing Jupyter Notebook server image with your own notebooks, additional code and research data.
Specific use cases around Jupyter Notebooks which will be explored include individual use, team use within an organisation, and classroom environments for teaching. Other issues which will be covered include importing notebooks and data into an environment, and storing data using persistent volumes and other forms of centralised storage.
As an example of the possibilities of using Jupyter Notebooks with a cloud, it will be shown how you can easily use OpenShift to set up a distributed parallel computing cluster using ‘ipyparallel’ and use it in conjunction with a Jupyter Notebook.
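To make the 'ipyparallel' example concrete, here is a minimal sketch of what driving such a cluster from a notebook can look like. It assumes the engines are already running (for example, started for you on OpenShift) and that the default connection profile is available; it is an illustration, not the exact deployment shown in the talk.

```python
# Minimal ipyparallel sketch: connect to an already-running cluster of engines
# and run a function on every engine. The profile name "default" and the fact
# that engines are already up are assumptions.
import ipyparallel as ipp

rc = ipp.Client(profile="default")   # connect using the cluster's connection file
view = rc[:]                         # a DirectView over all available engines

def slow_square(x):
    import time
    time.sleep(1)
    return x * x

# map_sync distributes the calls across engines and blocks until results return
results = view.map_sync(slow_square, range(8))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```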
Lawrence Berkeley National Laboratory, Sep 2015 - Jupyter Talk
Scientific facilities are increasingly generating large data sets. Next-generation scientific productivity relies on user-friendly tools and efficient, effective and seamless access to resources and data. Traditional approaches to research and software development for science focus on the hardware and software of the machine and do not consider the user. In this talk, I will highlight a different approach to building software for scientific users by including user knowledge in the process. I will illustrate a few example projects where this has been used to date.
GitHub repository: https://github.com/Carreau/talks/tree/master/labtech-2015
IPython is an interactive Python shell that provides tools for interactive and parallel computing which are widely used in the scientific world. It can also benefit any other Python developer.
A quick overview of why to use and how to set up IPython notebooks for research - Adam Pah
A quick overview of why to use and how to set up IPython notebooks for research in the Amaral lab. An example notebook is available as a gist at:
http://nbviewer.ipython.org/gist/anonymous/f8e6d8985d2ea0e4bab1
OSDC 2016 - Continuous Integration in Data Centers - Further 3 Years later by ... - NETWAYS
I gave a talk titled "Continuous Integration in data centers" at OSDC in 2013, presenting ways to realize continuous integration/delivery with Jenkins and related tools. Three years later we have gained new tools in our continuous delivery pipeline, including Docker, Gerrit and Goss. Over the years we also had to deal with different problems caused by faster release cycles, a growing team and new projects. We therefore established code review in our pipeline, improved our test infrastructure and invested in our infrastructure automation. In this talk I will discuss the lessons we learned over the last years, demonstrate how a proper continuous delivery pipeline can improve your life, and show how open source tools like Jenkins, Docker and Gerrit can be leveraged for setting up such an environment.
A One-Stop Solution for Puppet and OpenStack - Puppet
Throughout the last year, we have been using and developing tools that allow us to have an IaaS where our data center is configured by Puppet and our virtualization and authentication needs are catered for by OpenStack. Red Hat's Foreman is our lifecycle management tool, which we configured to support both bare metal and OpenStack virtual machines. We use git to manage environments and hostgroup configurations, and we will tell you how we deal with its security implications and how to store Hieradata secrets. Switching from a homebrew toolchain to open source tools like Facter, Foreman and OpenStack has turned into many contributions to these projects. Nearly everyone at CERN has started to wear the devops hat, which brings new challenges in terms of development workflows and scalability.
Daniel Lobato Garcia
Software Engineer, CERN
Daniel Lobato is a developer who has worked in very different environments, from data centers and mainframes to startups. Nowadays he works in the Agile Infrastructure team at CERN, where the design and implementation of the new computing infrastructure is done. As for Puppet, he currently helps Red Hat develop Foreman, a lifecycle management tool for physical and virtual machines. One of his goals at CERN is to connect this tool to all the relevant parts of the infrastructure, which includes Puppet for configuration management, OpenStack for virtualization and authentication, PuppetDB and others. He is sure the source of all computer problems is between the chair and the keyboard.
This was given as a tutorial session at the KOSS Lab. Conference 2016.
link: https://kosscon.kr/program/tutorial#11
TensorFlow from Scratch
Zero Starting Life in Artificial Intelligence
TensorFlow is an open-source project released by Google. For those who are interested in machine learning but are not very familiar with IT, this tutorial starts from setting up a virtual environment, installs TensorFlow, and walks through example code. I hope it can serve, even in a small way, as a primer for people starting to study machine learning.
See https://docs.google.com/presentation/d/1qiHUXWhfyjId9OIOSqn1sceiAJs9kRUNLnNUl509HhI/pub?start=false&loop=false&delayms=5000 for a higher-quality original of this presentation, given at EclipseCon Europe 2016 in Ludwigsburg, Germany.
PyCon AU 2012 - Debugging Live Python Web Applications - Graham Dumpleton
Monitoring tools record the result of what happened to your web application when a problem arises, but for some classes of problems, monitoring systems are only a starting point. Sometimes it is necessary to take more intrusive steps to plan for the unexpected by embedding mechanisms that will allow you to interact with a live deployed web application and extract even more detailed information.
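As an illustration of what such an embedded mechanism might look like, the sketch below registers a signal handler that dumps the stack trace of every thread in a live Python process. This is a generic technique, not necessarily the specific tooling presented in the talk.

```python
# One commonly embedded mechanism of this kind: a signal handler that dumps the
# stack traces of all running threads when the process receives SIGUSR1.
import signal
import sys
import threading
import traceback

def dump_stacks(signum, frame):
    id_to_name = {t.ident: t.name for t in threading.enumerate()}
    for thread_id, stack in sys._current_frames().items():
        print(f"--- Thread {id_to_name.get(thread_id, '?')} ({thread_id}) ---",
              file=sys.stderr)
        traceback.print_stack(stack, file=sys.stderr)

# Register once at application start-up; later run `kill -USR1 <pid>` against
# the live process to get a snapshot of what every thread is doing.
signal.signal(signal.SIGUSR1, dump_stacks)
```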
Talk given at OSDC 2016 about Foreman and managing a lab. It shares feedback from our three years of experience with Foreman and emphasizes how Foreman, Puppet and libvirt cooperate.
Microservices architecture is a very powerful way to build scalable systems optimized for speed of change. To do this, we need to build independent, autonomous services which by definition tend to minimize dependencies on other systems. One of the tenets of microservices, and a way to minimize dependencies, is "a service should own its own database". Unfortunately this is a lot easier said than done. Why? Because: your data.
We’ve been dealing with data in information systems for 5 decades, so isn’t this a solved problem? Yes and no. A lot of the lessons learned are still very relevant. Traditionally, we application developers have accepted the practice of using relational databases and relying on all of their safety guarantees without question. But as we build services architectures that span more than one database (by design, as with microservices), things get harder. If data about a customer changes in one database, how do we reconcile that with other databases, especially where the data storage may be heterogeneous?
For developers focused on the traditional enterprise, not only do we have to build fast-changing systems surrounded by legacy systems, but the domains (finance, insurance, retail, etc.) are also incredibly complicated. Just copying what Netflix does for microservices may or may not be useful. So how do we develop and reason about the boundaries in our system to reduce complexity in the domain?
In this talk, we’ll explore these problems and see how Domain Driven Design helps grapple with the domain complexity. We’ll see how DDD concepts like Entities and Aggregates help reason about boundaries based on use cases and how transactions are affected. Once we can identify our transactional boundaries we can more carefully adjust our needs from the CAP theorem to scale out and achieve truly autonomous systems with strictly ordered eventual consistency. We’ll see how technologies like Apache Kafka, Apache Camel and Debezium.io can help build the backbone for these types of systems. We’ll even explore the details of a working example that brings all of this together.
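As a rough idea of what the Kafka/Debezium part of such a backbone can look like from a consuming service, here is a hedged Python sketch. The topic name, broker address and event layout follow common Debezium conventions but are assumptions, not details taken from the talk or its working example.

```python
# Hypothetical sketch: consuming Debezium change events from Kafka so another
# service can update its own local view of customer data.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",          # assumed Debezium topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    if event is None:          # tombstone record after a delete
        continue
    payload = event.get("payload", event)
    before, after = payload.get("before"), payload.get("after")
    # Apply the change to this service's own datastore (idempotently).
    print("customer changed:", before, "->", after)
```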
If you work with or at a Telco, Financial Institution or a Government entity, you probably already know about compliance and the various acronyms and headaches it can bring.
How can we make this less of a painful process?
Well, if you think about it: compliance is a set of rules that someone has given you to enforce and prove that they're being enforced. What is Puppet? A series of rules for systems that need to be enforced. So compliance is the perfect use-case for configuration management.
Knee deep in the undef - Tales from refactoring old Puppet codebases - Peter Souter
As Puppet pushes into its second decade of reign, there are several organisations out there that have been using Puppet for a long time. Sometimes even since the beginning!
With the EOL announcement of the Puppet 3.x release, we’ve had a number of customers approach us to help with their upgrade. Normally the upgrade itself is fairly straightforward; it’s the code base that gives the biggest challenge, especially one with over three years of organic growth.
So let’s spread the word about common anti-patterns and issues that can come back to bite you.
We’ll talk about how Hiera is both the best and worst thing to happen to Puppet, marvel at how people were happily running Puppet 0.2 in production, and see which hacky solutions that seemed good at the time will come back to bite you!
By the end of this, you’ll hopefully have learnt how to write your Puppet code defensively so that your code base stays healthy for the next decade!
Feedback from five years of experience using Foreman to manage different kinds of infrastructure. A story about open source. Given for the 7th birthday of The Foreman.
Driving DevOps for Oracle with the orawls Puppet Modules - Simon Haslam
Administrators these days have a rich choice of tools for automating the provisioning of their Oracle platforms - and one popular choice is Puppet. However the tool only provides a framework for scripting and changing the state of a server - on top of this you need a run-time configuration that uses the framework to install specific products, such as the Oracle software and domain configuration. This is where Edwin Biemond's "orawls" modules come in.
This presentation will discuss what the orawls modules do out of the box, how to use them and the configuration layer you need to create on top to tailor the installation to your own topology. It will allow you to use this open source software to build your own FMW environments fully automatically.
First presented by Simon Haslam (eProseed) and Arturo Viveros (SYSCO) at OUG Norway conference in March 2017.
Le "Continuous Delivery" est un sacré buzz word, et "Docker" encore plus, mais les blog que j'ai pu lire sur sujet ne proposent qu'un pipelines naif et minimaliste : compile, test, push docker image, et voilà.
En 2015 Jenkins adresse clairement plus que de l'Integration Continue, et avec le support récent du workflow plugin nous pouvons orchestrer avec un DSL des pipelines de grande complexité. L'integration avec Docker lui donne encore plus de puissance.
Pendant cette session, je vais construire un pipeline de CD pour montrer l'utilisation du workflow et sa flexibilité, ainsi que l'apport de Docker à votre boite à outils de Continuous Delivery, avant de marier les deux - mais chut, ne spoilons pas.
Par Nicolas De Loof (Dev- / Ops- / Support- / Coach- Engineer @CloudBees)
Toutes les vidéos des conférences seront disponibles sur Xebia.tv
Beyond the Operating System: Red Hat's Open Strategy for the Modern Enterprise - James Falkner
23 years ago, Red Hat began selling CDs of Linux. Today, Red Hat leads or participates in more than 500 open source projects in container infrastructure, middleware, cloud, storage, mobile and more. Many of these are available as enterprise-supported products and open platforms that power thousands of small and large businesses. In this presentation we'll take a closer look at several innovative projects beyond the OS and Red Hat's strategy for providing them as open source solutions to modern enterprise challenges.
Presented at @OpenSourceNorth June 2016, www.opensourcenorth.com
Enjoying the Journey from Puppet 3.x to Puppet 4.x (PuppetConf 2016) - Robert Nelson
Let's describe the process for upgrading from Puppet 3 to 4, list some common pitfalls and how to avoid them, and be sure to enjoy ourselves in the process!
PuppetConf 2015 - Puppet Reporting with Elasticsearch, Logstash and Kibana - pkill
Answer deep questions about the health of configuration runs on your nodes with the popular Elasticsearch, Logstash and Kibana stack. While many questions about resources, catalogs and runtimes can be answered by using the Puppet Dashboard or Puppet Enterprise, there are limitations. Putting the reports and run metrics into Elasticsearch gives users full text search and filtering. Also, you can perform metrics and aggregations over resource numbers or run times. Kibana graphs are also a great way to supplement the dashboards available in Puppet Enterprise.
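For a sense of what those aggregations can look like once the reports are in Elasticsearch, here is a small hedged sketch. The index name and field names are assumptions about how the reports were indexed, not a documented schema from the talk.

```python
# Hedged sketch: once Puppet reports are indexed in Elasticsearch, aggregations
# answer questions such as "what is the average run time per node?".
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")

query = {
    "size": 0,
    "aggs": {
        "by_node": {
            "terms": {"field": "host.keyword", "size": 10},
            "aggs": {"avg_runtime": {"avg": {"field": "metrics.time.total"}}},
        }
    },
}

response = es.search(index="puppet-reports", body=query)
for bucket in response["aggregations"]["by_node"]["buckets"]:
    print(bucket["key"], round(bucket["avg_runtime"]["value"], 2), "seconds")
```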
Massively Parallel Processing with Procedural Python by Ronert Obst, PyData Be... - PyData
The Python data ecosystem has grown beyond the confines of single machines to embrace scalability. Here we describe one of our approaches to scaling, which is already being used in production systems. The goal of in-database analytics is to bring the calculations to the data, reducing transport costs and I/O bottlenecks. Using PL/Python we can run parallel queries across terabytes of data using not only pure SQL but also familiar PyData packages such as scikit-learn and nltk. This approach can also be used with PL/R to make use of a wide variety of R packages. We look at examples on Postgres compatible systems such as the Greenplum Database and on Hadoop through Pivotal HAWQ. We will also introduce MADlib, Pivotal’s open source library for scalable in-database machine learning, which uses Python to glue SQL queries to low level C++ functions and is also usable through the PyMADlib package.
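A small sketch of the in-database idea described above: define a PL/Python function on a PostgreSQL-compatible system and call it from SQL, so the Python code runs next to the data. The connection string, table name and the trivial scoring function are placeholders, and the language name may be plpython3u rather than plpythonu on newer installations.

```python
# Illustrative sketch of in-database analytics with PL/Python.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=analytics user=gpadmin host=localhost")
cur = conn.cursor()

# The function body runs inside the database; requires the PL/Python extension.
cur.execute("""
CREATE OR REPLACE FUNCTION sentiment_score(text_in text)
RETURNS float8 AS $$
    # any Python available on the database hosts can be used here
    positive = {"good", "great", "excellent"}
    words = text_in.lower().split()
    return sum(w in positive for w in words) / float(len(words) or 1)
$$ LANGUAGE plpythonu;
""")

# The scoring then runs in parallel across segments as part of a normal query.
cur.execute("SELECT id, sentiment_score(review) FROM product_reviews LIMIT 5;")
print(cur.fetchall())
conn.commit()
```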
Data Science Amsterdam - Massively Parallel Processing with Procedural Languages - Ian Huston
The goal of in-database analytics is to bring the calculations to the data, reducing transport costs and I/O bottlenecks. With Procedural Languages such as PL/Python and PL/R data parallel queries can be run across terabytes of data using not only pure SQL but also familiar Python and R packages. The Pivotal Data Science team have used this technique to create fraud behaviour models for each individual user in a large corporate network, to understand interception rates at customs checkpoints by accelerating natural language processing of package descriptions and to reduce customer churn by building a sentiment model using customer call centre records.
http://www.meetup.com/Data-Science-Amsterdam/events/178974942/
Season 7 Episode 1 - Tools for Data Scientists - aspyker
Metaflow (Ville Tuulos)
Data scientists at Netflix are expected to develop and operate large machine learning workflows autonomously. However, we do not expect that all our scientists are deeply experienced with distributed systems and data engineering. Metaflow was created to make it delightfully easy to build and operate ML workflows in the cloud using idiomatic Python and off-the-shelf ML libraries, covering the whole lifecycle of an ML project from prototype to production.
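For readers unfamiliar with Metaflow, a minimal flow in its idiomatic-Python style looks roughly like the sketch below; the flow itself (a tiny fan-out over parameters) is an invented example, not one of Netflix's workflows.

```python
# A minimal Metaflow flow: each @step is a node in the workflow DAG, and
# `python flow.py run` executes it locally or, with configuration, in the cloud.
from metaflow import FlowSpec, step

class TrainFlow(FlowSpec):

    @step
    def start(self):
        self.alphas = [0.1, 0.5, 1.0]
        self.next(self.train, foreach="alphas")   # fan out one task per alpha

    @step
    def train(self):
        self.score = 1.0 - self.input              # placeholder "training"
        self.next(self.join)

    @step
    def join(self, inputs):
        self.best = max(i.score for i in inputs)
        self.next(self.end)

    @step
    def end(self):
        print("best score:", self.best)

if __name__ == "__main__":
    TrainFlow()
```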
Polynote (Jeremy Smith)
Polynote is a new notebook tool we created from scratch to address some of the pain points we've run into while using Scala in machine-learning notebooks at Netflix. It provides essential code editing features other tools lack, like interactive auto-complete and support for mixing multiple languages and sharing data between them within a single notebook, and it encourages reproducible notebooks with its immutable data model.
Papermill (Matthew Seal)
Nteract is an open source organization under which there are several libraries and applications that Netflix and many other companies and individuals contribute to. One of these libraries is Papermill, a library used to programmatically parameterize and execute Jupyter Notebooks. Papermill provides a CLI and Python interface that we'll explore during the session to see how it can be used and what value it adds. Using this pattern we'll also briefly talk about how we've integrated papermill at Netflix and how it interfaces with other Jupyter and nteract services.
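The Python interface mentioned above is small; a typical call looks like the sketch below. The notebook paths and parameter names are hypothetical.

```python
# Execute a notebook with injected parameters and save the executed copy.
# CLI equivalent: papermill in.ipynb out.ipynb -p report_date 2016-06-01
import papermill as pm

pm.execute_notebook(
    "templates/daily_report.ipynb",        # input notebook with a "parameters" cell
    "runs/daily_report_2016_06_01.ipynb",  # executed output notebook
    parameters={"report_date": "2016-06-01", "region": "us-east-1"},
)
```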
Puppet can be used effectively and at scale without running as root. In many organizations, particularly large ones, different teams are responsible for different pieces of the infrastructure. In my case, I am on a team responsible for installation, configuration, upkeep, and monitoring of an application, but we are denied root access. Despite this, we have a rich Puppet infrastructure that saves us time and reduces configuration drift. I will present our model for success in this kind of limited environment, including recipes for using Puppet as non-root and some encouraging words and ideas for those who want to implement Puppet but whose organization isn't ready yet.
Spencer Krum
Systems Admin, UTI Worldwide
Spencer is a Linux and application administrator with UTI Worldwide, a shipping and logistics firm. He lives and works in Portland. He has been using Linux and Puppet for years. Spencer is co-authoring (with William Van Hevelingen and Ben Kero) the second edition of Pro Puppet by James Turnbull and Jeff McCune, which should be available from Apress in alpha/beta E-Book in time for Puppet Conf '13. He enjoys hacking, tennis, StarCraft, and Hawaiian food.
What's New in Prometheus and Its Ecosystem - Julien Pivotto
Let's have a look at all the recent features and changes in the Prometheus server and the community. We will introduce the new features and see how you can integrate them in your workflows to get a better Prometheus experience.
Prometheus: What it is, what is new, what is coming - Julien Pivotto
Prometheus is a metrics-based monitoring and alerting system and also the project with the second longest tenure within the CNCF. As such you have probably heard about it by now. We will give you a short introduction to Prometheus, what it is and why it was such a big deal when it was initially released. In all those years since then, the project has only gained speed, which provides us with the opportunity to tell you about all the exciting new features that have just been released or are in the pipeline, including opportunities to contribute to the project and its wider ecosystem.
Talk at KubeCon 2021
Monitoring in a fast-changing world with Prometheus - Julien Pivotto
Prometheus is an open source monitoring project used to gather metrics.
It has many capabilities built in, such as service discovery, which make it very suitable for an automated environment.
This talk will give a brief introduction to Prometheus and its latest developments, and then give practical tips and examples about how you can use it in an automated world.
Graphs can represent many different things. Over the years I have learned how to display different situations in Grafana effectively. I will share how to visualize different kinds of situations and make them easy to read by using advanced features of Grafana.
HAProxy is often used to route ingress traffic, but we use it the other way around. We use it for egress. Our applications talk to the outside world through HAProxy. We get a lot of benefits from this unique approach: throttling, guaranteed response times, unified monitoring, and path rewriting. I will highlight how we use HAProxy at Inuits and how we achieve observability via Prometheus and Grafana.
Improved alerting with Prometheus and Alertmanager - Julien Pivotto
One of the reasons we collect metrics is to be able to alert on them. This presentation will introduce you to some concepts of PromQL, Prometheus and Alertmanager that greatly improve the quality and reliability of your alerts. This talk will cover different topics, including:
- Reducing flapping alerts
- Hysteresis
- "Time of the day" based alerting
- Computed thresholds with data history
This talk will introduce you to the Prometheus monitoring solution and how you can use it to monitor your CentOS servers, and the applications that run on top of them. It will provide tips about the setup and show some great, real-life examples.
A small demo involving OpenShift will also be given, to demonstrate how Prometheus can work with dynamic environments.
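To give a flavour of the application side, this is roughly what instrumenting a small Python service with the official client looks like so Prometheus can scrape it. The port and metric names are arbitrary examples, not taken from the talk or demo.

```python
# Hedged sketch: expose request count and latency metrics for Prometheus to scrape.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_duration_seconds", "Request duration in seconds")

def handle_request():
    with LATENCY.time():                 # observe how long the "work" takes
        time.sleep(random.uniform(0.01, 0.2))
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)   # exposes metrics at http://localhost:8000/metrics
    while True:
        handle_request()
```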
Automation is at the heart of modern infrastructure. Ansible is a great tool to automate your routine workflows and your infrastructure.
This talk will present the best of Ansible: how you can quickly get started and begin automating your infrastructure with it.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
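The toy sketch below illustrates the general idea of trimming bytes that do not change observable behaviour, so later mutations are spent on bytes that matter. It is deliberately simplistic (it approximates "behaviour" by the target's exit status) and is not the DIAR algorithm itself.

```python
# Toy illustration only, not DIAR: drop bytes from a seed whose removal does not
# change the target's exit status, producing a leaner seed for fuzzing.
import subprocess

def run_target(cmd, data):
    proc = subprocess.run(cmd, input=data, capture_output=True)
    return proc.returncode

def trim_uninteresting_bytes(cmd, seed):
    baseline = run_target(cmd, seed)
    trimmed = bytearray(seed)
    i = 0
    while i < len(trimmed):
        candidate = trimmed[:i] + trimmed[i + 1:]
        if run_target(cmd, bytes(candidate)) == baseline:
            trimmed = candidate            # byte i did not matter, drop it
        else:
            i += 1                         # byte i is interesting, keep it
    return bytes(trimmed)

if __name__ == "__main__":
    seed = open("seed.xml", "rb").read()                 # hypothetical seed file
    lean = trim_uninteresting_bytes(["xmllint", "--noout", "-"], seed)
    open("seed_lean.xml", "wb").write(lean)
```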
These are the slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover an overview of Test Manager along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. The constant focus on speed in releasing software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Enhancing Performance with Globus and the Science DMZ - Globus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl - ... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
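For readers who want a quick taste of that Python binding before the webinar, a minimal pypowsybl session looks roughly like the sketch below (using a bundled IEEE test network rather than the workshop's own example).

```python
# Load a bundled example network and run an AC power flow on it.
import pypowsybl as pp

network = pp.network.create_ieee14()          # bundled IEEE 14-bus test network
results = pp.loadflow.run_ac(network)         # run an AC load flow
print(results[0].status)                      # convergence status of the run

# Inspect bus voltages computed by the load flow as a pandas DataFrame
print(network.get_buses()[["v_mag", "v_angle"]].head())
```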
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent approach to using PHP frameworks and a move towards more flexible, future-proof PHP development.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!