Beyond Websites: Using Drupal for Digital Signs - Acquia
Drupal 8 can power experiences beyond the traditional web. As more data-rich APIs become available, Drupal can be used to accumulate data, identify a variety of devices in an Internet of Things network, and route data to the appropriate places.
Given Drupal's rich content management capabilities, the CMS can also enhance this data stream, making it that much more relevant based on location, language, or any other metadata stored in it. In this presentation we will demonstrate how to use Drupal 8 to power a real-time signage system and discuss the techniques to build your own!
What’s Covered:
Responsive Techniques to support different display sizes.
ADA rules around public signage. We’re not just talking WCAG/508 anymore!
How to rebroadcast data from other sources.
Data Delivery Methods: Push and Pull models.
Sizing and Scaling your network of Signs.
Fault tolerance on your Kiosk.
Why even use Drupal to power a sign?
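The fault-tolerance bullet above can be sketched in a few lines; `fetch_remote` is a hypothetical content feed, and falling back to the last cached payload is one simple strategy among many:

```python
class Sign:
    """Digital sign that degrades gracefully when its content feed is down."""

    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote  # callable returning the latest payload
        self.cache = None                 # last successfully fetched payload

    def current_content(self):
        try:
            self.cache = self.fetch_remote()
        except OSError:
            pass  # network down: keep showing the cached payload
        return self.cache if self.cache is not None else "Welcome!"

# Simulate a feed that works once, then goes down.
calls = {"n": 0}
def flaky_feed():
    calls["n"] += 1
    if calls["n"] > 1:
        raise OSError("feed unreachable")
    return "Gate B7 -> Boarding"

sign = Sign(flaky_feed)
print(sign.current_content())  # fresh content
print(sign.current_content())  # same content, served from cache despite the outage
```

The default "Welcome!" string stands in for whatever static fallback a real kiosk would render.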
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents.
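The schema translation SCT automates can be illustrated with a toy type mapping; the table below is invented for illustration and is not SCT's actual rule set:

```python
# Toy Oracle -> MySQL column-type mapping, illustrating the kind of
# translation a schema conversion tool automates. Illustrative only.
ORACLE_TO_MYSQL = {
    "VARCHAR2": "VARCHAR",
    "NUMBER":   "DECIMAL",
    "DATE":     "DATETIME",
    "CLOB":     "LONGTEXT",
}

def convert_column(name, oracle_type):
    mysql_type = ORACLE_TO_MYSQL.get(oracle_type)
    if mysql_type is None:
        # Real tools flag unmapped types for manual review.
        return (name, oracle_type, "NEEDS MANUAL CONVERSION")
    return (name, mysql_type, "ok")

print(convert_column("price", "NUMBER"))       # ('price', 'DECIMAL', 'ok')
print(convert_column("geom", "SDO_GEOMETRY"))  # flagged for manual review
```

The "needs manual conversion" path mirrors the webinar's point: automated conversion handles the bulk, and the remainder is surfaced for a developer to resolve.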
Migrating Databases to AWS for Business Critical Applications and Analytics - Amazon Web Services
Migrating business critical applications to a new environment can be difficult and expensive. The short duration of maintenance windows often dictates the use of costly tools to perform change data capture (CDC) from the source to target databases so that the switch over process happens as quickly as possible. Amazon Web Services recently introduced the Database Migration Service (DMS) that supports the migration of databases from on-premises to the cloud with CDC support. This session will explain how DMS provides a simple and cost effective way to migrate business critical applications to Amazon Web Services. It will also cover how DMS enables new workloads for analytics, dev/test and heterogeneous database migrations.
Migrating Your Databases to AWS – Tools and Services (Level 100) - Amazon Web Services
In this webinar, you will learn how the AWS Database Migration Service (DMS) and AWS Schema Conversion Tool (SCT) can help migrate your databases to AWS for homogeneous and heterogeneous migrations. We will also discuss new sources and targets, together with new features that make DMS and SCT a powerful combination for both your database migration and data replication requirements.
Speaker: Blair Layton, APAC Business Development, Database, AWS APAC
Forget the gap between Dev and Ops - the gap between Devs and DBAs is a chasm. Here are some observations from the field about the causes of the rift and some ideas about how to close the gap (and even whether the gap is worth closing). Oh, and I'm writing a book about it.
Data Con LA 2020
Description
Coming from a strong belief in data democratization, I believe that for any team to be successful collaborators, it has to be data-centric, and data should be accessible to all.
*To ensure that your team, whether software-engineering-centric or not, has maximum efficiency, data should be visible and the data lake accessible.
*Form a database for analytics summaries; discuss the different technologies (SQL, NoSQL), cost of deployment, need, and team-driven structure. Build an API for this database for external/inter-team crosstalk.
*Build an analytics and visualization layer on top of it (Flask/Django/Node, etc.) to give the team high visibility into their analysis and ensure a faster turnaround of data.
*Discuss an easy way of enabling the team to run code, whether local or cloud; JupyterHub is a great way of doing so. Cover the tremendous value added and the potential it enables.
*Cover the common tools used for version control, CI/CD, coding technologies, etc.
*Finally, summarize the value of combining all these tools and technologies to ensure maximum efficiency.
Speaker
Nawar Khabbaz, Rivian, Data Engineer
Postgres, the leading open source relational database, is positioned as the centerpiece of a pivot from traditional architectures to a microservices-based approach in full support of a DevOps motion.
Presented by Marc Linster, Senior Vice President of Product Development at EnterpriseDB, this session explores how Postgres meets the key requirements for DevOps. Linster explains how Postgres is developer-friendly, supporting the process with a versatile data model using JSONB and integrating other data sources using Foreign Data Wrappers, and how Postgres supports rapid deployment in the cloud and on premises.
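The JSONB model highlighted here hinges on containment queries (Postgres's `@>` operator). A simplified pure-Python emulation of its semantics, for intuition only, not the Postgres implementation:

```python
def jsonb_contains(doc, pattern):
    """Simplified emulation of Postgres's JSONB @> containment operator."""
    if isinstance(pattern, dict):
        return (isinstance(doc, dict) and
                all(k in doc and jsonb_contains(doc[k], v)
                    for k, v in pattern.items()))
    if isinstance(pattern, list):
        # Every pattern element must be contained in some doc element.
        return (isinstance(doc, list) and
                all(any(jsonb_contains(d, p) for d in doc) for p in pattern))
    return doc == pattern  # scalars compare by equality

row = {"name": "widget", "tags": ["sale", "new"], "stock": {"qty": 3}}
print(jsonb_contains(row, {"tags": ["sale"]}))     # True
print(jsonb_contains(row, {"stock": {"qty": 4}}))  # False
```

In SQL this corresponds to a query like `SELECT * FROM products WHERE data @> '{"tags": ["sale"]}'`, which Postgres can serve from a GIN index.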
PartnerSkillUp_Enable a Streaming CDC Solution - Timothy Spann
PartnerSkillUp_Enable a Streaming CDC Solution
Tim Spann
Principal Developer Advocate in Data In Motion for Cloudera, Global
https://attend.cloudera.com/skillupseriesseptember14
Streaming Change Data Capture (CDC) Two Unique Ways
In this session, learn how to use Debezium with Flink, Kafka, and NiFi for Change Data Capture using two different mechanisms: Kafka Connect and Flink SQL.
With the virtual nature of today's world, streaming data is more critical than ever. Join Cloudera Chief Data-In-Motion Principal, Tim Spann, and Partner Solution Engineer, Salvador Alamazan as they look closely at key CDC use cases, discuss why Debezium is the best option for handling CDC and use examples to show you how to demonstrate value.
This is a must-attend experience!
Zoom Webinar
September 14, 2023
10:00am–11:00am EDT
FLaNK Stack
Apache NiFi
Apache Flink
Apache Kafka
Kafka Connect
Flink SQL
Cloudera DataFlow
Cloudera SQL Stream Builder
Cloudera Streams Messaging Manager
Debezium
PostgreSQL
IBM DB2
Oracle DB
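The Debezium events flowing through the stack listed above share a documented envelope (`op`, `before`, `after`, `source`). A minimal sketch of replaying one such event into an in-memory table; the field values are invented:

```python
import json

# A Debezium-style change event (shape per Debezium's documented envelope;
# the database, table, and values are invented for illustration).
event = json.loads("""{
  "payload": {
    "op": "u",
    "before": {"id": 42, "email": "old@example.com"},
    "after":  {"id": 42, "email": "new@example.com"},
    "source": {"db": "inventory", "table": "customers"}
  }
}""")

def apply_change(state, payload):
    """Replay one change event into an in-memory table keyed by primary key."""
    if payload["op"] in ("c", "u", "r"):   # create, update, snapshot read
        row = payload["after"]
        state[row["id"]] = row
    elif payload["op"] == "d":             # delete
        state.pop(payload["before"]["id"], None)
    return state

table = {}
apply_change(table, event["payload"])
print(table[42]["email"])  # new@example.com
```

In the session itself this replay step is what Kafka Connect sinks or Flink SQL `INSERT INTO ... SELECT` statements perform at scale.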
Karen's Favourite Features of SQL Server 2016 - Karen Lopez
Slides from a one-hour webinar on Karen Lopez's favorite features from a database designer's point of view. Topics include Always Encrypted, Data Masking, Row Level Security, Foreign Keys, JSON and more.
Notice an error? Let me know. I welcome this sort of feedback.
Reliable Data Ingestion in Big Data / IoT - Guido Schmutz
Many Big Data and IoT use cases are based on combining data from multiple sources and making it available on a Big Data platform for analysis. The data sources are often very heterogeneous, from simple files and databases to high-volume event streams from sensors (IoT devices). It's important to retrieve this data in a secure and reliable manner and integrate it with the Big Data platform so that it is available for analysis in real time (stream processing) as well as in batch (typical Big Data processing). In the past few years, new tools have emerged that are especially capable of handling this process of integrating data from outside, often called data ingestion. From the outside they look very similar to traditional Enterprise Service Bus infrastructures, which larger organizations often use to handle message-driven and service-oriented systems. But there are important differences: they are typically easier to scale horizontally, offer a more distributed setup, can handle high volumes of data/messages, provide very detailed monitoring at the message level, and integrate very well with the Hadoop ecosystem. This session will present and compare Apache Flume, Apache NiFi, StreamSets and the Kafka ecosystem, and show how they handle data ingestion in a Big Data solution architecture.
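The reliability concern all four tools address reduces to buffering and retrying until delivery succeeds. A toy at-least-once sketch (not any of the tools' actual APIs; the sink callable is a stand-in for a downstream system):

```python
class ReliableIngest:
    """Toy at-least-once ingestion: buffer records, retry failed deliveries."""

    def __init__(self, sink, max_retries=3):
        self.sink = sink              # callable delivering a record downstream
        self.max_retries = max_retries
        self.buffer = []

    def ingest(self, record):
        self.buffer.append(record)

    def flush(self):
        delivered, remaining = [], []
        for record in self.buffer:
            for _attempt in range(self.max_retries):
                try:
                    self.sink(record)
                    delivered.append(record)
                    break
                except OSError:
                    continue          # transient failure: retry
            else:
                remaining.append(record)  # give up for now, keep buffered
        self.buffer = remaining
        return delivered

# Sink that fails on its first call, then succeeds (a transient outage).
state = {"calls": 0}
def flaky_sink(record):
    state["calls"] += 1
    if state["calls"] == 1:
        raise OSError("sink unavailable")

ing = ReliableIngest(flaky_sink)
ing.ingest({"sensor": "s1", "value": 20.5})
print(ing.flush())   # delivered on the second attempt
print(ing.buffer)    # []
```

Real ingestion tools add durable (disk-backed) buffers, backpressure, and per-message provenance on top of this basic loop.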
Cassandra & Puppet: scaling data at $15 per month - daveconnors
Constant Contact shares lessons learned from a DevOps approach to implementing Cassandra to manage social media data for over 400k small business customers. Puppet is the critical piece in our tool chain. The single most important factor was the willingness of Development and Operations to stretch beyond traditional roles and responsibilities.
https://devoxx.be/talk/?id=52363
CQRS, event streaming, event sourcing, log management, Kafka, RabbitMQ... The whole ecosystem is now working on event management, event sourcing, and CQRS. The Kafka trend is growing, as a kind of modern-style ESB, and this trend is allowing new software to emerge. One of the new event log, streaming, and storage systems is Apache Pulsar, a great Apache project built on ZooKeeper and BookKeeper, originally developed by the Yahoo! team.
This talk will help you understand the architecture and its strong points and differences, and compare it with SQS, Kafka, RabbitMQ, IronMQ, and Redis. There will be examples using Java code.
The two speakers come from two different companies, both using Pulsar in production.
Traversing hype-driven development to make great technical choices and make ... - Quentin Adam
In this era of industrial change, we all know that software is eating the world, and the world is small, or at least not so big. So how do you manage to make great technical choices in an era where giants apply "shame on us" marketing? How do we keep the best developers in our organisation when hiring out there is a furious competition? More importantly, how do we make sure the people we work with are both happy and productive? Beyond marketing, we will try to figure out how we compete and create value for ourselves and our users.
Remove centralization on Authorization - API Days Paris 2018 (announcement fo... - Quentin Adam
Talk with @gcouprie.
The first time we talked about Biscuit.
Authentication is one of the main pain points in distributed and microservices systems. We want it to be scalable and to work on all nodes without too much coupling. We want it to be safe and decentralized.
That space has seen some exciting work recently, with people deploying systems based on JWT or macaroons, but those come with shortcomings as well.
We will show you how authentication systems are built, what to watch out for, how current solutions are integrated, and where we can go from there.
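The JWT-style systems mentioned above rest on one primitive: an HMAC over the claims, which any holder of the shared key can verify. A minimal sketch of sign/verify (not the full JWT spec; the key and claims are made up):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # shared key; illustrative only

def sign(claims):
    """Serialize claims and append an HMAC-SHA256 tag."""
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    mac = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + mac

def verify(token):
    """Return the claims if the tag matches, else raise ValueError."""
    body, mac = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"sub": "alice", "role": "admin"})
print(verify(token)["sub"])  # alice

# Any tampering with the token invalidates the tag:
tampered = token[:-1] + ("0" if token[-1] != "0" else "1")
try:
    verify(tampered)
except ValueError as e:
    print(e)  # bad signature
```

The shortcoming the talk targets is visible here: every verifier needs the same secret, which is exactly the coupling that macaroon- and Biscuit-style designs try to remove with attenuable, offline-verifiable tokens.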
PostgreSQL is the new NoSQL - at Devoxx 2018 - Quentin Adam
Have you seen the latest updates to traditional RDBMSs lately? It's insane. They are all catching up and won't be left out. While all the NoSQL stores are adding SQL, all the RDBMSs are adding top-notch JSON support. And it does not stop there.
The latest PostgreSQL versions have added new scalability features like table partitioning, query parallelism, a pub/sub framework, and a new quorum system for data sync. They have also improved their window functions for better time-series queryability.
And as it happens, we are using some of these new features at Clever Cloud. In this talk I will showcase some of them to try to convince you that PostgreSQL is the new NoSQL.
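The improved window functions mentioned above compute per-row aggregates over a sliding frame. What a query like `AVG(v) OVER (ORDER BY t ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)` returns can be emulated, for intuition, as:

```python
def moving_avg(values, preceding=2):
    """Emulate AVG(v) OVER (ORDER BY ... ROWS BETWEEN n PRECEDING AND CURRENT ROW).

    Assumes `values` is already ordered by the ORDER BY key.
    """
    out = []
    for i in range(len(values)):
        frame = values[max(0, i - preceding): i + 1]  # current row + up to n before it
        out.append(sum(frame) / len(frame))
    return out

# Time-series smoothing, the kind of query the talk highlights:
print(moving_avg([10, 20, 30, 40]))  # [10.0, 15.0, 20.0, 30.0]
```

In Postgres the same result comes back as one extra column per row, with the planner doing a single ordered pass instead of this quadratic slicing.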
The talk is recorded here: https://www.youtube.com/watch?v=t8-BQjWJFKw
https://dvbe18.confinabox.com/talk/BLA-3308/PostgreSQL_is_the_new_NoSQL
Monitoring the unknown, 1000*100 series a day - talk with @clementd at #devo... - Quentin Adam
Slides created on Google Slides: https://docs.google.com/presentation/d/1pZvS5BEFfXceS3xXIePKkeAx-aZpxhloNInaIHD5eTw/edit?usp=sharing
How do you monitor what you don't know? One of the technical challenges at Clever Cloud, apart from scalability, is automatically monitoring all of our customers' tech stacks without knowing anything about them. Our first goal when rebuilding our monitoring platform was to support our immutable-infrastructure pattern, which generates a quantity of ephemeral hosts every minute. The traditional approach is to focus on VMs and hosts, not applications.
We had to change paradigms: take an auto-discovery approach to the metrics to monitor, and let third-party code publish its own metrics. This talk describes the path that led us to build Clever Cloud Metrics, based on Warp10 (itself based on Kafka/Hadoop/Storm), to improve working conditions for our users and the stability of our applications.
How financial controllers fucked up my IT - Lean Kanban France 2017 - Quentin Adam
http://2017.leankanban.fr/sessions/comment-les-controleurs-de-gestion-ont-fuck-up-mon-it/
Speaker’s pitch
This talk is about how financially minded company management led to splitting IT management into several business units, each with its own goals and management.
This split creates misalignment and conflicts between teams that were supposed to work together.
This keynote is a toolbox designed to help you bring about a proper implementation of DevOps and make people work together on a common goal: efficient automation and use of human brain power, geared towards making IT an asset instead of a cost center.
A word from the organizers
The pitch is in English, but the talk itself will be in French. It's about tech and budget, the kind of talk we love. It's better if you know what DevOps is, but it's not required.
Monitoring the unknown, 1000*100 series a day - Big Data Vilnius 2017 - Quentin Adam
How to monitor unknown third party code? One of the hardest challenges we face running Clever Cloud, apart from the impressive scale of hundreds of new applications per week, is the monitoring of unknown tech stacks. The first goal of rebuilding the monitoring platform was to accommodate the immutable infrastructure pattern that generates lots of ephemeral hosts every minute. The traditional approach is to focus on VMs or hosts, not applications. We needed to shift this into an approach of auto-discovery of metrics to monitor, allowing third party code to publish new items. This talk explains our journey in building the Clever Cloud Metrics stack, heavily based on Warp10 (Kafka/Hadoop/Storm based), to deliver developer efficiency and trustworthiness for our clients' applications.
Problems you’ll face in the Microservices World: Configuration, Authenticatio... - Quentin Adam
Okay, microservices are cool. But, like every trendy new buzzword, it's not a silver bullet, and there are several problems to manage. One is authentication: distributed authentication is hard, and there are many ways to achieve it. Configuration is the second issue to manage when dealing with a distributed micro-application strategy. This talk is a concrete experience report on building a microservice strategy and the problems we had to deal with along the way.
MONITORING THE UNKNOWN, 1000*100 SERIES A DAY - DEVOXX MOROCCO 2017 - Quentin Adam
How to monitor unknown third party code? One of the hardest challenges we face running Clever Cloud, apart from the impressive scale of hundreds of new applications per week, is the monitoring of unknown tech stacks. The first goal of rebuilding the monitoring platform was to accommodate the immutable infrastructure pattern that generates lots of ephemeral hosts every minute. The traditional approach is to focus on VMs or hosts, not applications. We needed to shift this into an approach of auto-discovery of metrics to monitor, allowing third party code to publish new items. This talk explains our journey in building the Clever Cloud Metrics stack, heavily based on Warp10 (Kafka/Hadoop/Storm based), to deliver developer efficiency and trustworthiness for our clients' applications.
Understand immutable infrastructure: what? Why? How? - Meta-Meetup DEVOPS NIGHT - Quentin Adam
Why is everybody talking about immutability and immutable infrastructure? The whole IT-automation ecosystem needs to rely on append-only management of servers and drop the historical, mutate-in-place approach. This talk explains what immutable infrastructure is, how to build it, and how to manage data in this infrastructure pattern. It covers patterns for using it in both the container and virtual-machine worlds.
What is systemd? Why use it? How does it work? - breizhcamp - Quentin Adam
After the great init-versus-systemd war, it is clear that systemd has won. Why? What are the benefits? Is it difficult to write a systemd configuration file? How does it work? How do you write a unit file? How do you manage CRON-style jobs with it?
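A minimal unit file of the kind such a talk walks through might look like this (the service name and paths are illustrative, not from the talk):

```ini
# /etc/systemd/system/myapp.service -- illustrative example
[Unit]
Description=My example application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --port 8080
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp`; recurring CRON-style jobs are handled by a matching `.timer` unit using an `OnCalendar=` schedule instead of a crontab line.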
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
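One cheap guardrail implied by this workflow, never accepting AI-generated markup without a well-formedness check, can be sketched as follows (the example documents are invented):

```python
import xml.etree.ElementTree as ET

def accept_markup(xml_text):
    """Accept AI-generated XML only if it parses as well-formed."""
    try:
        root = ET.fromstring(xml_text)
        return ("ok", root.tag)
    except ET.ParseError as e:
        return ("rejected", str(e))

good = "<article><title>AI and XML</title></article>"
bad  = "<article><title>AI and XML</article>"   # mismatched closing tag
print(accept_markup(good))    # ('ok', 'article')
print(accept_markup(bad)[0])  # rejected
```

A production pipeline would go further, as the abstract discusses, validating the accepted document against an XSD or Schematron schema rather than stopping at well-formedness.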
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides from Nordic Testing Days, 6 June 2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 5 - DianaGray10
Welcome to part 5 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Full-RAG: A modern architecture for hyper-personalization - Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
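The "data enrichment with reranking" step can be illustrated with a toy scorer that blends retrieval similarity with a real-time freshness signal; the weights, field names, and data are invented for illustration:

```python
def rerank(candidates, context, w_sim=0.7, w_recency=0.3):
    """Toy reranker: blend vector-similarity score with a freshness signal."""
    def score(c):
        # Freshness decays with age relative to the request time.
        recency = 1.0 / (1.0 + context["now"] - c["updated_at"])
        return w_sim * c["similarity"] + w_recency * recency
    return sorted(candidates, key=score, reverse=True)

items = [
    {"id": "a", "similarity": 0.90, "updated_at": 0},    # relevant but stale
    {"id": "b", "similarity": 0.80, "updated_at": 100},  # slightly less relevant, fresh
]
ranked = rerank(items, context={"now": 100})
print([c["id"] for c in ranked])  # ['b', 'a']
```

The point of the pattern is visible even in this sketch: injecting real-time context into the ranking can overturn a pure-similarity ordering.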
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We also held a lovely workshop with the participants, exploring different ways to think about quality and testing in the various parts of the DevOps infinity loop.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating the uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW) 2022.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.