I presented BNZ's Red Hat OpenShift container adoption journey with the OpenShift developers and operations team at the Wellington OpenShift meetup on 23 Nov 2017.
4 Success stories in 3 years - A Docker Production Journey - Yun Zhi Lin
Docker's 4th Birthday @Sydney Docker Meetup. It's time to celebrate the growing maturity of arguably the most disruptive technology of this decade.
I would like to take you on a journey across 4 companies I've had the privilege to work with, each from a different industry: proptech, fintech, foodtech and telco; and each with its own unique vision to change the world.
But they all share one thing in common: they all leveraged Docker to empower their engineers, bridge the gap between Dev and Ops, and ultimately get their products to clients faster.
Oracle Essbase in the Cloud: A Mercer Advisors Success Story - Perficient, Inc.
Mercer Advisors, a privately held wealth management firm with approximately $11 billion in assets under management, needed a more robust financial reporting solution. Its legacy solution relied on an Excel-based framework with multiple General Ledger systems providing current and historical data.
Mercer Advisors decided to implement Essbase Cloud, part of the Oracle Analytics Cloud (OAC) platform to provide a modern platform for financial reporting. Mercer Advisors partnered with Perficient to execute on its vision to reap the benefits of this solution.
Mercer Advisors' chief financial officer, Douglas Maxwell, discussed the OAC implementation, including lessons learned and how OAC can benefit organizations like yours.
Discussion included:
- Challenges with the legacy environment
- Excel-to-cloud migration approach
- Benefits realized
Prathap Prabhakaran, Document Controller with 9 years' experience - Prathap Prabhakaran
As a competent Document Controller, I have 19 years of experience (9 years of Gulf experience in the oil & gas field) in 100% Export Oriented Units and construction companies, in various positions as Document Controller, Management Information System In-charge and Database Administrator.
Lauren Technologies has been in the IT industry for more than 20 years, specializing in business applications and catering to customers' hardware and software requirements. From creating mobile applications to providing dashboards and reporting, we have been delivering value to our customers!
2-Speed IT powered by Microsoft Azure and Minecraft - Sriram Hariharan
In this session, Mike will show how a model reference architecture in Azure and Minecraft can be used by architects to visualize solutions that you want your teams to build.
I Love APIs 2015
Chris Munns, Amazon
@chrismunns
http://www.amazon.com/
As computing costs decreased and computing power grew, so did the complexity of the problems computers were asked to solve, and of the software itself. Enterprise applications quickly moved from monolithic applications to client-server to multi-tier and beyond, to the land of massively distributed architectures. We have arrived at the point where enterprise software is well beyond the capability of a single person, or even a reasonably practical group of people, to understand and control. Are microservices the answer? Join Chris Munns to learn how microservices are scaled at Amazon.
Enhancing Organizational Performance by Creating a Culture of Stewardship with LeanIX - Iver Band
Genesis Financial Solutions (GFS), a leading nonprime consumer credit platform, has created a culture of stewardship with LeanIX. Stewardship at GFS includes acquiring, creating, sustaining, enhancing, and retiring assets. Stewards are the primary decision-makers for their assigned assets. They collaborate with consumer lending and technology leaders to guide the evolution of business capabilities, applications, and IT components.
Presented to the SUST Alumni, mainly a group of professional developers.
May be beneficial for anyone who wants to step up from developer to solution architect.
Modernizing the Back-office to improve the sporting fan's experience with IBM - IBM
In this session learn how Maple Leaf Sports & Entertainment (MLSE) transformed its finance and procurement system to enable better decision-making processes for brand recognition, fan loyalty, and overall fan experience. MLSE is Canada’s leader in delivering top-quality sports and entertainment experiences. It owns several professional sports franchises and the venues its teams play and train in. It also provides fans with music and entertainment. Hear how IBM helped transition MLSE from manual processes and the Great Plains legacy system to best-in-class business processes in an on-time, on-budget implementation of Oracle ERP Cloud in seven months, to quickly lay down the financial backbone of its transformation journey.
RES and guest analysts from the 451 Group, William Fellows and Agatha Poon, uncover the power of automation in delivering apps and services to the enterprise through a more scalable and effective approach. We will also discuss future benefits of automation and self-service as customers map out their digital workspace and cloud journey.
Organizations everywhere struggle with one specific, real problem: how fast can we deliver value to our customers? That is a critical measure. The Purpose of an organization is the fundamental reason the organization exists. It is the most central component of core culture. The Purpose of an organization is not the answer to the question, "What do you do?", which typically focuses on products, services and customers. Rather, it should answer the question, "Why is the work you do important?"
Businesses exist to make a profit. But they also exist to make a difference. Through work, individuals can make a difference. They can be part of a meaningful legacy.
Overcoming Enterprise Disconnect With Value Streams and Flow Metrics - BMK Lakshminarayanan
As we step into the eleventh year of the term "DevOps", it is now mainstream in most organizations. It makes it into strategy and board meetings, CIO presentations, press releases and success parties.
As most organisations are just scratching the surface, there is a lot to take on in any transformation initiative, primarily focusing on speed and stability. Organizations have realized that hundreds of deployments do not matter; what matters is the "Value" to the customer.
In this presentation, BMK will share his study of the state of most enterprises, reflecting on how the pressure to deliver value faster falls mostly on "IT", while the rest of the organization's operations and processes, e.g. funding of initiatives, architecture and governance, remain slow.
Value Stream Management helps to overcome this "Enterprise Disconnect":
* Instead of all working to achieve our own goals, we work towards organizational goals.
* Instead of working against each other and pushing our own priorities and agendas, we all work together to achieve better customer and business outcomes.
* Instead of focusing on "project timeline & budget", we work towards delivering "Value".
* Instead of mapping our strategy to structure, we regroup and organize our teams around the "Customer" and "Value".
BMK's talk will help you explore and apply:
* Optimizing the "Flow" of organizational functions and processes.
* Making the work and its flow visible to everyone.
* Measuring what matters, the "Flow", with the help of Flow Metrics.
* Finally, making the right thing the easy thing.
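To make the Flow Metrics bullet concrete, here is a minimal sketch of two commonly cited flow metrics, flow time and throughput, computed from completed work items. The dates and the two-metric selection are illustrative assumptions, not material from the talk:

```python
from datetime import date

# Illustrative completed work items: (started, finished)
items = [
    (date(2021, 3, 1), date(2021, 3, 5)),
    (date(2021, 3, 2), date(2021, 3, 10)),
    (date(2021, 3, 4), date(2021, 3, 6)),
]

# Flow time: elapsed days from start to finish for each item
flow_times = [(done - start).days for start, done in items]
avg_flow_time = sum(flow_times) / len(flow_times)

# Throughput: number of items finished inside a measurement window
window = (date(2021, 3, 1), date(2021, 3, 7))
throughput = sum(1 for _, done in items if window[0] <= done <= window[1])

print(avg_flow_time, throughput)
```

Trending these two numbers per iteration is one simple way to "measure the Flow" rather than counting deployments.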
BMK is known for speaking from the heart rather than from authority. He shares his real-world experiences, learnings and his perspective as a practitioner with the wider community to amplify the "Community Learning Experience".
A DevOps Mario Developer Game Challenge with GRC (Governance, Risk & Compliance)
To stay compliant and secure, you need to go faster. You may wonder how that is possible. Governance, Risk & Security is generally a bottleneck for DevOps transformation in most enterprises. But wait, there is more to that story.
As we step into the eleventh year of the term "DevOps", it is now mainstream in most organizations. It makes it into strategy and board meetings, CIO presentations, press releases and success parties.
BMK shares his experience with Enterprise DevOps Adoption
Session Name: Our DevOps Journey is Incomplete Without Data
Every company is a "software company"; "software is eating the world". Along similar lines, I recently heard that every company, regardless of size, is a "data company". True: in one way or another, every organisation produces, consumes, analyses and reports on data, and based on that data makes decisions, promotes, buys, sells, acquires, expands, downsizes and so on. The DevOps momentum has seen rapid growth of new tools in the space of CI/CD, ARA (Application Release Automation) and frameworks for enabling application delivery at pace. Yet when it comes to Continuous Delivery and modern architecture patterns and practices like microservices, our delivery teams face challenges with data. I want to discuss some of the challenges I have gone through, along with ideas and some concrete pointers to help you further understand the "Data" problem in the DevOps space. If we can only go as fast as our weakest link, then data, data management, data architecture and the associated practices need our attention and love.
My talk on "Cloud Confusion, DevOps Dilemma, Microservice Madness" at DevOps India Summit 2019. I discuss the mad rush to adopt these new terms, and how organisations still fall short of their goals.
For a leading global technology company, I presented on the topic "Enterprise Journey to the Cloud", discussing the barriers and speed bumps enterprises face on their cloud journey.
How to Avoid Cloud Confusion, DevOps Dilemma, Microservice Madness - BMK Lakshminarayanan
On 10 Dec 2019, as part of Global SKILup Day organised by the DevOps Institute, I presented on the topic "How to Avoid Cloud Confusion, DevOps Dilemma, Microservice Madness" alongside DevOps leaders, practitioners, authors and speakers.
The presentation covers three major areas of today's trends: Cloud, DevOps and Microservices. As an invited speaker, I presented a practical "How to" #DevOps session.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
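One way to picture the "prompt the AI, then check its XML" workflow described above is a small sketch in which a model is asked to add markup and the result is verified for well-formedness before use. The `complete` function below is a hypothetical stand-in for any LLM API call (here stubbed so the sketch runs), and the `<para>`/`<emph>` vocabulary is invented for illustration:

```python
import xml.etree.ElementTree as ET

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # The stub simply wraps the input text so the sketch is runnable.
    text = prompt.rsplit("TEXT:", 1)[1].strip()
    return f"<para><emph>{text}</emph></para>"

def enrich_with_markup(plain_text: str) -> str:
    """Ask the model for XML markup, then verify well-formedness."""
    prompt = (
        "Wrap the following text in well-formed XML using only "
        "<para> and <emph> elements. Return only the XML.\n"
        f"TEXT: {plain_text}"
    )
    xml = complete(prompt)
    ET.fromstring(xml)  # raises ParseError if the model output is not well-formed
    return xml

print(enrich_with_markup("AI meets XML"))
```

The key point is the verification step: whatever the model returns is parsed before it enters the pipeline, so malformed output fails fast rather than propagating.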
Further emphasis will be placed on the role of AI in developing XSLT stylesheets and schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating, explaining or refactoring code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 5 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
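Outside FME, the trigger-and-action pattern described above can be pictured as a simple dispatch loop: triggers fire events, and registered actions run in response. The trigger name and handler below are illustrative inventions, not FME's actual mechanism:

```python
from typing import Callable

# Map trigger names to the action callbacks registered for them
actions: dict[str, list[Callable[[dict], None]]] = {}

def on(trigger: str):
    """Register an action to run when the named trigger fires."""
    def register(fn):
        actions.setdefault(trigger, []).append(fn)
        return fn
    return register

fired = []

@on("directory_watcher")
def run_workspace(event: dict) -> None:
    # In FME this would run a workspace; here we just record the event.
    fired.append(event["path"])

def fire(trigger: str, event: dict) -> None:
    for fn in actions.get(trigger, []):
        fn(event)

fire("directory_watcher", {"path": "/data/incoming/parcels.gml"})
print(fired)
```

Manual triggers, schedules and directory watchers all reduce to the same shape: something calls `fire`, and every action registered for that trigger runs.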
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced performance - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deployment Firewall and DBOM - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
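The deployment bill of materials mentioned above can be sketched as a simple record of what was deployed, where, and with what digest. The artifact names and fields below are illustrative assumptions, not OpsMx's actual DBOM format:

```python
import hashlib
import json

def dbom_entry(name: str, content: bytes, environment: str) -> dict:
    """One DBOM record: the deployed artifact, its digest, and its target."""
    return {
        "artifact": name,
        "sha256": hashlib.sha256(content).hexdigest(),
        "environment": environment,
    }

dbom = [
    dbom_entry("payments-service:1.4.2", b"image-bytes...", "production"),
    dbom_entry("payments-config.yaml", b"replicas: 3\n", "production"),
]
print(json.dumps(dbom, indent=2))
```

Keeping cryptographic digests in the record is what lets a deployment firewall later verify that what is running matches what was approved.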
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
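As a toy illustration of the idea behind these papers, retrieval can draw on a knowledge graph's neighbourhood rather than raw text chunks, grounding the LLM prompt in explicit facts. The graph contents and prompt format below are invented for illustration:

```python
# Toy knowledge graph: subject -> list of (relation, object) edges
graph = {
    "Changi Airport": [("located_in", "Singapore"), ("operated_by", "CAG")],
    "CAG": [("stands_for", "Changi Airport Group")],
}

def graph_context(entity: str, depth: int = 1) -> list[str]:
    """Collect facts around an entity, breadth-first, to ground an LLM prompt."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for rel, obj in graph.get(node, []):
                facts.append(f"{node} {rel} {obj}")
                nxt.append(obj)
        frontier = nxt
    return facts

prompt = "Answer using only these facts:\n" + "\n".join(graph_context("Changi Airport"))
print(prompt)
```

Because the context is a set of explicit triples rather than free text, the model's answer is easier to trace back to the graph, which is the grounding benefit GraphRAG argues for.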
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
2. 150+ Years in Banking for New Zealand
Bank of New Zealand is one of New Zealand's largest banks and has been operating continuously in the country since the first office was opened in Auckland in October 1861, followed shortly after by the first branch in Dunedin in December 1861.
8. New Challenges
• Enterprise – Integrating with governance and security
• Monitoring
• Services & Application logs
• Monolith to Microservices
• Building PaaS / Cloud native applications
9. Silos are #1 enemy of throughput and quality
-Damon Edwards
10. DevOps @ BNZ
• Self-service offering | Provision your own Dev and Test boxes
• Application performance monitoring | Run, Monitor, Manage, Learn
• Microservices and Containers | Immutable infrastructure
• Infrastructure automation and PaaS offerings | 3 days to 30 minutes
• Squads, Tribes | Co-located cross-functional teams
• Automated Deployments | 2 hours to 20 seconds
• DevSecOps | Shifting left
11. Resources
• https://blog.openshift.com/
• DevOpsDays NZ conference | follow us @devopsdaysnz
• Wellington Enterprise DevOps meetup and OpenShift meetup
• https://devopsnz.slack.com/
• https://12factor.net/
• Building Microservices by Sam Newman
• #devops
Editor's Notes
Introductions:
Introduce title; understand the need for speed; time to market;
Golf course to production; flow, pipeline, feedback loops
It is all DevOps; wake up everyone in the enterprise; having great capabilities in the organization in terms of platform
You do not have the time and resources within reach to do the best for a given context; speed compromises quality, quality compromises cost, cost compromises scope
Even with a containerized platform it is not easy; every organization has its own challenges.
Red Hat OpenShift Container Platform is the first and only hybrid cloud solution delivering enterprise-grade Kubernetes and Linux containers, based on Red Hat Enterprise Linux, the world’s leading enterprise Linux platform.
We are monitoring the standard endpoints and processes based on https://github.com/redhat-cop/openshift-playbooks/blob/master/playbooks/operationalizing/monitoring_guide.adoc
We have built some custom monitoring scripts to help monitor and alert on capacity and performance issues:
docker-registery-monitor, which does a pull and push to the Docker registry every 5 minutes
persistent-volume-monitor, which checks how many usable persistent volumes we have left
build-max-pod-monitor, which monitors our OpenShift builder nodes to make sure we are not hitting the max pods for the builder nodes
app-max-pod-monitor, which makes sure we are not hitting max pods on our compute nodes
docker-pool-monitor, which monitors the Docker pools on the OpenShift nodes to ensure we are not going to run out of disk for the Docker thin pools
cert monitor, a script that runs once every day to check that none of the certs are due to expire in the next month
We have standard OS monitoring (CPU, memory, swap, etc.) on all the nodes and masters.
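The cert monitor's check boils down to a date comparison against a warning window. A minimal sketch of that logic, assuming a one-month window (function and cert names are hypothetical, not BNZ's actual script):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the daily cert-expiry check: alert on any
# certificate that expires within the warning window (30 days here).
WARN_WINDOW = timedelta(days=30)

def certs_needing_alert(expiries, now=None):
    """Return cert names whose expiry falls inside the warning window.

    expiries: dict mapping cert name -> expiry datetime.
    """
    now = now or datetime.utcnow()
    return sorted(
        name for name, expiry in expiries.items()
        if expiry - now <= WARN_WINDOW
    )

if __name__ == "__main__":
    now = datetime(2017, 11, 1)
    expiries = {
        "registry.crt": datetime(2017, 11, 20),  # inside the window
        "router.crt": datetime(2018, 6, 1),      # safe
    }
    print(certs_needing_alert(expiries, now=now))  # ['registry.crt']
```

A real monitor would read the expiry dates off the certificate files (e.g. via openssl) before feeding them to a check like this.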
security
We use CloudForms, which scans all of the OpenShift servers using OpenSCAP and reports which container images have high-severity security CVEs. I think we need to do more in this space (integrating some scanning into the pipelines).
logging
Currently using Elasticsearch, Fluentd and Kibana (EFK), but we are working on migrating this to Splunk. We have found EFK to be not as reliable.
pipeline
The pipelines are expanding; new pipelines are being built every day, and having a well-documented pattern is crucial. We are using Jenkins with the OpenShift plugin and a Jenkinsfile, which allows us to use ephemeral Jenkins.
When the pipelines are working well, things are great. However, our current pattern is quite fiddly and complicated, and it is getting rather difficult to support.
Playbooks
What the OpenShift playbooks do:
In a nutshell
Ensures OpenShift projects are kept consistent with the source of truth (Git). This reduces the amount of human intervention and therefore human error at deployment time.
In detail
Executed before deploying to a given environment
Downloads the latest configuration from Git and:
Updates BuildConfig
Updates DeploymentConfig
Updates Service
Updates Route
Creates ImageStreams
Updates Secrets
Updates Configmaps
This is especially handy for a developer who needs to update a configuration property for their project. All they need to do is ensure that source control is updated with the new values and automation takes care of the rest.
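The sync step described above can be sketched as a small driver that applies every manifest checked out from Git to the target project, so the cluster converges on the source of truth before each deployment. This is an illustrative sketch only (file layout and function names are hypothetical, not BNZ's actual playbooks):

```python
import subprocess
from pathlib import Path

# Hypothetical sketch of the playbook step that keeps an OpenShift
# project in sync with Git: every manifest in the checkout (BuildConfig,
# DeploymentConfig, Service, Route, etc.) is applied to the namespace.

def sync_commands(manifest_dir, namespace):
    """Build the `oc apply` command for every manifest in the checkout."""
    manifests = sorted(Path(manifest_dir).glob("*.yaml"))
    return [
        ["oc", "apply", "-n", namespace, "-f", str(path)]
        for path in manifests
    ]

def sync(manifest_dir, namespace):
    # Run each apply; check=True fails the deployment on the first error.
    for cmd in sync_commands(manifest_dir, namespace):
        subprocess.run(cmd, check=True)
```

Because `oc apply` is declarative and idempotent, re-running the sync is safe: unchanged resources are left alone and only drifted ones are updated.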