Towards an Infrastructure for Enabling Systematic Development and Research of... by Rafael Ferreira da Silva
Presentation held at the 17th IEEE eScience Conference
Scientific workflows have been used almost universally across scientific domains, and have underpinned some of the most significant discoveries of the past several decades. Many of these workflows have high computational, storage, and/or communication demands, and thus must execute on a wide range of large-scale platforms, from large clouds to upcoming exascale high-performance computing (HPC) platforms. These executions must be managed using some software infrastructure. Due to the popularity of workflows, workflow management systems (WMSs) have been developed to provide abstractions for creating and executing workflows conveniently, efficiently, and portably. While these efforts are all worthwhile, there are now hundreds of independent WMSs, many of which are moribund. As a result, the WMS landscape is segmented and presents significant barriers to entry due to the hundreds of seemingly comparable, yet incompatible, systems that exist. Consequently, many teams, small and large, still elect to build their own custom workflow solution rather than adopt, or build upon, existing WMSs. This current state of the WMS landscape negatively impacts workflow users, developers, and researchers. In this talk, I will provide a view of the state of the art and some of my previous research and technical contributions, and identify crucial research challenges in the workflow community.
A Semantic-Based Approach to Attain Reproducibility of Computational Environm... by Idafen Santana Pérez
Slides from our presentation at the 1st International Workshop on Reproducibility in Parallel Computing (REPPAR'14) in conjunction with Euro-Par 2014 (August 25-29)
Presentation held at the USC Information Sciences Institute on July 27, 2016
Abstract - Understanding user behavior is a crucial factor when evaluating scheduling and allocation performance in high performance computing environments. Since workload traces implicitly include interaction processes, they are often used for conducting performance evaluation. Nevertheless, realistic performance evaluations need to take into account the dynamic user reaction to different levels of system performance, as recorded data reflects only one instantiation of an interactive process. To further understand this process, we perform a comprehensive analysis of user behavior in recorded data in the form of delays in subsequent job submission behavior. To that end, we characterize a workload trace from the Mira supercomputer at ALCF (Argonne Leadership Computing Facility) covering one year of job submissions. We perform an in-depth analysis of correlations between job characteristics, system performance metrics, and subsequent user behavior. Analysis results show that user behavior is significantly influenced by long waiting times, and that complex jobs (in terms of number of nodes and CPU hours) lead to longer delays in subsequent job submissions. We also find that a notification mechanism informing users upon job completion does not influence subsequent submission behavior. Furthermore, we extend our results from HPC job submission to HTC job submission. We consider HTC job submission behavior in terms of parallel batch-wise submissions, as well as delays and pauses in job submission. We compare differences in batch characteristics by classifying batches using a popular model. Our findings show that modeling HTC job submission behavior requires knowledge of the underlying bags of tasks, which is often unavailable. Additionally, we find evidence that subsequent job submission behavior is not influenced by the different complexities and requirements of HPC and HTC jobs.
Reproducibility - The myths and truths of pipeline bioinformatics by Simon Cockell
In a talk for the Newcastle Bioinformatics Special Interest Group (http://bsu.ncl.ac.uk/fms-bioinformatics) I explored the topic of reproducibility, looking at the pros and cons of pipelining analyses as well as some tools for achieving this. I also considered additional tools for enabling reproducible bioinformatics, and looked at the 'executable paper' and whether it represents the future of bioinformatics publishing.
ECL-Watch: A Big Data Application Performance Tuning Tool in the HPCC Systems... by HPCC Systems
Lily Xu, PhD student at Clemson University, presents her paper, ECL-Watch: A Big Data Application Performance Tuning Tool in the HPCC Systems Platform, at the Workshop on Benchmarking, Performance Tuning and Optimization for Big Data Applications (BPOD), part of the 2017 IEEE International Conference on Big Data.
Scientific Workflows: what do we have, what do we miss? by Paolo Romano
Presentation given on June 22, 2013, in Nice, at the CIBB 2013 International Workshop.
In collaboration with Paolo Missier, University of Newcastle upon Tyne, UK
On 18th September, our CEO, István Ráth, joined by Enrique Krajmalnik from Zuken, presented at the 2021 INCOSE Western States Regional Conference. Their talk concentrated on the current challenges of systems engineering, promoting a much-needed paradigm shift and a novel, holistic approach.
The conceptual framework underpinning this novel concept is the combination of light-weight bridge tools, such as the E3.GENESYS Connector from Zuken, and digital thread analytics powered by our flagship product, the IncQuery Suite. This framework provides discipline-specific views of multi-domain engineering data, and powerful structural and numerical analysis to ensure completeness, correctness and consistency throughout the entire design process.
This document covers guidelines for achieving multitenancy in a data lake environment. It describes the design and implementation guidelines necessary for on-premises as well as cloud-based multitenant data lakes, and presents the reference architecture for both deployment options.
Complex software-intensive systems are often described as systems of systems (SoS) due to their heterogeneous architectural elements. As SoS behavior is often only understandable during operation, runtime monitoring is needed to detect deviations from requirements. Today, while diverse monitoring approaches exist, most do not provide what is needed to monitor SoS, e.g., support for dynamically defining and deploying diverse checks across multiple systems. In this talk, I will describe our experiences of developing, applying, and evolving an approach for monitoring an SoS in the domain of industrial automation software that is based on a domain-specific language (DSL). I will first describe our initial approach to dynamically define and check constraints in SoS at runtime, including a demo of our monitoring tool REMINDS, and then motivate and describe its evolution based on requirements elicited in an industry collaboration project. I will furthermore describe solutions we have developed to support the evolution of our approach, i.e., a code generation approach and a framework to automate testing the DSL after changes. We evaluated the expressiveness and scalability of our new DSL-based approach using an industrial SoS. At the end of the talk, I will also present general lessons we learned and give an overview of other projects I am currently involved in, both in software monitoring and in other areas such as software product lines.
COTMAC is the sole distributor of EPLAN products in India. We offer a variety of E-CAD solutions.
For more details visit http://www.cotmac.io/eplan-software-distributors/
From multi-cloud and microservices to 12-Factor Apps, cloud-native applications are designed to be fast, tested, and fail-safe, with continuous deployment to production. Simple policy declaration and enforcement across your stack allow you to move with greater speed, safety, and scale.
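As an illustration of the 12-Factor "config" principle mentioned above, here is a minimal sketch (all names and defaults are hypothetical) of reading configuration from environment variables instead of baking it into the code:

```python
import os

def load_config():
    """Read service configuration from the environment (12-Factor, factor III).

    The defaults below are illustrative; a real deployment would inject
    these values via the platform (e.g., Kubernetes secrets or CI/CD vars).
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(os.environ.get("PORT", "8080")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

config = load_config()
```

Keeping config out of the codebase is what lets the same build artifact run unchanged across environments.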
Build your cloud-native applications with Oracle Cloud. See Terraform, Docker, Oracle ATP, and Kubernetes at work to deploy our Python microservice. The entire thing will soon be available on GitHub.
AWS Initiate Berlin - Cloud Transformation and the Human Factor by Amazon Web Services
A successful cloud transformation rests on three pillars: process, technology, and people. Far too often, organizations focus primarily on implementing the technology and neglect the human aspect and the adaptation of processes. This talk discusses the best methods for equipping customers with what they need to master this challenge. You will learn about the roles and responsibilities that matter during the transition to the cloud and beyond, so you can assess where your organization still needs to build skills and competencies. Set up effective training models and use them to shape an effective DevOps culture.
Speaker: Ralph Winzinger, Solutions Architect, AWS
7 habits of highly effective private cloud architects by HARMAN Services
Cloud computing provides economies of scale. Many startups go with public cloud computing, which lets them start with no upfront infrastructure costs and grow as the business grows.
For enterprises, however, public cloud computing is not a silver bullet. Security concerns prevent them from utilizing the benefits of the public cloud. That does not mean enterprise applications cannot get the advantages of the cloud: the private cloud comes to the rescue. A private cloud is more than just virtualization.
This paper discusses the habits of successful private cloud architects.
Using cloud native development to achieve digital transformation by Uni Systems S.M.S.A.
Avishay Sebban, Partner Senior Solution Architect at Red Hat IGC, gives a comprehensive overview of the Red Hat Ansible platform, its full automation capabilities, and its smooth deployment to the cloud. From the Cloud Migration Through Automation: Next Level Flexibility virtual event, hosted on September 30, 2020.
There are options beyond a straightforward lift and shift into Infrastructure as a Service. This session is about learning how Azure helps modernize applications faster, utilising modern technologies like PaaS, containers, and serverless.
The ability to deliver software is no longer a differentiator. In fact, it is a basic requirement for survival. Companies that embrace cloud native patterns of software delivery will survive; companies that don’t - will not.
In this webinar, we will:
- Look at the common patterns that distinguish cloud native companies and the architectures that they employ.
- Discover that an opinionated platform, one that stretches from the infrastructure all the way to the application framework, rather than ad-hoc automation, is an essential component to an enterprise's cloud native journey.
- Show that the combination of Pivotal Cloud Foundry and Spring is the complete cloud native platform.
Speaker:
Faiz Parkar
DIRECTOR OF PRODUCT MARKETING
As Director of Product Marketing for Pivotal in the Europe, Middle East and Africa region, Faiz Parkar loves working at the intersection of cloud native platforms, big data/analytics and agile application development to help organisations deliver compelling data-driven software experiences for their customers. With more than 25 years experience in the IT industry, Faiz has helped organisations large and small to take advantage of technology transitions from proprietary systems to client/server, from physical infrastructure to virtual, and from virtual infrastructure to cloud. His mission now is to help organisations accelerate their digital transformation journey and reinvent themselves as the digital leaders of the future.
Architecting cloud computing solutions with Java by Otávio Santana
Cloud Native has become a major buzzword around the world, a term used by practically everyone all the time. But what does it mean? What advantages does it bring to your application and to your day as a developer or software architect? What is new in the Java world, and what are the steps to follow toward a cloud-native application? This presentation is a step-by-step guide that walks you through implementing cloud computing services effectively and efficiently.
Generative AI Deep Dive: Advancing from Proof of Concept to Production by Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
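The deployment bill of materials (DBOM) idea can be sketched as recording, at deploy time, a content hash for every artifact that reaches production. The record format below is a simplified assumption for illustration, not OpsMx's actual schema:

```python
import hashlib
import json
import time

def record_dbom(artifacts: dict) -> str:
    """Build a minimal deployment bill of materials: artifact name -> SHA-256.

    A real DBOM would also capture versions, signatures, build provenance,
    and the identity of the pipeline that produced each artifact.
    """
    entries = {
        name: hashlib.sha256(content).hexdigest()
        for name, content in artifacts.items()
    }
    return json.dumps({"deployed_at": time.time(), "artifacts": entries}, indent=2)

# Example: two (fake) build artifacts about to be deployed.
dbom = record_dbom({"api-server.tar": b"binary-1", "frontend.tar": b"binary-2"})
```

Because the digests are content-addressed, a later audit can verify that what runs in production is exactly what the pipeline recorded.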
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and Sales by Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Securing your Kubernetes cluster: a step-by-step guide to success! by KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
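One concrete step in such a hardening checklist can be automated: scanning container specs for insecure settings before they are applied. Below is a minimal, hypothetical linter; the rules shown are a small subset of what dedicated tools in this space check:

```python
def audit_container(spec: dict) -> list:
    """Flag a few well-known insecure settings in a container spec fragment.

    `spec` mirrors the shape of a Kubernetes container definition
    (securityContext, resources); this is an illustrative sketch only.
    """
    findings = []
    sc = spec.get("securityContext", {})
    if sc.get("privileged"):
        findings.append("container runs privileged")
    if not sc.get("runAsNonRoot"):
        findings.append("runAsNonRoot is not enforced")
    if sc.get("allowPrivilegeEscalation", True):
        findings.append("privilege escalation is allowed")
    if not spec.get("resources", {}).get("limits"):
        findings.append("no resource limits set")
    return findings

issues = audit_container({"securityContext": {"privileged": True}})
```

Running such a check in CI turns "security left for later" into a gate that every manifest must pass before reaching the cluster.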
Threats to mobile devices are more prevalent than ever and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best-practices guide outlines steps users can take to better protect personal devices and information.
DevOps and Testing slides at DASA Connect by Kari Kakkonen
Slides by Rik Marselis and me from the DASA Connect conference on 30 May 2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
How to Get CNIC Information System with Paksim Ga by danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 6 by DianaGray10
Welcome to part 6 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Enhancing adoption of Open Source Libraries: A case study on Albumentations.AI by Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! by SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
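One safeguard implied by the workflow above is checking that AI-generated markup is actually well-formed and has the expected structure before it enters the pipeline. A minimal sketch with the Python standard library (the element names are invented for illustration):

```python
import xml.etree.ElementTree as ET

def validate_generated_xml(candidate: str, required_root: str):
    """Check that a (possibly AI-generated) XML string is well-formed and
    has the expected root element. Full schema validation (XSD/Schematron)
    would be a further step with a dedicated library such as lxml."""
    try:
        root = ET.fromstring(candidate)
    except ET.ParseError as exc:
        return False, f"not well-formed: {exc}"
    if root.tag != required_root:
        return False, f"unexpected root element <{root.tag}>"
    return True, "ok"

ok, message = validate_generated_xml("<article><p>Hello</p></article>", "article")
```

Gating generated content on a check like this catches the most common failure mode of LLM-produced markup, unbalanced or mis-nested tags, before downstream XSLT or schema tooling ever sees it.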
3. "Is this my problem? I think it should be handled by the infra guys."
"I will think about that later. Let's focus on my code."
"Yes, the sizing is well covered for the whole year."
6. “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.” Melvyn Conway, 1967
12. "NCAs are designed to take advantage of cloud computing frameworks, which are composed of loosely-coupled cloud services. That means that developers must break down tasks into separate services that can run on several servers in different locations. Because the infrastructure that supports a Native Cloud App does not run locally, NCAs must be planned with redundancy in mind so the application can withstand equipment failure and be able to remap IP addresses automatically should hardware fail."
http://searchitoperations.techtarget.com/definition/native-cloud-application-NCA
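The redundancy requirement in the quote above can be sketched as client-side failover across replicated endpoints. The replicas here are simulated callables; a real NCA would lean on the platform's service discovery and health checks:

```python
def call_with_failover(replicas, request):
    """Try each replica in turn so a single equipment failure is tolerated.

    `replicas` is a list of callables standing in for service endpoints.
    """
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as exc:
            errors.append(exc)  # this replica is down; try the next one
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

def dead_replica(request):
    raise ConnectionError("host unreachable")

def healthy_replica(request):
    return f"handled: {request}"

result = call_with_failover([dead_replica, healthy_replica], "GET /status")
```

The application survives the dead replica transparently, which is exactly the "planned with redundancy in mind" property the definition calls for.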
19. Orchestration (a central brain to guide and drive the process) vs. Choreography (inform each part of the system of its job and let it work out the details)
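The contrast between the two styles can be sketched in a few lines: an orchestrator calls each service explicitly and controls the order, while choreographed services subscribe to events and react independently. The service names are invented for illustration:

```python
def charge_card(order):
    return f"charged {order}"

def schedule_shipping(order):
    return f"shipped {order}"

# Orchestration: one central brain drives the process step by step.
def place_order_orchestrated(order):
    payment = charge_card(order)         # orchestrator calls payment first...
    shipment = schedule_shipping(order)  # ...then shipping, in an order it controls
    return payment, shipment

# Choreography: services subscribe to events and work out the details themselves.
subscribers = {}

def on(event, handler):
    subscribers.setdefault(event, []).append(handler)

def emit(event, payload):
    for handler in subscribers.get(event, []):
        handler(payload)

log = []
on("order_placed", lambda o: log.append(charge_card(o)))
on("order_placed", lambda o: log.append(schedule_shipping(o)))
emit("order_placed", "order-42")
```

The trade-off: orchestration gives one place to see and change the flow; choreography removes the central bottleneck but scatters the process across event handlers.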
- Resources are infinite and cheap.
- Infrastructure is immutable (don't change it and it won't change; changes are ultimately costly and very intense work).
- Delegate some functions to large/proprietary/closed software: black boxes, with integration only possible "by the book".
- Very specialized and ignorant of other areas: silo based.
- Delivery. Done. Next problem (linear thinking).
- Very little (or sometimes no) monitoring and production feedback.