The cloud is a platform devised to support a number of concurrently running applications that share the cloud's resources. Being a platform of common use, the cloud features complex interdependencies among hosted applications, as well as between applications and the underlying hardware platform. The paper studies non-virtualized deployment, in which a number of applications are hosted on the same physical server without logical boundaries among them (no partitions, virtual machines, or similar technologies in place).
The article studies the impact of hardware virtualization on EA performance. We use queuing models of enterprise applications (EAs) as scientific instruments for our research; the methodological foundation for EA performance analysis based on queuing models can be found in the author's book [Leonid Grinshpan. Solving Enterprise Applications Performance Puzzles: Queuing Models to the Rescue, Wiley-IEEE Press, 2012, http://www.amazon.com/Solving-Enterprise-Applications-Performance-Puzzles/dp/1118061578/ref=ntt_at_ep_dpt_1].
Conceptual models of enterprise applications as instrument of performance ana... (Leonid Grinshpan, Ph.D.)
The article introduces conceptual models of enterprise applications that uncover performance-related fundamentals by distilling away the innumerable application particulars that conceal the roots of performance issues. The value of conceptual models for performance analysis is demonstrated on two examples: conceptual models of a virtualized and of a non-virtualized application.
Beyond IT optimization there is a (promised) land of application performance ... (Leonid Grinshpan, Ph.D.)
The presentation challenges the widely accepted practice of IT optimization as an insufficient vehicle for delivering satisfactorily performing enterprise applications, which first and foremost have to meet their business users' expectations regarding service quality.
Model based transaction-aware cloud resources management case study and met... (Leonid Grinshpan, Ph.D.)
The presentation introduces a method of allocating cloud resources to enterprise applications (EAs) based on business transaction metrics. The approach uses queuing models; it was devised during a real-life EA capacity planning project requested by one of Oracle's customers. Implementing the proposed solution reduced the number of database servers from 40 to 21 without compromising transaction times.
The presentation describes the components of the proposed methodology: building the application's queuing model, obtaining input data for modeling (workload characterization and transaction profiles), solving the model, and analyzing what-if scenarios. It compares ways and means of collecting input data, identifies instrumentation of software at the development stage as the ultimate solution, and encourages research into technologies that deliver instrumented EAs.
Takeaway: model-based, transaction-aware cloud resources management significantly improves cloud profitability by minimizing the number of hardware servers hosting applications while delivering the required service level.
The article provides guidance to Cloud users and Cloud providers on cost/revenue estimates of Cloud services. It explores cost/revenue models for two pay-per-use plans: pay-per-resource and pay-per-transaction.
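As a rough illustration of how the two pay-per-use plans compare, the break-even transaction volume between them can be sketched as follows; the rates and workload figures are hypothetical, not taken from the article:

```python
def pay_per_resource_cost(hours, rate_per_hour):
    # Charge for reserved resource time, independent of traffic.
    return hours * rate_per_hour

def pay_per_transaction_cost(transactions, rate_per_tx):
    # Charge per business transaction actually processed.
    return transactions * rate_per_tx

def break_even_transactions(hours, rate_per_hour, rate_per_tx):
    # Transaction volume at which both plans cost the same;
    # below it pay-per-transaction is cheaper, above it pay-per-resource wins.
    return pay_per_resource_cost(hours, rate_per_hour) / rate_per_tx

# Hypothetical rates: a server reserved for a 720-hour month at $0.10/hour
# vs. $0.002 per transaction.
monthly = break_even_transactions(hours=720, rate_per_hour=0.10, rate_per_tx=0.002)
print(monthly)
```

Below roughly 36,000 transactions per month the pay-per-transaction plan is cheaper in this hypothetical setup; above it, reserving the resource wins.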
This document discusses common performance testing mistakes and provides recommendations to avoid them. The five main "wrecking balls" that can ruin a performance testing project are: 1) lacking knowledge of the application under test, 2) not seeing the big picture and getting lost in details, 3) disregarding monitoring, 4) ignoring workload specification, and 5) overlooking software bottlenecks. The document emphasizes the importance of understanding the application, building a mental model to identify potential bottlenecks, and using monitoring to measure queues and resource utilization rather than just time-based metrics.
Designing for meaningful_experiences_i_xda slideshare (David Kozatch)
David Kozatch gave a presentation on designing for meaningful experiences. He discussed how technological innovation has evolved from one-way communication to greater collaboration. To design meaningful experiences, companies should focus on emotion, polite interfaces, and adaptive interfaces that anticipate user needs. An adaptive interface improves with user interaction by developing a model of their behavior. Creating meaningful experiences requires engaging users early and encouraging participation. Designers should match users' mental models and test for emotional language.
The presentation provides an introduction to enterprise applications capacity planning using queuing models. Oracle Consulting uses the presented methodology to estimate the hardware architecture and capacity of enterprise applications planned for deployment at Oracle customers.
This document discusses the challenges cloud providers face in managing the performance of enterprise applications deployed in the cloud. It outlines how queuing models can be used to analyze application performance, identify bottlenecks, determine optimal resource allocation, and ensure performance meets SLAs. The key points are:
1) Cloud providers must monitor application workloads, characterize transactions and usage patterns, and plan capacity based on changing demands.
2) Queuing models can simulate application behavior under different workloads and help size resources needed to meet performance targets.
3) Both hardware and software bottlenecks must be identified and addressed, as insufficient tuning parameters can impact performance more than hardware capacity.
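As a minimal sketch of the kind of computation such queuing models perform, the classic textbook M/M/m formulas below estimate transaction response time from arrival rate, service rate, and server count; this is a generic model for illustration, not the specific models used in the document:

```python
from math import factorial

def mmm_response_time(arrival_rate, service_rate, servers):
    """Mean response time of an M/M/m queue via the Erlang C formula.

    arrival_rate: transactions/sec offered to the node
    service_rate: transactions/sec a single server can handle
    servers: number of identical servers (m)
    """
    a = arrival_rate / service_rate            # offered load in Erlangs
    if a >= servers:
        raise ValueError("unstable: offered load meets or exceeds capacity")
    # Erlang C: probability an arriving transaction has to queue
    top = (a ** servers / factorial(servers)) * (servers / (servers - a))
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    p_wait = top / bottom
    wait = p_wait / (servers * service_rate - arrival_rate)
    return wait + 1.0 / service_rate           # queueing delay + service time

# Sanity check against the single-server M/M/1 result R = 1/(mu - lambda):
print(mmm_response_time(0.5, 1.0, 1))  # 2.0 seconds
```

Running what-if scenarios then amounts to re-evaluating the formula with more servers or a heavier workload and checking whether response time stays within the SLA.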
Presented an enterprise applications capacity planning methodology that provides estimates of hardware resource utilization as well as transaction response times.
Enterprise applications in the cloud: a roadmap to workload characterization ... (Leonid Grinshpan, Ph.D.)
This article provides a road map to enterprise application workload characterization and prediction by:
- Identifying the constituents of EA transactional workload and specifying the metrics to quantify it.
- Reviewing the technologies generating raw transactional data.
- Examining Big Data Analytic ability to extract workload characterization from raw transactional data.
- Assessing the methods that discover the workload variability patterns.
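One of the basic metrics in such a workload characterization, the number of transactions concurrently in the system, can be estimated from measured throughput and response time via Little's law; the figures below are illustrative:

```python
def concurrency_little(throughput_tps, response_time_s):
    # Little's law: N = X * R
    # N: average number of transactions in the system
    # X: throughput (transactions/sec), R: average response time (sec)
    return throughput_tps * response_time_s

# 20 transactions/sec at 0.5 s average response time
print(concurrency_little(20.0, 0.5))  # 10.0 concurrent transactions
```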
The document describes the requirements for developing a mobile banking application called U-Mobile. It outlines the need for the app as people spend a lot of time visiting banks for transactions. The app will allow users to transfer money, recharge mobiles, and perform other banking activities without visiting a bank. The document includes sections on the problem statement, software requirements specification, use case diagram, activity diagrams, class diagram, sequence diagrams, communication diagram, state diagram, component diagram, and deployment diagram. The diagrams model the workflows and interactions between the user, admin, and system for various functions like transactions, recharges, and updating information.
Research Inventy: International Journal of Engineering and Science (inventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, available online as well as in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published through a rapid process within 20 days of acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
The document outlines a methodology for sizing virtual machines when migrating enterprise applications from non-virtualized to virtualized servers. It involves monitoring application CPU utilization and transaction times in the non-virtualized environment. Queuing models are then used to evaluate deployment scenarios and identify the minimum number of virtual CPUs needed for each application's VM to maintain acceptable performance levels once virtualized. The methodology aims to determine optimal virtual CPU allocations based on actual physical resource needs rather than guesses, to avoid overcommitting resources and performance issues.
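A back-of-the-envelope version of that sizing step might look like the following; the 70% utilization ceiling is a common rule of thumb, not a figure from the document:

```python
import math

def min_vcpus(measured_cpu_cores, target_utilization=0.7):
    """Smallest vCPU count that keeps average utilization under the target.

    measured_cpu_cores: average CPU consumption observed on the
        non-virtualized server, in physical cores.
    target_utilization: ceiling chosen to keep queuing delay acceptable.
    """
    if not 0 < target_utilization < 1:
        raise ValueError("target_utilization must be in (0, 1)")
    return max(1, math.ceil(measured_cpu_cores / target_utilization))

# An application averaging 2.8 cores needs 4 vCPUs to stay under 70% busy
print(min_vcpus(2.8))  # 4
```

The full methodology refines such estimates with queuing models, since response time degrades nonlinearly as utilization approaches the ceiling.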
Cloud Foundry - Second Generation Code (CCNG). Technical Overview (Nima Badiey)
The document provides an overview of the Cloud Foundry technical platform. It describes how Cloud Foundry simplifies application deployment by allowing developers to push applications to the cloud with simple commands. It then summarizes the key components of Cloud Foundry, including the router, cloud controller, health manager, DEAs, buildpacks, messaging, service brokers, and BOSH. BOSH allows Cloud Foundry to be deployed and managed on an IaaS through the use of stemcells, agents, and a cloud provider interface.
Toll application - .NET and Android - SRS (Arun prasath)
The document provides a software requirements specification for a toll application. It includes sections on introduction, overall description, and specific requirements. The introduction describes the methodology, purpose, scope and overview of the toll application. The overall description covers the product perspective, functions, interfaces, users, constraints, architecture and use case model. The specific requirements section details use case reports, activity diagrams and sequence diagrams. The toll application is meant to enable automatic payment at toll gates by tracking a user's GPS location and deducting payment when they cross virtual toll fences.
Silk Performer allows you to record and simulate realistic load tests for web and mobile applications. It uses virtual users (VUsers) to emulate real users and load test applications. The recorder captures live application traffic and generates scripts in BDL (Benchmark Description Language) format. These scripts can then be replayed to simulate concurrent loads and analyze performance. Key features include simulating thousands of users, protocol support for web, ERP, middleware etc., real-time monitoring, customizable reporting and root cause analysis using TrueLog Explorer. Load testing with Silk Performer helps answer questions around capacity, response times, bottlenecks and more.
This document discusses client-server software engineering. It defines client-server architecture as one where the server provides services and the client demands them. There are two main types: two-tier architecture with thin and fat client models, and three-tier architecture. The thin client model puts most functionality on the server, while the fat client model puts more on the client. The three-tier architecture separates presentation, application processing, and data management layers across different machines. An example given is internet banking, with presentation on the client browser, application processing in the middle, and database on the server.
The document provides an overview and agenda for a LoadRunner training course. It introduces LoadRunner and its components, including VuGen for recording scripts, the Controller for managing tests, and Analysis for reporting. It discusses the LoadRunner workflow and how it emulates real users to load test applications. Key topics covered include virtual users (Vusers), scripts, scenarios, protocols, and runtime settings.
Iwsm2014 performance measurement for cloud computing applications using iso... (Nesma)
This document discusses measuring performance of cloud computing applications using ISO 25010 standard characteristics. It presents a case study of a private cloud hosting a Microsoft Exchange application. The study collected performance log data from nodes over one week. It analyzed the data focusing on the time behavior characteristic. It calculated statistics on measures like transmission rate and created a performance index to identify peaks and valleys in system performance over time. The study demonstrated mapping measures to ISO characteristics but noted challenges in data collection, processing and representation for large cloud infrastructures.
This presentation provides an introduction to API Facade pattern. It describes what the problem is, how the pattern solves the problem and how such a pattern can be utilized in real deployments.
Application Timeline Server - Past, Present and Future (Varun Saxena)
Naganarasimha G R and Varun Saxena are technical leads at Huawei who have been actively contributing to Apache Hadoop. They discuss the need for a new application history server beyond the existing JobHistory server, which only supports MapReduce applications. They describe the initial Application History Server and Timeline Server V1, which had limitations around storage, queries, and supporting live applications. They then introduce Timeline Server V2, which aims to address these limitations through a distributed, scalable architecture with HBase storage and new data modeling capabilities.
Application Timeline Server - Past, Present and Future (Varun Saxena)
How YARN Application timeline server evolved from Application History Server to Application Timeline Server v1 to ATSv2 or ATS Next gen, which is currently under development.
This slide deck was presented at the Hadoop Big Data Meetup at eBay, Bangalore, India.
AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows.
This white paper presents a solution to test performance and analyze the results for web services that are deployed on the webMethods Integration Server using Apache JMeter.
The document discusses microservice architecture and data stream processing. It provides a history of these approaches and challenges they aim to address like growing application complexity and data size. Microservices are proposed as a solution, breaking applications into small, independent, communicating services. Advantages include fault tolerance, scalability, and easier development. Disadvantages include additional complexity for deployment, updates and monitoring. Examples and implementation suggestions are also provided.
Solving enterprise applications performance puzzles queuing models to the r... (Leonid Grinshpan, Ph.D.)
This document discusses how queuing models can help troubleshoot performance issues in enterprise applications. It presents an overview of queuing models that emulate performance bottlenecks and help visualize causes and consequences. Key points covered include how queuing models map applications to hardware components, identify CPU and I/O bottlenecks, and evaluate how configuration changes like adding resources can address bottlenecks. Real-world workload specifications and their impact on tuning decisions are also examined.
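The bottleneck-identification step mentioned above can be illustrated with the utilization law (U = X · D): at a given throughput, the node with the largest service demand per transaction saturates first. The demand figures below are hypothetical:

```python
def find_bottleneck(service_demands, throughput):
    """Utilization law: U_i = X * D_i; the busiest node is the bottleneck.

    service_demands: seconds of service each node spends per transaction
    throughput: transactions/sec flowing through the system
    """
    utilizations = {node: throughput * d for node, d in service_demands.items()}
    bottleneck = max(utilizations, key=utilizations.get)
    return bottleneck, utilizations

# Hypothetical per-transaction demands on each hardware component
demands = {"web_cpu": 0.02, "db_cpu": 0.05, "db_disk": 0.08}
node, utils = find_bottleneck(demands, throughput=10.0)
print(node)  # db_disk (80% busy at 10 tps)
```

Adding resources to the bottleneck node (or reducing its demand through tuning) shifts the bottleneck elsewhere, which is exactly the kind of what-if scenario the queuing models evaluate.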
The document discusses error isolation and management in agile multi-tenant cloud applications. It proposes an 8-phase framework called Mapricot to isolate and manage errors. The 8 phases are: Measurable space (store errors), Analyze errors (categorize and count errors), Prioritize errors, Release correlation, Improved logging, Code improvement, Offer urgent help, and Training. The framework was evaluated on two cloud applications and showed improvements in isolating and managing errors over a control period.
Infrastructure Challenges in Scaling RAG with Custom AI models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Similar to Enterprise applications in the cloud: non-virtualized deployment
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Enterprise Applications in the Cloud: Non-virtualized Deployment
Leonid Grinshpan, Oracle Corporation (www.oracle.com)
Subject
The cloud is a platform devised to support a number of concurrently working
applications that share the cloud’s resources; being a platform of common use, the
cloud features complex interdependencies among hosted applications, as well as among
applications and the underlying hardware platform.
Enterprise Applications (EAs) can be deployed in the cloud in two ways:
1. A non-virtualized setup hosts different EAs on the same physical servers without logical borders between them (no partitions, virtual machines, or similar technologies in place).
2. A virtualized arrangement separates EAs logically from each other by employing the above-mentioned technologies.
Both deployment models have advantages and disadvantages. The performance
penalty introduced by virtualization (which we will analyze in the next article) prevents
many EA vendors from recommending EA deployment in virtual environments. As an
example, here is the policy of the Thomson Reuters Elite business
(http://www.elite.com/virtualization_servers/):
"Elite generally recommends against using virtualization environments (e.g., Virtual Machines from VMware or Microsoft Virtual Server) for primary production servers hosting Elite products. Elite makes no performance warranties in relation to Elite applications hosted on Virtual Machines."
In non-virtualized clouds, allocation of resources to different EAs is carried out by the operating systems that schedule the software processes representing the EAs. This environment makes all processing power of the physical servers available to the applications. Furthermore, it enables collection of reliable performance metrics by directly monitoring the servers' counters. It is quite possible that for those reasons Google applications are not embedded into virtual environments. Another example of a non-virtualized cloud is the popular project management and collaboration tool Basecamp [Is Virtualization a Cloud Prerequisite? http://gigaom.com/2009/08/30/is-virtualization-a-cloud-prerequisite/].
One shortcoming of a non-virtualized cloud is obvious: instability of any EA that results in hardware downtime affects the availability of all EAs. But what happens much more often is that an EA suffers performance degradation in response to changes in the workload and service demand of any other EA. We analyze that phenomenon by simulating cloud behavior with queuing models of EAs; the methodological foundation for EA performance analysis based on queuing models can be found in the author's book [Leonid Grinshpan. Solving Enterprise Applications Performance Puzzles: Queuing Models to the Rescue, Wiley-IEEE Press; available in bookstores and from Web booksellers from January 2012].
Performance Impact of Workload Fluctuations
The queuing model in Figure 1 represents a simplified three-tiered cloud with Web, Application, and Database servers. The cloud hosts three EAs (App A, App B, App C) serving three user groups, one group per EA. Each server corresponds to a model node with a number of processing units equal to the number of CPUs in the server. The users of each EA, as well as the network, are modeled by dedicated nodes. All servers are physical ones without any partitioning among applications. The Web and Application servers have 8 CPUs each; the Database server has 16 CPUs.
Figure 1 Model 1 of the cloud hosting three enterprise applications
The models in this article were analyzed using the TeamQuest solver [http://teamquest.com/products/model/index.htm]; Model 1 components are described in TeamQuest terms in Figure 1.
Workload 1 for Model 1 is characterized in Table 1. For each application it is represented by transactions identified by the application name. A user initiates a transaction the number of times indicated in the column "Number of transaction executions per user per hour." We analyze the model for 200, 400, 600, and 800 users.
Table 1
Workload 1 for Model 1

Transaction name    | Number of users per application | Transaction executions
                    | (total 200 / 400 / 600 / 800)   | per user per hour
App A transaction 1 | 100 / 200 / 300 / 400           | 10
App B transaction 1 |  50 / 100 / 150 / 200           | 20
App C transaction 1 |  50 / 100 / 150 / 200           |  5
To solve the model we have to specify the profile of each transaction (Table 2). The transaction profile is the set of time intervals (service demands) a transaction spends in all the processing units it visits while being served by the application.
Table 2
Transaction Profiles (seconds)

Transaction       | Network node | Web server node | App server node | Database server node
App A transaction | 0.001        | 0.2             | 1.0             | 5.0
App B transaction | 0.0015       | 0.1             | 0.5             | 2.5
App C transaction | 0.003        | 0.2             | 5.0             | 5.0
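The offered load implied by Tables 1 and 2 can be checked with a quick back-of-the-envelope calculation: a server's utilization is the busy CPU-seconds generated per second divided by its CPU count. The sketch below is a hypothetical open-model approximation, not the closed queuing network solved with TeamQuest in the article, so its absolute numbers will differ from the charts; it is shown for the 800-user point of Workload 1.

```python
# Back-of-the-envelope utilization check for Model 1 (illustrative sketch;
# the article's results come from a closed queuing model solved in TeamQuest).
CPUS = {"web": 8, "app": 8, "db": 16}

# Service demands in seconds per transaction (Table 2), per server node.
DEMANDS = {
    "App A": {"web": 0.2, "app": 1.0, "db": 5.0},
    "App B": {"web": 0.1, "app": 0.5, "db": 2.5},
    "App C": {"web": 0.2, "app": 5.0, "db": 5.0},
}

# Workload 1 at 800 total users (Table 1): (users, transactions/user/hour).
WORKLOAD = {"App A": (400, 10), "App B": (200, 20), "App C": (200, 5)}

def utilization(workload):
    """Offered utilization per server: busy CPU-seconds per second / CPUs."""
    util = {}
    for server, cpus in CPUS.items():
        busy = sum(users * tx_per_hour * DEMANDS[ea][server] / 3600.0
                   for ea, (users, tx_per_hour) in workload.items())
        util[server] = busy / cpus
    return util

for server, u in utilization(WORKLOAD).items():
    print(f"{server}: {u:.0%}")
```

Even this crude estimate singles out the Database server as the busiest node and App A as its largest consumer, which is qualitatively consistent with Figures 3 and 4; the closed model adds queueing effects that drive transaction times up well before utilization reaches 100%.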
Model 1 estimates that transaction times for all applications will start increasing when
the number of users exceeds 400 (Figure 2).
Figure 2 Transaction response times for three applications
To find a reason for transaction time degradation, let's look at utilization of the cloud's servers (Figure 3).
Figure 3 Utilization of the cloud’s servers
When the number of users is close to 600, utilization of the Database server exceeds 85% and causes a noticeable increase in transaction times for all applications. Any further growth in the number of users maxes out the Database server and results in an exponential explosion of transaction times. A non-virtualized cloud does not discriminate: it punishes all applications by increasing their transaction times.
Model 1 helps to determine the contribution of each application to Database server utilization (Figure 4).
Figure 4 Breakdown of utilization of Database server by App A, B, and C (percentage)
Per Figure 4, the largest "consumer" of Database server capacity is App A. We will analyze how a decrease in the number of App A users affects the other applications. Workload 2 in Table 3 keeps the number of App A users fixed at 100; for the two other applications the user counts remain the same as specified in Table 1.
Table 3
Workload 2 for Model 1

Transaction name    | Number of users per application | Transaction executions
                    | (total 200 / 300 / 400 / 500)   | per user per hour
App A transaction 1 | 100 / 100 / 100 / 100           | 10
App B transaction 1 |  50 / 100 / 150 / 200           | 20
App C transaction 1 |  50 / 100 / 150 / 200           |  5
Transaction times and utilization of the cloud's servers for Workload 2 are pictured in Figures 5 and 6; the charts suggest that all applications now provide acceptable service to their users.
Figure 5 Transaction response times for three applications for Workload 2
Figure 6 Utilization of the cloud's servers for Workload 2
Performance Impact of Service Demand Fluctuations
The transaction's service demand for a particular hardware component is the time this component has to spend processing the transaction. A transaction can be in one of two states: waiting for a resource or using a resource. The service demand for a particular resource is the time the transaction spends using that resource. Service demand generally depends on two parameters:
- Resource's processing speed
- Volume of data to be processed by the resource
The first parameter characterizes the hardware resource (for example, a disk transfer rate of 1000 Mbit/second). For a resource such as a CPU, processing speed depends on clock speed (for example, 3 GHz) as well as on software algorithms. Resource processing speed is a constant for a given hardware component and software release.
By contrast, the second parameter's prevailing trend is an increase in data volume, because over time a business accumulates more data (for example, more sales are executed in June than in January). If an EA transaction represents a financial report on sales volume, then generation of the report's data for June will take more Database server time than generation of the report's data for January. We analyze the impact of service demand fluctuations on EA performance using Model 2.
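The relationship above can be tied together in one line: service demand is the data volume divided by the resource's processing speed. The toy sketch below illustrates this; all numbers in it are invented, chosen only to mirror the 5-10 second demand range analyzed in Model 2.

```python
# Hypothetical illustration: service demand = data volume / processing speed.
# A report transaction scans one month of sales records; the rate and row
# counts below are invented for the example.
ROWS_PER_SECOND = 200_000          # assumed Database processing speed (constant)

def report_demand(rows):
    """Database service demand, in seconds, to scan `rows` sales records."""
    return rows / ROWS_PER_SECOND

january_rows = 1_000_000           # assumed January data volume
june_rows = 2_000_000              # business accumulated more sales by June

print(report_demand(january_rows))  # 5.0 seconds
print(report_demand(june_rows))     # 10.0 seconds
```

The processing speed stays constant while the data volume grows, so the service demand grows with it; this is exactly the fluctuation Model 2 studies.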
Model 2 has the same topology and the same workload as Model 1 (Figure 1 and Table 1); the difference is that we analyze Model 2 for three values of the App C transaction's service demand on the Database server (5, 8, and 10 seconds; see Table 4).
Table 4
Transaction Profiles (seconds) with Different Service Demands

Transaction       | Network node | Web server node | App server node | Database server node
App A transaction | 0.001        | 0.2             | 1.0             | 5.0
App B transaction | 0.0015       | 0.1             | 0.5             | 2.5
App C transaction | 0.003        | 0.2             | 5.0             | model analyzed for 5.0, 8.0, 10.0
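The mechanism can be illustrated with a toy calculation. The sketch below is hypothetical: it uses a crude single-queue approximation, R = D / (1 - U), rather than the full network model the article solves, so the numbers are illustrative only. It shows how raising App C's Database demand raises the shared server's utilization, which stretches the Database time of every application's transactions.

```python
# Illustrative only: crude R = D / (1 - U) approximation for the shared
# Database server (16 CPUs) at 800 total users; not the TeamQuest model.
DB_CPUS = 16

def db_times(app_c_demand):
    # Per application: (users, transactions/user/hour, Database demand, seconds)
    load = {
        "App A": (400, 10, 5.0),
        "App B": (200, 20, 2.5),
        "App C": (200, 5, app_c_demand),
    }
    busy = sum(u * r * d / 3600.0 for u, r, d in load.values())
    util = busy / DB_CPUS
    # Every tenant pays for the shared queue, not just App C.
    return util, {name: d / (1.0 - util) for name, (_, _, d) in load.items()}

for sd in (5.0, 8.0, 10.0):
    util, times = db_times(sd)
    print(f"App C demand {sd}s -> DB utilization {util:.0%}, "
          + ", ".join(f"{n}: {t:.1f}s" for n, t in times.items()))
```

Although the formula is a simplification, it reproduces the qualitative effect in Figure 7: one application's growing service demand degrades transaction times for all applications sharing the server.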
Model 2 predicts that transaction times of all applications degrade faster as the service demand of a single application increases (Figure 7).
Figure 7 Transaction response times for different service demands (sd in the legend stands for service demand)
Monitoring Resource Consumption by Applications
In a non-virtualized environment, each application is "materialized" by the operating system as a number of processes. To find out the resource consumption of an application, we have to monitor its processes. Figure 8 pictures Windows Task Manager reporting performance data for two processes belonging to the same application: process beasvc.exe is an application server and process oracle.exe is a database. The CPU column shows CPU usage as a percentage of the time the process has used the CPU since the last update; the Mem Usage column gives the size of the current working set of a process in kilobytes; the Threads column counts the number of threads running in a process; the remaining columns relate to the I/O system.
More detailed information on processes can be collected using Windows Performance Monitor. For example, Performance Monitor reports for each process the speed of I/O operations, the number of I/O operations per second, memory paging information, the thread count and state of each thread, usage of virtual memory, etc. Performance Monitor can save collected data into log files; analysis of such log files uncovers trends in the collected performance counters.
Figure 8 Performance data reported by Windows Task Manager for different processes
In a UNIX environment, process-level data can be examined, for example, by using the prstat -a command (Figure 9): the process ID is reported in the first column, PID; columns SIZE and RSS specify memory usage; and column CPU gives CPU utilization. This command helps to associate the process name (last column) with the process ID.
Figure 9 Process information in a UNIX environment as reported by the prstat -a command
If we have to log data on process resource utilization over some period of time, we can use the following command for a process named <process name>:
while true; do ps -eo vsz,rss,pcpu,comm | grep <process name> >> log.txt; sleep 5; done
This command collects the following counters every five seconds:
vsz - total size of the process in virtual memory, in kilobytes
rss - resident set size of the process, in kilobytes
pcpu - percentage of CPU utilization
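Once log.txt accumulates samples, trends can be extracted with a few lines of scripting. The sketch below is a minimal hypothetical example; it assumes each matched line carries the four ps fields (vsz, rss, pcpu, comm) in that order, which can vary by platform.

```python
# Minimal summary of the ps loop's log; assumes "VSZ RSS PCPU COMMAND" fields.
def summarize(path="log.txt"):
    vsz, rss, cpu = [], [], []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip malformed lines
            vsz.append(int(fields[0]))    # virtual size, kB
            rss.append(int(fields[1]))    # resident set size, kB
            cpu.append(float(fields[2]))  # %CPU
    if not rss:
        return None
    return {"samples": len(rss),
            "rss_kb_max": max(rss),
            "cpu_pct_avg": sum(cpu) / len(cpu)}
```

Calling summarize() after a collection period gives the sample count, the peak resident set size, and the average CPU percentage, which is often enough to spot a memory growth trend or a CPU-bound process.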
We mentioned a few basic ways of gaining insight into resource consumption by each hosted application; operating systems and third-party monitoring products offer a broad spectrum of means to collect process-level data for making confident decisions on cloud capacity management.
Takeaways from the Article
1. In a non-virtualized cloud, performance bottlenecks created by a shortage of hardware resources affect all applications by increasing their transaction times.
2. A cloud-wide bottleneck can be caused by an increase in the workload of any hosted application, as well as by fluctuations in its service demand while processing large or complex data.
3. The contribution of each application to resource utilization can be found by monitoring the software processes created by the operating system for that application. Queuing models provide estimates of such contributions.
About the Author
During the last fifteen years as an Oracle consultant, the author has been engaged in hands-on performance tuning and sizing of enterprise applications for various corporations (Dell, Citibank, Verizon, Clorox, Bank of America, AT&T, Best Buy, Aetna, Halliburton, Pfizer, AstraZeneca, Starbucks, etc.).