As interest in cloud solutions and their use with enterprise applications has increased, MavenWire has taken a lead in implementing and benchmarking several instances of OTM using Amazon Web Services (AWS) and Elastic Compute Cloud (EC2). This presentation outlines how the instances were set up and configured; the potential benefits of OTM in the cloud; cost and performance comparisons between cloud and "traditional" server configurations; and areas of concern and issues to be aware of when implementing OTM in the cloud. We will also outline what we believe the future direction of cloud OTM will be, as well as where we believe it is best suited to customer needs.
More and more clients are looking to understand the capabilities of the OTM/G-Log architecture and configuration in order to better tune OTM. Usually this is prompted by poor OTM performance, or by preparation for significant changes to OTM configuration, volume, or platform. The client may be experiencing poor performance throughout the entire system or only for very specific use cases. The primary objective of a Performance Tuning Exercise is to understand how OTM is being utilized and to recommend solutions that improve its performance.
We recommend, and will take the audience through, a “ground-up” performance tuning exercise: starting with hardware and infrastructure, moving to Java and application server tuning, then to OTM technical tuning, and finally to OTM functional tuning (data, agents, etc.).
These audits may identify hardware, networking, or other infrastructure constraints at each tier that cause sub-optimal system performance. Simply stated, the performance audit will identify any bottlenecks that exist in the system.
In many cases the largest performance impacts are not hardware-related, but come from how the data is configured within the application. As part of the exercise we therefore analyze database performance, individual SQL queries, OTM queues, bulk planning parameters, agents, rates, and the settlement process.
Understanding the methods that best identify these bottlenecks will help you avoid performance issues early in your project and save considerable time and expense as you near go-live. This presentation will guide you through the steps necessary to understand what is impacting performance and how best to handle it. It will share lessons learned and tools available to help you better manage and maintain a healthy OTM environment.
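The tier-by-tier approach above can be sketched in a few lines. This is a hypothetical illustration, not OTM code: the tier names and simulated workloads are placeholders for real application-server, database, and network calls, and the point is simply that timing each tier separately reveals where the bottleneck sits.

```python
import time
from contextlib import contextmanager

# Hypothetical sketch: time each tier of a request to locate the bottleneck.
# The tier names and simulated workloads below are illustrative placeholders.

timings = {}

@contextmanager
def timed(tier):
    """Accumulate wall-clock time spent in a named tier."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[tier] = timings.get(tier, 0.0) + time.perf_counter() - start

# Simulated work per tier (stand-ins for real calls).
with timed("app_server"):
    sum(range(10_000))
with timed("database"):
    time.sleep(0.05)   # stand-in for a slow SQL query
with timed("network"):
    time.sleep(0.01)

bottleneck = max(timings, key=timings.get)
print(f"Slowest tier: {bottleneck}")
```

With these placeholder workloads the database tier dominates, which is consistent with the observation above that data and SQL configuration, not hardware, often cause the largest impacts.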
Presented by Chris Plough at MavenWire
Benchmarking OTM and Java - Is Your Platform Limiting Performance (MavenWire)
This document discusses benchmarking various hardware platforms and operating systems for optimal OTM performance. It provides an agenda for a presentation that will teach how to benchmark OTM platforms using tools like VolanoMark, DaCapo, Soap Stone and Hammerora. The presentation will show hands-on exercises for running the benchmarks and interpreting the results. Higher scores are better for VolanoMark and Soap Stone, while lower scores indicate better performance for DaCapo and Hammerora. Online resources for monitoring performance and learning more about the benchmarks are also provided.
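Because the suites disagree on score direction (higher is better for VolanoMark and Soap Stone, lower is better for DaCapo and Hammerora), raw numbers cannot be compared directly. A small normalization sketch, with illustrative numbers not taken from any real benchmark run:

```python
# Sketch: normalize mixed benchmark scores into a single "bigger is better"
# ratio against a baseline platform. Tool names follow the suites mentioned
# above; the direction table is the only assumption encoded here.

HIGHER_IS_BETTER = {"VolanoMark", "Soap Stone"}

def relative_score(tool, baseline, candidate):
    """Return > 1.0 when the candidate platform beats the baseline."""
    if tool in HIGHER_IS_BETTER:
        return candidate / baseline
    return baseline / candidate  # lower raw score means faster

# Example: candidate finishes a DaCapo run in 80s vs a 100s baseline.
print(relative_score("DaCapo", 100.0, 80.0))
```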
VMworld 2013: Big Data: Virtualized SAP HANA Performance, Scalability and Bes... (VMworld)
VMworld 2013
Bob Goldsand, VMware
Todd Muirhead, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Tricks and Tradeoffs of Deploying MySQL Clusters in the Cloud (MySQLConference)
1. MySQL databases can be deployed and managed in cloud computing environments like Amazon EC2 using tools that provide automation for launching slaves, backing up databases, and handling failover.
2. RightScale has been operating MySQL databases on Amazon EC2 since 2006 and provides a cloud management system and replicated MySQL product to automate the deployment and management of MySQL databases on EC2.
3. Some benefits of using cloud computing with databases include infinite computing resources, availability on demand with pay per use, and fully automatable database infrastructure management.
The document provides an agenda for a performance optimization workshop for XPages applications to be held from March 11-13, 2013 at the Maritim Hotel in Gelsenkirchen, Germany. Topics to be covered include performance issues related to Java vs JavaScript, view navigation vs getting documents, string concatenation vs StringBuilder, partial updates/execution, scoped variables, and tools for profiling XPages applications. The presenter is listed as Ulrich Krause, an experienced Notes/Domino developer and IBM Champion.
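The string concatenation vs StringBuilder point from that workshop generalizes beyond XPages. A Python analogue of the same advice (illustrative only; the XPages material itself is Java/JavaScript):

```python
# Python analogue of the StringBuilder advice: repeated "+=" on strings can
# re-copy the accumulated buffer, while "".join builds the result in one pass.

def concat_slow(parts):
    out = ""
    for p in parts:          # worst case O(n^2) copying
        out += p
    return out

def concat_fast(parts):
    return "".join(parts)    # single allocation pass

parts = [str(i) for i in range(1000)]
assert concat_slow(parts) == concat_fast(parts)  # same result, different cost
```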
The document discusses the WebLogic Server plugin which allows WebLogic Server to communicate with other web servers like Apache HTTP Server and Microsoft IIS. It specifically focuses on the Apache HTTP Server plugin, describing how it allows requests to be proxied from Apache to WebLogic Server so that dynamic functionality is handled by WebLogic Server. It provides instructions for installing the Apache plugin, which involves copying files and configuring Apache modules, and testing the installation.
The document discusses various techniques for optimizing UI performance, including optimizing caching, minimizing round-trip times, minimizing request size, minimizing payload size, and optimizing browser rendering. Specific techniques mentioned include leveraging browser and proxy caching, minimizing DNS lookups and redirects, combining external JavaScript, minimizing cookie and request size, enabling gzip compression, and optimizing images. Profiling and heap analysis tools are also discussed for diagnosing backend performance issues.
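The payload-size point is easy to demonstrate concretely. A minimal sketch using Python's standard gzip module on a repetitive markup body (the HTML content is invented for illustration):

```python
import gzip

# Minimal illustration of minimizing payload size: gzip-compress a
# repetitive text response before sending it over the wire.
body = b"<html>" + b"<li>row</li>" * 500 + b"</html>"
compressed = gzip.compress(body)

print(len(body), "->", len(compressed))
assert len(compressed) < len(body)  # repetitive markup compresses well
```

In practice the web or application server handles this transparently when gzip compression is enabled; the sketch only shows why it pays off for text payloads.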
The document discusses several issues with utilizing utilization as a metric for measuring resource usage and performance in modern computing systems. It argues that utilization metrics are broken due to unsafe assumptions about workload characteristics, system architecture like multi-core CPUs, and measurement errors. Alternative metrics that take these factors into account, like response time and capability utilization for storage, are suggested to provide more accurate performance insights.
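One of the pitfalls that summary alludes to can be shown with a toy example: averaging utilization across cores hides a saturated core. The per-core numbers below are hypothetical:

```python
# Illustration of a "broken utilization" pitfall: a single-threaded
# bottleneck pegs one core while the host average looks healthy.

per_core = [100.0, 3.0, 2.0, 5.0]   # hypothetical 4-core sample, in percent
average = sum(per_core) / len(per_core)

print(f"average utilization: {average:.1f}%")    # looks mostly idle
print(f"hottest core:        {max(per_core):.1f}%")  # yet one core is maxed
```

This is one reason the document's suggested alternatives, such as response time, can give a truer picture than an aggregate utilization figure.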
Surviving the Crisis With the Help of Oracle Database Resource Manager (Maris Elsins)
The document summarizes the results of performance testing done to evaluate the impact of enabling Oracle Database Resource Manager. Testing was done on Oracle 11.1 and 11.2 databases under different workload scenarios both with and without a resource manager plan. The results showed that in CPU-intensive workloads, enabling even a simple resource manager plan to evenly distribute sessions among consumer groups had negligible performance impact, with total execution times varying by only seconds.
Capacity Planning for Virtualized Datacenters - Sun Network 2003 (Adrian Cockcroft)
Presentation I made at the Sun Network conference in 2003 on how to do capacity planning for virtualized systems, tied into the N1 product that Sun was pushing at the time. This project was structured as a design for six sigma (DFSS) project.
VMworld 2013: Strategic Reasons for Classifying Workloads for Tier 1 Virtuali... (VMworld)
This document discusses the importance of classifying workloads before virtualizing tier 1 applications. Workload classification involves measuring existing application and database workloads to properly size and place them in a new virtualized environment. This reduces risks and speeds up implementation by providing the proper analysis. The document outlines challenges, opportunities, models, metrics, tools and an example MolsonCoors used workload classification to virtualize their SAP landscape.
Migrating from Pivotal tc Server on-prem to IBM Liberty in the cloud (John Donaldson)
An airline company sought to create a new mobile check-in solution hosted in the cloud, to improve availability during peak check-in times, which are unpredictable at any given point in the week. They saw the cloud as a way to accomplish this without maintaining costly disaster recovery centers.
This document summarizes the migration of an Oracle database from Solaris on SPARC hardware to Linux on AMD Opteron hardware. It involved moving from Oracle 10g to 10.2, changing the operating system from Solaris 8 to Red Hat Linux, and changing the database storage from raw devices to ASM. Transportable tablespaces and Data Pump were used to move the data due to issues encountered. The migration reduced load on servers and improved query performance.
Tips on implementing SAP adaptive computing design with SAP LaMa on Microsoft Azure. We discuss the best options for SAP and some of the challenges faced.
The document outlines the hardware and software requirements, new features, and best practices for designing and implementing an infrastructure for SharePoint 2013, including new service applications, the use of claims-based authentication, and recommendations for farm topology based on organization size from single server to large virtual environments. It also discusses high availability, disaster recovery, security, and optimization strategies.
The document discusses key maintenance activities for an AEM implementation including backup, compaction, purging, cloning, and other approaches. It provides details on planning and executing online and offline backups, online and offline compaction, version purging, workflow purging, audit log purging, and cloning publish instances. The document emphasizes the importance of backups, compaction, and purging to optimize storage usage, improve performance, and maintain an optimal AEM instance.
WebLogic is an application server that supports Java EE and SOA applications. It provides services for web applications, EJBs, JMS, and web services. WebLogic offers high availability features like clustering, replication, and workload management. It also includes tools for administration, deployment, performance monitoring, and security.
The document discusses tuning MySQL server settings for performance. Some key points covered include:
- Settings are workload-specific and depend on factors like storage engine, OS, hardware. Tuning involves getting a few settings right rather than maximizing all settings.
- Monitoring tools like SHOW STATUS, SHOW INNODB STATUS, and OS tools can help evaluate performance and identify tuning opportunities.
- Memory allocation and settings like innodb_buffer_pool_size, key_buffer_size, query_cache_size are important to configure based on the workload and available memory.
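A common rule of thumb for the first of those settings can be sketched as follows. This is a hedged heuristic, not an official MySQL formula: on a dedicated InnoDB server the buffer pool is often sized at roughly 70-80% of physical RAM, leaving headroom for the OS and per-connection buffers, and the 0.75 factor below is an assumption.

```python
# Hedged sizing sketch (assumption, not an official formula): suggest an
# innodb_buffer_pool_size as a fraction of physical RAM on a dedicated host.

def suggest_buffer_pool_bytes(total_ram_gb, fraction=0.75):
    """Return a suggested buffer pool size in bytes."""
    return int(total_ram_gb * fraction * 1024**3)

gb = suggest_buffer_pool_bytes(32) / 1024**3
print(f"innodb_buffer_pool_size ~= {gb:.0f}G for a 32G host")
```

As the summary notes, the right value is workload-specific; this only gives a starting point to refine with the monitoring tools mentioned above.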
Practical Performance: Understand and improve the performance of your applica... (Chris Bailey)
This session discusses how you can maximize the performance of your application deployment with tools that are native to your server platform as well as cross-platform Java analysis and monitoring tools. The session begins with systematic steps you can take to locate a performance problem in a complex system and moves on to analysis you can do to understand the root cause of the problem. The picture is completed by consideration of the tools and techniques available to monitor application performance in normal operation so that you can catch performance issues before they build up into serious problems.
Presented at JavaOne 2012
Video available from Parleys.com:
https://www.parleys.com/talk/the-hidden-world-your-java-application-what-its-really-doing
This document discusses YARN high availability (HA) features. It describes the YARN architecture and how the ResourceManager is a single point of failure. It then covers how YARN HA implements an active-standby ResourceManager pair with shared state storage to enable failover. The document provides details on state persistence, automatic election of the active ResourceManager, fencing to prevent split-brain scenarios, and client-side failover transparency.
This document discusses using caching to accelerate ColdFusion applications. It provides an overview of caching concepts and implementations in ColdFusion, including Ehcache, query caching, ORM caching, template caching, object caching, and distributed caching. Specific caching strategies and configurations are demonstrated.
Using Snap Clone with Enterprise Manager 12c (Pete Sharman)
This document discusses Oracle Enterprise Manager Snap Clone, which allows instant cloning of large databases while significantly reducing storage costs. It outlines the current challenges with database refresh processes and storage costs for development and test environments. The presentation then demonstrates how Enterprise Manager's Snap Clone feature addresses these challenges by enabling thin clones of databases across different storage solutions in a completely automated, self-service manner. It also provides security, governance, and comprehensive APIs for management.
Juniper Networks provides WX/WXC platforms to accelerate enterprise applications over the WAN. The platforms compress, cache, and accelerate applications to improve performance. This allows organizations to consolidate servers, simplify administration, and provide instant response times to users while reducing costs, increasing productivity and ensuring regulatory compliance. Over 1,400 customers use the WX/WXC platforms to achieve these business and IT objectives.
The document discusses an approach to addressing the "right to be forgotten" requirement of the GDPR using an integrated solution with Alfresco, computer vision, and natural language processing. The solution includes a GDPR Watchdog subsystem that uses machine learning models to analyze content for personal data. It can detect information in images using computer vision and text using NLP. The subsystem exposes a GDPR service and integrates with Alfresco through a webscript and repository action. A demonstration of the solution is provided.
This document discusses how a quiz application achieved high performance and scalability. It started with 30,000 concurrent users and 9 million page views per hour. To optimize, the developers analyzed logs, added indexing, eager loading, caching, bulk writes, master-slave replication, and load testing. They switched from Mongrel to Ebb web servers, seeing a 40% performance gain. Monitoring also revealed browser incompatibilities causing crashes, solved by switching from Nginx to Lighttpd. In total, optimizations led to a 25x performance gain and the ability to handle 5000 simultaneous users.
This document summarizes an event introducing Oracle Transportation Management (OTM) in the Cloud. It provides background on OTM and the speakers, and outlines the discussion topics which included an OTM overview, Oracle Cloud overview, Inspirage's OTM Cloud solution, timeline and costs, and a question and answer session. Key benefits of OTM highlighted are reducing freight costs through optimization and analytics. Moving OTM to the Cloud offers benefits like faster implementation, lower upfront costs, and no need for on-site IT resources but requires addressing customizations and integration approaches.
2013 OTM EU SIG: Integrating SAP with OTM Presentation (MavenWire)
The document discusses integrating SAP with OTM. It provides background on SAP and OTM history, highlights key design considerations, and common challenges. The real challenge is to fully understand the end-to-end supply chain business process and requirements to define necessary system support. Clarifying business ownership of supply chain segments and addressing pressures on the IT landscape are also important.
OTM Value for International Logistics including Ocean Vessel Transport (MavenWire)
This paper outlines the strategic value of OTM (Oracle Transportation Management) in the less harmonized international and European supply chain industry. It presents real solutions and real benefits from implemented OTM case studies in Europe, highlighting some of the additional considerations when designing OTM solutions for international logistics.
Presented by Barry Hayes at MavenWire.
Macsteel Service Centers USA is a leading metals processor and distributor with over 30 locations in North America. They implemented Oracle EBS R12 and Oracle Transportation Management (OTM) to standardize processes, replace legacy systems, and gain efficiencies. Some key benefits seen include better management of shipping costs and carrier rates/contracts. However, challenges included a steep learning curve for their user base and additional labor needs. Next steps include further automating processes and expanding the use of OTM and analytics capabilities.
Overview of our rateManagerASP Electronic Freight Rate Management Application, used by shippers and 3PLs to accurately calculate LTL/TL freight charges between points in Canada, the US, and Mexico.
2013 OTM EU SIG: evolv applications Data Management (MavenWire)
This document discusses the history of Oracle Transportation Management (OTM) implementation processes in Europe and outlines best practices for data management and user access management. It describes how early OTM implementations relied on individual efforts which led to inconsistencies. As the user base grew, common tools and processes were developed but still varied between projects. The document advocates defining standardized practices to improve consistency, supportability and efficiency across implementations. It provides recommendations for best practices in loading reference data, managing data changes over time, and provisioning user access roles and privileges in a centralized manner.
Turning the best product into the best solution requires the best people. An overview of customers across the world, how they are using OTM and the value that our global team brings them.
Traditional TMS has not ventured into the world of more specialist logistics solutions such as air, rail networks and postal services. However, more and more we are seeing OTM being delivered in these industries across Europe. In this presentation we will demonstrate, with use cases, how OTM is being applied in these complex, previously off-limits industries.
International Logistics & Warehouse Management (Thomas Tanel)
This presentation is designed to take a quick but astute look at international logistics and warehouse management, both in terms of today's global supply chain and the demand flow management process, so you know how to make the most of them strategically. You've probably heard something about these topics. You may even be somewhat familiar with them. But how much do you really know about their strategic importance?
In an international logistics and warehouse management system, cost-to-cost "trade-offs" available through systems analysis are easy to identify. One example is using premium transportation for small, time-phased purchased lots to reduce inventory investment and lower safety stock. Another might be using a distribution center for freight consolidation or cross-docking to improve customer service levels and avoid material handling inefficiencies. Yet another might be the use of a blanket agreement (with a rolling forecast) with your supplier. By aligning supplier capacity to your customer schedules and your inventory goals, you gain pipeline visibility through automated order tracking and alerts in addition to lowering costs and raising customer service levels. The overall goal of a fully integrated logistics approach is to realize the maximum benefit from trade-offs among basic functional activities such as warehousing.
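The premium-transportation trade-off described above can be made concrete with a small worked example. All figures below are illustrative assumptions for demonstration, not data from the presentation:

```python
# Illustrative cost trade-off: premium freight on small, frequent lots
# versus standard freight on large lots with higher safety stock.
# All dollar figures and lot counts are hypothetical assumptions.

def annual_logistics_cost(freight_cost_per_lot, lots_per_year,
                          avg_inventory_units, holding_cost_per_unit):
    """Total annual cost = transportation spend + inventory holding cost."""
    transport = freight_cost_per_lot * lots_per_year
    holding = avg_inventory_units * holding_cost_per_unit
    return transport + holding

# Option A: standard freight, 4 large lots/year, high safety stock
standard = annual_logistics_cost(2_000, 4, 5_000, 8)   # 8,000 + 40,000

# Option B: premium freight, 24 small lots/year, low safety stock
premium = annual_logistics_cost(900, 24, 1_000, 8)     # 21,600 + 8,000

print(f"standard: {standard}, premium: {premium}")
# Despite the higher freight spend, Option B wins on total cost here.
```

The point of the systems analysis is exactly this: neither freight cost nor inventory cost alone identifies the cheaper option; only the combined total does.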
Traditional Logistics and Warehousing channels are indeed changing. As organizations move from mass production and mass distribution to lean manufacturing, postponement, and mass customization, creative approaches are needed in the management of logistics and warehousing. The challenge is always present, because different customers may demand different levels of service. Demand often cannot be forecasted, especially if one must deliver customized products or services exactly where the customer needs them on a global scale at multiple locations.
Businesses today must understand that they are competing on the basis of time more than on any other factor. The rigors of international logistics require that you take action to meet your customers' demand for faster, more frequent, and more reliable deliveries. Your suppliers need to meet increasingly precise inbound schedules. Tomorrow's customers are more likely to be in another country or on another continent than across town, in another state, or in another province. In addition, diverse countries use different formats for weights and other units of measure, and many countries and localities have different licensing requirements and charge different duties, value-added taxes (VAT), and fees, all of which amounts to a major content-management challenge for your global trade and logistics IT systems.
This document discusses logistics management strategies and their formulation and implementation. It covers linking a firm's strategy to its logistics strategy, setting logistics goals and making decisions, analyzing logistics networks, formulating logistics strategies including different channel strategies, and implementing and measuring performance of logistics strategies. Key aspects covered include aligning business and logistics strategies, common logistics challenges, and key performance indicators for evaluating service and inventory management.
source: http://www.sfbayacm.org/?p=1394
The specifics of a cloud’s computing architecture may have an impact on application design. This is particularly important in Infrastructure as a Service (IaaS) cloud environments.
This presentation analyzes aspects of the Amazon EC2 IaaS cloud environment that differ from a traditional datacenter and introduces general best practices for ensuring data privacy, storage persistence, and reliable DBMS backup. Best practices for application robustness and scalability on demand are reviewed and are especially significant in leveraging the full potential of an IaaS cloud. The need for a cloud application management and configuration system is briefly reviewed and two alternate approaches to cloud application management are described (RightScale and Kaavo).
The 2014 AWS Enterprise Summit - TCO and Cost Optimization (Amazon Web Services)
Optimizing Total Cost of Ownership for AWS discusses how to compare the total cost of running infrastructure on AWS versus on-premises. It provides examples of how InfoSpace was able to significantly reduce their costs and improve performance by migrating services to AWS. Key points include comparing the full costs of on-premises infrastructure versus variable AWS pricing, optimizing AWS usage over time, and InfoSpace's results of 31-87% reductions in costs and improved response times.
Cloud architecture and deployment: The Kognitio checklist, Nigel Sanctuary, K... (CloudOps Summit)
CloudOps Summit 2012, Frankfurt, 20.9.2012 Track 2 - Build and Run
by Nigel Sanctuary, VP Propositions at Kognitio (www.kognitio.com)
http://cloudops.de/sprecher/#nigelsanctuary
Find the video of this talk at http://youtu.be/wQrHQNOMlKc
An insider view of some of the innovations that help make the AWS cloud unique. We will show examples of innovative service offerings and will continue to discuss data center, power, and networking innovations used across the AWS platform. Join this session and walk away with a deeper understanding of the underlying innovations powering the cloud.
2011 State of the Cloud: A Year's Worth of Innovation in 30 Minutes - Jinesh... (Amazon Web Services)
In this keynote talk, Jinesh Varia discusses all the new features and services that AWS released in 2011, along with AWS growth and innovation with customers and partners.
The speaker notes contain the links to the blog posts of announcements.
MetaCDN: Enabling High Performance, Low Cost Content Storage and Delivery via... (James Broberg)
My talk on MetaCDN for the Cloudslam 2009 virtual conference.
Many 'Cloud Storage' providers have launched in the last two years, providing internet-accessible data storage and delivery in several continents, backed by rigorous Service Level Agreements (SLAs) guaranteeing specific performance and uptime targets. The facilities offered by these providers are leveraged by developers via provider-specific Web Service APIs. For content creators, these providers have emerged as a genuine alternative to dedicated Content Delivery Networks (CDNs) for global file storage and delivery, as they are significantly cheaper, have comparable performance and carry no ongoing contract obligations. As a result, the idea of utilising Storage Clouds as a 'poor man's' CDN is very enticing. However, many of these 'Cloud Storage' providers are merely basic storage services, and do not offer the capabilities of a fully-featured CDN such as intelligent replication, failover, load redirection and load balancing. Furthermore, they can be difficult to use for non-developers, as each service is best utilised via unique web services or programmer APIs. In this presentation, we describe the design, architecture, implementation and user-experience of MetaCDN, a system that integrates these 'Cloud Storage' providers into a unified CDN service that provides high performance, low cost, geographically distributed content storage and delivery for content creators. MetaCDN harnesses the power of 'Cloud Storage' for novices and seasoned users alike, offering an easy-to-use web portal and a sophisticated Web Service API.
Cost is often the conversation starter when customers think about moving to the cloud. AWS helps lower costs for customers through its “pay only for what you use” pricing model, frequent price drops, and pricing model choice to support variable & stable workloads. In this session, you will learn about the financial considerations of owning and operating a traditional data center or managed hosting provider versus utilizing AWS. We will detail our TCO methodology and showcase cost comparisons for some common customer use-cases. We’ll also cover a few AWS cost optimization areas, including Spot and Reserved Instances, EC2 Auto Scaling, and consolidated billing.
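The Reserved-versus-On-Demand choice mentioned above comes down to a break-even utilization calculation. A minimal sketch, using illustrative prices rather than actual AWS rates:

```python
# Sketch: break-even utilization between On-Demand and Reserved pricing.
# The hourly rates and upfront fee below are illustrative assumptions,
# not current AWS prices.

HOURS_PER_YEAR = 8760

def on_demand_cost(hourly_rate, hours_used):
    """Pay only for hours actually used."""
    return hourly_rate * hours_used

def reserved_cost(upfront, reserved_hourly_rate, hours_used):
    """Upfront fee is paid regardless of utilization."""
    return upfront + reserved_hourly_rate * hours_used

def break_even_hours(od_rate, upfront, ri_rate):
    """Hours per year above which the Reserved Instance is cheaper."""
    return upfront / (od_rate - ri_rate)

od_rate, upfront, ri_rate = 0.10, 300.0, 0.04
threshold = break_even_hours(od_rate, upfront, ri_rate)
print(f"break-even at {threshold:.0f} hours "
      f"({threshold / HOURS_PER_YEAR:.0%} annual utilization)")
```

With these assumed numbers the Reserved Instance pays off above roughly 57% utilization, which is why stable workloads favor reservations and spiky workloads favor On-Demand or Spot.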
This document provides an overview and introduction to Amazon Web Services (AWS) by Jeff Barr, an evangelist for AWS. The summary includes:
1) Barr introduces AWS and discusses its goals of showing what others are doing with cloud computing, alerting the audience to possibilities, and starting conversations about cloud computing.
2) AWS provides scalable computing resources like servers, storage, databases, and more via web services that can be accessed on-demand using a pay-as-you-go model. This solves problems around managing infrastructure and reduces costs.
3) Barr highlights some key AWS services including Elastic Compute Cloud (EC2) for virtual servers, Simple Storage Service (S3) for online storage
Cloud Architectures - Jinesh Varia - GrepTheWeb (jineshvaria)
- Cloud computing platforms like Amazon Web Services allow companies to focus on innovation rather than infrastructure maintenance by providing scalable, pay-as-you-go cloud services.
- Amazon's cloud services like EC2, S3, and SQS were used to build GrepTheWeb, a distributed text search service that can quickly search very large datasets by distributing work across elastic compute resources.
- GrepTheWeb coordinates distributed processing using SQS, stores input files in S3, runs jobs on EC2 instances, and stores results in SimpleDB to provide fast, scalable text searches without having to manage physical infrastructure.
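The queue-based coordination pattern GrepTheWeb uses can be sketched locally. Below, Python's standard-library queue stands in for SQS and a plain dict stands in for the S3/SimpleDB stores; this is an illustration of the pattern, not GrepTheWeb's actual code:

```python
# Local sketch of the SQS-style coordination pattern: a queue decouples
# job submission from a pool of workers, the same way GrepTheWeb uses
# SQS to hand work to EC2 instances. queue.Queue stands in for SQS;
# the results dict stands in for the S3/SimpleDB result store.

import queue
import threading

jobs = queue.Queue()          # stand-in for an SQS queue
results = {}                  # stand-in for the results store
lock = threading.Lock()

def worker():
    while True:
        doc_id, text = jobs.get()            # "receive message"
        if doc_id is None:                   # poison pill: shut down
            jobs.task_done()
            return
        hits = text.lower().count("cloud")   # the "grep" step
        with lock:
            results[doc_id] = hits           # "write result"
        jobs.task_done()                     # "delete message"

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

docs = {"a": "Cloud cloud on-demand", "b": "no match here"}
for doc_id, text in docs.items():
    jobs.put((doc_id, text))                 # "send message"
for _ in threads:
    jobs.put((None, None))
jobs.join()
print(results)   # doc "a" -> 2 hits, doc "b" -> 0 hits
```

The key design property is the same as in the AWS version: producers and consumers never talk directly, so the worker pool can be scaled up or down without changing the submission side.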
This document discusses 4K media workflows on AWS. It introduces the concept of a "content lake" where all digital content is stored in Amazon S3 regardless of format or resolution. The content lake provides durable, scalable storage that can be accessed from anywhere. Content in the lake can be processed using auto-scaling compute resources like EC2 and then delivered to users. This infrastructure allows for cost-effective ingestion, processing, management and delivery of 4K and other high resolution content in the cloud.
(DAT303) Oracle on AWS and Amazon RDS: Secure, Fast, and Scalable (Amazon Web Services)
AWS and Amazon RDS provide advanced features and architectures that enable graceful migration, high performance, elastic scaling, and high availability for Oracle database workloads. Learn best practices for realizing the benefits of the cloud while reducing costs, by running Oracle on AWS in a variety of single- and multi-instance topologies. This session teaches you to take advantage of features unique to AWS and Amazon RDS to free your databases from the confines of the conventional data center.
The document discusses exploring cloud computing to reduce costs and improve performance for computing resources. It proposes a hybrid scheduler that would allocate additional machines from Amazon EC2's cloud services when local CPU usage exceeds a threshold, and deallocate them when usage decreases. This could help pay only for resources that are used, improve peak performance, and reduce costs compared to maintaining idle hardware. Key questions to answer include determining accurate cost models and the software to use on virtual machines.
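The threshold-driven allocate/deallocate policy described above can be sketched in a few lines. The water marks and one-node-per-tick policy below are illustrative assumptions, not the document's actual scheduler:

```python
# Sketch of a threshold-based hybrid scheduler: allocate cloud machines
# when local CPU usage crosses a high-water mark, release them when it
# falls back below a low-water mark. Thresholds and the one-node-per-tick
# policy are illustrative assumptions.

HIGH_WATER = 0.80   # allocate above 80% local CPU usage
LOW_WATER = 0.40    # deallocate below 40%

def schedule_step(cpu_usage, cloud_nodes):
    """Return the new cloud node count for one scheduling tick."""
    if cpu_usage > HIGH_WATER:
        return cloud_nodes + 1          # burst into EC2
    if cpu_usage < LOW_WATER and cloud_nodes > 0:
        return cloud_nodes - 1          # release paid-for capacity
    return cloud_nodes                  # hold steady

# Simulate a load spike followed by a quiet period.
nodes = 0
for usage in [0.55, 0.85, 0.92, 0.88, 0.60, 0.30, 0.25]:
    nodes = schedule_step(usage, nodes)
print(nodes)  # spike adds 3 nodes, quiet period releases 2 -> 1
```

Using separate high and low water marks (hysteresis) avoids thrashing, i.e. repeatedly paying EC2 startup cost when usage hovers near a single threshold.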
- CloudStack is an open source cloud computing platform that was donated to the Apache Software Foundation in 2012. It provides infrastructure as a service and supports various hypervisors and physical hardware.
- CloudStack has a scalable architecture designed to support thousands of hosts and VMs across multiple availability zones. It provides rich networking and storage capabilities.
- CloudStack can support both traditional server virtualization workloads as well as "Amazon-style" workloads with software defined networks and object storage.
- The CloudStack community is growing rapidly and encourages participation through mailing lists, IRC, forums and meetup groups.
1) AWS provides a range of application and infrastructure services including compute, storage, database, and networking.
2) Amazon Redshift is a fast, powerful petabyte-scale data warehouse service that is delivered as a managed service.
3) Jaspersoft integrates seamlessly with Amazon Redshift through automatic discovery of the PostgreSQL SQL driver and can provide business intelligence capabilities in the cloud for less than $2 per hour.
This document discusses different architectural approaches available to startups deploying workloads on AWS. It summarizes virtual machine-based n-tier architectures, container-based architectures using ECS, and serverless architectures using Lambda. It also discusses how these architectures impact cost, performance, reliability and other factors. The document recommends letting development teams choose the right tools for their needs and adopting a microservices approach to scale complexity over time.
This document provides an overview of AWS Cloud services and tools for building scalable and highly available cloud infrastructure. It discusses compute, storage, database, messaging/notification services, and automation/orchestration tools. It also covers availability zones, elasticity, identity and access management, and networking/connectivity options like VPC and VPN. The document aims to help readers understand how to architect for scale and redundancy using AWS building blocks.
O'Reilly Webcast: Architecting Applications For The Cloud (O'Reilly Media)
This presentation analyzes aspects of the Amazon EC2 IaaS cloud environment that differ from a traditional data center and introduces general best practices for ensuring data privacy, storage persistence, and reliable DBMS backup. Presented by Jorge Noa, CTO of Hyperstratus
Prepare your IT Infrastructure for Thanksgiving (Harish Ganesan)
Prepare your IT infrastructure for the Thanksgiving and holiday season: taking e-commerce to the cloud.
Retail E-commerce Landscape – Intro, Intro to AWS
Why consider AWS for E-commerce, Amazon Auto Scaling Demo
The document provides an overview of MavenWire's LogisticsWired solution. LogisticsWired is a pre-configured Oracle Transportation Management environment that supports industry standard logistics flows. It offers rapid deployment, best-in-class hosting, and a proven support infrastructure. The solution details section describes the modules, users, workflows, integration capabilities, and support services provided. Solution flows show order management, planning, execution, tracking and freight settlement processes.
Having the ability to analyze why a particular process in OTM did not output the desired results dramatically increases the value of your OTM team and their overall productivity. Understanding the detailed content provided within Explanations, Logs, and Diagnostics will allow your users to become super users of their own domains.
Designing Highly-Available Architectures for OTM (MavenWire)
The document discusses designing highly available architectures for OTM applications. It begins by emphasizing the importance of understanding business requirements and budget constraints when designing redundancy. It then outlines some real-world risks like hardware and application failures. The presentation provides an overview of traditional HA solutions and emerging virtualization technologies. It also includes a cheat sheet on options for scaling and clustering the web, application, and database tiers based on service level agreements.
MavenWire is a global provider of logistics consulting services and hosting solutions. They specialize in Oracle Transportation Management and related technologies. MavenWire aims to become a trusted advisor to their clients by offering customized solutions and helping clients succeed. Their experienced consultants and proven project management methodology help deliver streamlined solutions on time and on budget.
Virtualizing OTM - Real World Experiences and Pitfalls (MavenWire)
Virtualization is here to stay. Between the many competing technologies (VMWare, Xen, Oracle VM, Cloud Computing, etc) and the proven benefits (cost reductions, improved management, DR and HA) – virtualization is on the "Must Do" list for most IT organizations.
However, questions remain. How does this affect my OTM deployment? Will it run? Will performance be affected? Will it be supported? Which technologies are best to base my architecture on? What pitfalls are out there that I can avoid? What are the best practices for deployment?
These and many other questions will be covered as Chris Plough shares many of the lessons learned while testing and deploying OTM with multiple clients and within MavenWire's Hosting Architecture.
Presented by Chris Plough at MavenWire.
Integrating EBS And OTM - Process Flows And Avoiding Pitfalls.pdf (MavenWire)
The presentation will include an overview of how the two systems integrate at a high level. We will then delve deeper and describe and diagram the flows between EBS and OTM, for the following processes: Sales Orders, Purchasing, Payables, Rates and Ship Methods. With that complete, we will discuss lessons learned from the projects, including the relative maturity of this integration offering, data mapping issues (particularly around units of measure and delivery dates), synchronization of shared data (including locations, carriers and service levels) and understanding several key terminology differences between the two products. Finally, we will end with a Q and A session, so that attendees can get answers to related questions or delve deeper into particular segments for greater detail.
Presented by Chris Plough at MavenWire.
Designing OTM for a Multi-Customer Environment (MavenWire)
Averitt Express and MavenWire will walk you through the methodology used to design OTM for a 3PL. A TMS solution poses a unique challenge for 3PLs: how to build a scalable solution that is easy to implement and support. We look at unique customer requirements, business goals, and overall ease of use to ensure the solution is reusable for customer onboarding.
Presented by Samuel Levin at MavenWire.
OTM DELIVERED: How Business Process Outsourcing and Preconfigured Solutions... (MavenWire)
How to leverage BPO (Business Process Outsourcing) to reduce your OTM (Oracle Transportation Management) implementation costs and focus on your core competencies.
Presented by Samuel Levin at MavenWire.
The Right Collaboration, Leveraging Outsourcing Services to Focus on Core Co... (MavenWire)
Exel and MavenWire discuss lessons learned over 9 years while outsourcing the support for key applications, including OTM (Oracle Transportation Management).
Presented by Samuel Levin (MavenWire).
How to leverage the OTM (Oracle Transportation Management) FTI module to improve your logistics analytics.
Presented by Samuel Levin (MavenWire) during the 2008 OTM SIG Conference.
OTM - Coming Soon to Midmarket Companies Near You! (MavenWire)
MavenWire, Oracle and Styline Logistics discuss the "Mid-Market Challenge" and how companies can benefit from OTM (Oracle Transportation Management), due to lower barriers to entry. Presented by Samuel Levin at MavenWire.
MavenWire provides Global Trade Management (GTM) services to help companies comply with increasingly restrictive global trade regulations as sourcing from low-cost countries increases. MavenWire has expanded its skills to assist with implementing Oracle's GTM module within a single, integrated global trade compliance solution. MavenWire offers a multi-tenant, on-demand Oracle GTM service to help reduce the time, effort and cost of operating an Oracle GTM instance through outsourcing business processes.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, backed by an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Enhancing adoption of Open Source Libraries: a case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Building RAG with self-deployed Milvus vector database and Snowpark Container... (Zilliz)
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
2. The Cloud
The Cloud is a set of services and technologies that delivers real-time and on-demand computing resources
Software as a Service (SaaS) delivers pre-configured applications, usually through web browsers
Platform as a Service (PaaS) delivers a solution stack (like LAMP) tailored to certain application types
Infrastructure as a Service (IaaS) delivers complete server and network infrastructure on demand, hosted by a cloud provider
3. Cloud Providers
Amazon AWS
  Most popular and largest provider
  PaaS and IaaS solutions
  Large number of Cloud datacenters and services
Rackspace
  Offers Windows and Linux Cloud servers
  Hybrid cloud model allows for half-cloud, half-physical infrastructures
Microsoft
  Windows Azure – runs Windows and Linux
  Has IaaS and PaaS offerings
Many other providers, including leading commodity hardware manufacturers
4. Benefits to the Cloud
Costs
  No capital expenses, pay as you go
  Scale on demand
Ease of maintenance, simplified infrastructure
Agility in responding to business needs
  Instances dedicated to UAT, new projects, and patch/upgrade testing created on demand
  Scripted deployments for fast server creation and application installation
  New projects can have server assets in place in hours versus weeks or months
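The "scripted deployments" point above can be sketched in code. This is a minimal, hypothetical example of building the launch parameters for one tier of an OTM stack; in practice the resulting dict would be handed to an AWS SDK call such as boto3's run_instances. The AMI ID, instance types, and tag names here are made up for illustration.

```python
# Hypothetical sketch: assemble EC2 launch parameters for a scripted
# OTM deployment. All identifiers below are illustrative placeholders.
def build_launch_request(role, ami="ami-12345678", instance_type="m1.large", count=1):
    """Return launch parameters for one tier of an OTM stack."""
    return {
        "ImageId": ami,            # pre-baked image with OTM already installed
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": role}],
        }],
    }

# Two app/web servers and one database server, matching the
# 2 application/web + 1 database configuration used in the benchmarks.
requests = [
    build_launch_request("otm-appweb", count=2),
    build_launch_request("otm-db", instance_type="m1.xlarge"),
]
```

Because the whole stack is described in code, tearing it down and recreating it for UAT or patch testing becomes a repeatable operation rather than a hardware project.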
5. Downsides to the Cloud
Downtime Risks
  Amazon AWS major outages:
    April 2011 – 36 hours, US East
    August 2011 – 1 hour, US East
    June 2012 – 6 hours and 14 hours, US East
Audit and regulatory requirements
  Major cloud providers have SSAE16/SAS70 reports and are PCI-DSS Level 1 certified
Application expertise
High-throughput, high-performance cloud offerings are not as fast as traditional hardware
Cloud server configurations are limited
13. Costs - Yearly On Demand
[Chart: yearly on-demand cost (Year 1, Year 2, Year 3; y-axis $0–$70,000) for Mid-tier hardware, Upper-tier hardware, Amazon AWS Large, Amazon AWS High IO, and Rackspace Large]
Ongoing costs for hardware include power, colocation, and bandwidth
2 application/web and 1 database configuration
14. Costs – AWS Reserved
[Chart: yearly cost with AWS reserved instances (Year 1, Year 2, Year 3; y-axis $0–$70,000) for Mid-tier hardware, Upper-tier hardware, Amazon AWS Large, Amazon AWS High IO, and Rackspace Large]
Ongoing costs for hardware include power, colocation, and bandwidth
2 application/web and 1 database configuration
15. Costs – 3 Year TCO On Demand
[Chart: cumulative 3-year total cost of ownership at on-demand pricing (Year 1 through Year 3 stacked; y-axis $0–$140,000) for Mid-tier hardware, Upper-tier hardware, Amazon AWS Large, Amazon AWS High IO, and Rackspace Large]
Ongoing costs for hardware include power, colocation, and bandwidth
2 application/web and 1 database configuration
16. Costs – 3 Year TCO with AWS Reserved
[Chart: cumulative 3-year total cost of ownership with AWS reserved instances (Year 1 through Year 3 stacked; y-axis $0–$120,000) for Mid-tier hardware, Upper-tier hardware, Amazon AWS Large, Amazon AWS High IO, and Rackspace Large]
Ongoing costs for hardware include power, colocation, and bandwidth
2 application/web and 1 database configuration
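The arithmetic behind an on-demand versus reserved TCO comparison is simple enough to sketch. The rates below are made-up placeholders, not the figures from the charts; the point is only the shape of the calculation: reserved pricing trades an upfront payment per term for a lower hourly rate.

```python
# Illustrative 3-year TCO comparison: on-demand vs reserved instance pricing.
# All prices are hypothetical placeholders, not the benchmarked figures.
HOURS_PER_YEAR = 8760

def on_demand_tco(hourly_rate, years=3):
    """Total cost of running one instance 24x7 at on-demand rates."""
    return hourly_rate * HOURS_PER_YEAR * years

def reserved_tco(upfront_per_term, hourly_rate, years=3, term_years=1):
    """Upfront reservation fees per term plus the discounted hourly rate."""
    terms = -(-years // term_years)  # ceiling division: reservations purchased
    return upfront_per_term * terms + hourly_rate * HOURS_PER_YEAR * years

# Example: $0.32/hr on demand vs a 1-year reservation at $1,000 upfront
# plus $0.13/hr (hypothetical numbers).
od = on_demand_tco(0.32)      # 0.32 * 8760 * 3  = 8,409.60
rs = reserved_tco(1000, 0.13) # 3,000 + 0.13 * 8760 * 3 = 6,416.40
```

The same model also shows why powering instances off during off-hours (billed hours drop) favors the cloud for development and test systems.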
17. Benchmarks
DaCapo – Simulates single-threaded loads similar to bulk plans
VolanoMark – Simulates multi-threaded, high subsystem I/O loads similar to agent processing; also simulates web traffic
HammerOra – TPC-C style Oracle OLTP database benchmark, 70% read / 30% write
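At their core, benchmarks of the DaCapo variety reduce to timing a fixed workload and reporting elapsed milliseconds (lower is better). A minimal sketch of that measurement pattern, with a stand-in CPU-bound loop in place of the real Java workloads:

```python
# Minimal sketch of single-threaded benchmark timing (DaCapo-style:
# elapsed milliseconds, lower is better). The workload is a stand-in.
import time

def time_workload_ms(fn, *args):
    """Run fn once and return elapsed wall-clock time in milliseconds."""
    start = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - start) * 1000.0

def bulk_plan_like(n):
    # stand-in CPU-bound loop; the real benchmarks run Java workloads
    return sum(i * i for i in range(n))

elapsed = time_workload_ms(bulk_plan_like, 100_000)
```

Real harnesses add warm-up iterations and repeat runs to average out JIT and cache effects, which is why the charts that follow report average scores.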
18. DaCapo
[Chart: average DaCapo time in milliseconds (y-axis 0–25,000) for Rackspace Large, Amazon AWS Large, Amazon AWS High IO, Mid-tier hardware, and Upper-tier hardware]
Lower score is better
19. VolanoMark
[Chart: VolanoMark connections per second (y-axis 0–450,000), showing average score and average per core, for Rackspace Large, Amazon AWS Large, Amazon AWS High IO, Mid-tier hardware, and Upper-tier hardware]
Higher score is better
20. HammerOra
[Chart: HammerOra transactions per minute (y-axis 0–350,000) versus virtual users (1–36) for Amazon AWS Large, Amazon AWS High IO, Rackspace Large, Mid-tier hardware, and Upper-tier hardware]
Higher score is better
21. Cost vs Performance - DaCapo
[Chart: DaCapo score per dollar (y-axis 0–30) under 1 Year On Demand and 1 Year Reserved pricing for Rackspace Large, Amazon AWS Large, Amazon AWS High IO, Mid-tier hardware, and Upper-tier hardware]
Higher score is better
22. Cost vs Performance - VolanoMark
[Chart: VolanoMark score per dollar (y-axis 0–70) under On Demand and AWS Reserved pricing for Rackspace Large, Amazon AWS Large, Amazon AWS High IO, Mid-tier hardware, and Upper-tier hardware]
Higher score is better
23. Cost vs Performance - HammerOra
[Chart: HammerOra score per dollar (y-axis 0–14) under On Demand and AWS Reserved pricing for Rackspace Large, Amazon AWS Large, Amazon AWS High IO, Mid-tier hardware, and Upper-tier hardware]
Higher score is better
24. Overall Cost vs Performance
AWS reserved instances make current Cloud cost/performance exceed hardware in some cases
Database performance per dollar spent is higher on hardware
  Storage I/O is the leading factor
Cost vs performance plays to the Cloud’s existing strengths – horizontally scaled applications
RDBMS and other applications that benefit from vertical scale are currently less cost-efficient in the cloud
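The "cost vs performance" comparisons above amount to normalizing a benchmark score by what the platform costs to run. A small sketch of that normalization, using invented scores and costs purely for illustration:

```python
# Normalize benchmark scores by annual cost to rank platforms on
# performance per dollar. Scores and costs are hypothetical placeholders.
def perf_per_dollar(score, annual_cost, lower_is_better=False):
    """Benchmark value per $1,000 of annual cost; invert when lower is better."""
    value = (1.0 / score) if lower_is_better else score
    return value / (annual_cost / 1000.0)

platforms = {
    # name: (VolanoMark-style score, annual cost in dollars) -- made up
    "cloud-large": (200_000, 12_000),
    "mid-tier-hw": (300_000, 25_000),
}
ranked = sorted(platforms, key=lambda p: perf_per_dollar(*platforms[p]), reverse=True)
```

Note how a platform with the lower absolute score can still win on this metric when its annual cost is low enough, which is exactly the effect reserved-instance pricing produces in the charts.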
25. Disaster Recovery on AWS
[Diagram: a Route 53 hosted zone with active DNS routes Internet traffic to Elastic Load Balancing in two Availability Zones, US East 1a and US West 1a; each zone contains a security group with two OTM App/Web EC2 instances and a Database EC2 instance backed by an EBS volume, with database mirroring/replication between the zones]
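The DNS half of this architecture is typically expressed as failover record sets: a primary record for the active region and a secondary that takes over when health checks fail. A hedged sketch of what those records might look like, expressed as Python dicts; the hosted zone, domain names, and load balancer endpoints are invented for illustration.

```python
# Hypothetical sketch of Route 53 failover records for the two-region
# layout in the diagram. All names and endpoints are placeholders.
failover_records = [
    {
        "Name": "otm.example.com.",
        "Type": "CNAME",
        "SetIdentifier": "us-east-primary",
        "Failover": "PRIMARY",    # served while the East health check passes
        "TTL": 60,                # short TTL so failover propagates quickly
        "ResourceRecords": [{"Value": "elb-us-east.example.com."}],
    },
    {
        "Name": "otm.example.com.",
        "Type": "CNAME",
        "SetIdentifier": "us-west-secondary",
        "Failover": "SECONDARY",  # served only when the primary is unhealthy
        "TTL": 60,
        "ResourceRecords": [{"Value": "elb-us-west.example.com."}],
    },
]
```

Keeping the West-coast database replicated but the app servers unlaunched is what makes this pattern cheap: only the running instances bill by the hour.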
27. OTM Benefits and Usage
Development and Test Systems – Agility of the cloud without the need for high performance
  Lower costs if reserved AWS servers are used and if servers are powered off off-hours
Upgrade testing – Test new OTM versions without impacting existing development cycles
Disaster Recovery
  Running versus non-running billing for AWS
  DR system is potentially lower throughput
  Replicate databases; do not launch app servers until needed
Training – Train users on cloud systems to avoid impacting development cycles
28. OTM Benefits and Usage Con’t
Vendor certification/POC – Validate new
OTM related products with lower startup costs
High Performance Production – Cloud
performance still lags behind hardware
Support – Cloud technology is still new, bugs
and support difficulties may exist
Amazon and Oracle joint support
agreement for EC2 applications
Amazon RDS and Oracle
Future licensing
Troubleshooting Opacity – Opacity to
upstream issues can make troubleshooting
OTM performance more difficult
29. Future Cloud Growth
AWS prices are reduced 2-3 times per year, on average
Amazon High I/O instance is the benchmark for near-future Cloud performance
Google has joined the Cloud market with Google Cloud Platform
Growth trends through 2010 show a faster decrease in Cloud resource prices than corresponding hardware, excluding storage
Future generations of enterprise applications will be tailored to cloud deployments, both public and private