This is part 1 of the Azure Storage series, where we build our understanding of Azure Storage, learn about the storage data services and the types of Azure Storage, and, last but not least, touch on securing storage accounts.
In the second part, we will continue with our demo on creating and utilizing Azure Storage.
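Ahead of the part 2 demo, here is a minimal sketch of what creating and using Blob storage looks like in Python with the azure-storage-blob SDK. The connection string and container name are placeholders, not values from this series.

# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Placeholder connection string; in practice read it from configuration.
conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("demo-container")
if not container.exists():
    container.create_container()

# Upload a small text blob and read it back.
container.upload_blob(name="hello.txt", data=b"hello from part 1", overwrite=True)
print(container.download_blob("hello.txt").readall())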
hbaseconasia2019 - HBCK2: Concepts, trends, and recipes for fixing issues in HBase (Michael Stack)
Wellington Chevreuil of Cloudera
Track 1: Internals
https://open.mi.com/conference/hbasecon-asia-2019
THE COMMUNITY EVENT FOR APACHE HBASE™
July 20th, 2019 - Sheraton Hotel, Beijing, China
https://hbase.apache.org/hbaseconasia-2019/
More and more organizations are moving their ETL workloads to a Hadoop-based ELT grid architecture. Hadoop's inherent capabilities, especially its ability to do late binding, address some of the key challenges with traditional ETL platforms. In this presentation, attendees will learn the key factors, considerations and lessons around ETL for Hadoop. Topics include the pros and cons of different extract and load strategies, the best ways to batch data, buffering and compression considerations, leveraging HCatalog, data transformation, integration with existing data transformations, the advantages of different ways of exchanging data, and leveraging Hadoop as a data integration layer. This is an extremely popular presentation around ETL and Hadoop.
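One way to make the late-binding point concrete is with Spark SQL on a Hadoop cluster: land the raw extract untouched, and bind a schema only when the data is consumed. This is a sketch under assumed paths and table names, not code from the talk.

# pip install pyspark
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt-sketch").enableHiveSupport().getOrCreate()

# Extract/Load: ingest the raw extract as-is; no schema is imposed yet.
raw = spark.read.text("/landing/orders/2019-07-20/")

# Transform (late binding): parse and type the data only at consumption time.
orders = (raw
          .select(F.split("value", "\t").alias("cols"))
          .select(F.col("cols")[0].alias("order_id"),
                  F.col("cols")[1].cast("double").alias("amount")))
orders.write.mode("overwrite").saveAsTable("analytics.orders")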
A brief overview of caching mechanisms in a web application. Taking a look at the different layers of caching and how to utilize them in a PHP code base. We also compare Redis and MemCached discussing their advantages and disadvantages.
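The talk is PHP-centric, but the first caching layer it describes, an application-level object cache, is language-agnostic. Here is a cache-aside sketch against Redis in Python; the database helper is a hypothetical stand-in.

# pip install redis
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def load_user_from_db(user_id: int) -> dict:
    # Stand-in for a real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    """Cache-aside: try the cache first, fall back to the source of truth."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)
    r.setex(key, 300, json.dumps(user))  # expire after 5 minutes
    return user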
The Rise Of Event Streaming – Why Apache Kafka Changes Everything (Kai Wähner)
Business digitalization trends like microservices, the Internet of Things or Machine Learning are driving the need to process events at a whole new scale, speed and efficiency. Traditional solutions like ETL/data integration or messaging are not built to serve these needs.
Today, the open source project Apache Kafka® is being used by thousands of companies, including over 60% of the Fortune 100, to power and innovate their businesses by focusing their data strategies around event-driven architectures leveraging event streaming. We will discuss the market and technology changes that have given rise to Kafka and to event streaming, and we will introduce the audience to the key aspects of building an event streaming platform with Kafka. Examples of production use cases from the automotive, manufacturing and transportation sectors will showcase the power of event streaming.
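As a minimal illustration of the event-streaming model, here is a kafka-python producer publishing a business event; the broker, topic and payload are assumptions for the sketch.

# pip install kafka-python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",  # wait for full acknowledgement so events are not silently lost
)

# Publish a business event; downstream consumers react to it independently.
producer.send("orders", {"order_id": 42, "status": "CREATED"})
producer.flush()

Consumers subscribe to the topic on their own schedule, which is what decouples producers from the downstream services.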
We introduce the GridGain solution, the global leader in In-Memory Computing for top speed and mission-critical workloads. GridGain is an in-memory computing platform solution that accelerates and scales data-intensive applications through distributed computing. www.all-dt4u.com/
[Key features]
1. Speed: up to a million times faster by loading data into memory
2. Scalability: distributed and parallel processing reduces the overall execution time of business logic
3. Digital transformation: a memory-centric architecture enables fast access and processing
4. Centralized management: real-time cluster monitoring and alerts when specific events occur
5. Integration optimized for the customer: integrates with databases such as RDBMS, NoSQL and Hadoop
"GridGain is an In-Memory Computing platform based on open-source Apache Ignite, delivering performance improvements for web-scale applications, SaaS and cloud computing, mobile and IoT backends, real-time data processing, big data analytics, and more."
Building Cloud-Native App Series - Part 3 of 11
Microservices Architecture Series
AWS Kinesis Data Streams
AWS Kinesis Firehose
AWS Kinesis Data Analytics
Apache Flink - Analytics
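For the Kinesis services listed above, here is a minimal boto3 sketch of writing one record to a Kinesis Data Stream; the stream name and region are placeholders.

# pip install boto3
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"device_id": "sensor-7", "temperature": 21.4}
kinesis.put_record(
    StreamName="example-stream",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["device_id"],  # records with the same key keep ordering
)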
High-speed Database Throughput Using Apache Arrow Flight SQL (ScyllaDB)
Flight SQL is a revolutionary new open database protocol designed for modern architectures. Key features in Flight SQL include a columnar-oriented design and native support for parallel processing of data partitions. This talk will go over how these new features can push SQL query throughput beyond existing standards such as ODBC.
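Flight SQL itself is usually consumed through a driver (for example ADBC or JDBC), but the parallelism it inherits from Arrow Flight can be sketched with PyArrow's lower-level Flight client. The address and command below are assumptions, and a real server defines which commands it accepts.

# pip install pyarrow
import pyarrow.flight as flight

client = flight.connect("grpc://localhost:8815")
info = client.get_flight_info(
    flight.FlightDescriptor.for_command(b"SELECT * FROM trades"))

# The result is split into endpoints; each data partition can be fetched
# in parallel (shown sequentially, against the same server, for brevity),
# which is where the throughput win over row-oriented ODBC comes from.
tables = [client.do_get(endpoint.ticket).read_all()
          for endpoint in info.endpoints]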
This presentation discusses Windows Azure Blob Storage, covering the Windows Azure Storage overview, basic Blob Storage concepts, advanced Blob Storage topics, and finally the tip of the day.
When Kafka Meets the Scaling and Reliability Needs of the World's Largest Retailer (confluent)
Synopsis: How often have you been told that you must stream data at scale, and process and analyze it for real-time decision making without losing a single event? That the scale in question is several billion events, where a single lost message can cost tens of thousands of dollars? How often have you had to support real-time decision making, analytics, ML and auditing on data in motion, let alone data at rest?
Real-Time Inventory and Replenishment System: we needed a system for real-time tracking of items moving through the supply chain, which is vital for quicker replenishment decisions and other real-time use cases. To meet this requirement we built an event-driven system that tracks inventory information and creates plans and orders in near real time, with events at the heart of the system. On this journey to meet scale with reliability, we learned many lessons about running Kafka at scale and about optimized ways to produce to and consume from Kafka. We look forward to meeting you, discussing our journey in detail, and connecting you with solutions to some of your problems.
Key takeaways:
- Leveraging Kafka and the related ecosystem on OpenStack and Azure
- Saving cost at scale with Kafka and the related ecosystem
- Scaling Kafka Streams and Kafka connector applications
- Tuning Kafka Streams to improve performance
- How to stabilize Kafka connectors operating at scale
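The talk's code is not public, but one standard way to avoid losing events, as the synopsis demands, is to commit offsets only after processing succeeds. A kafka-python sketch with assumed topic, group and broker names:

# pip install kafka-python
from kafka import KafkaConsumer

def handle_inventory_event(payload: bytes) -> None:
    # Stand-in for the real replenishment logic.
    print("processing", payload)

consumer = KafkaConsumer(
    "inventory-events",
    bootstrap_servers="localhost:9092",
    group_id="replenishment-planner",
    enable_auto_commit=False,      # we commit manually, below
    auto_offset_reset="earliest",
)

for message in consumer:
    handle_inventory_event(message.value)
    consumer.commit()  # commit only after the event is fully processed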
Introduction to Apache Flink - Fast and reliable big data processing (Till Rohrmann)
This presentation introduces Apache Flink, a massively parallel data processing engine currently undergoing incubation at the Apache Software Foundation. Flink's programming primitives are presented, and it is shown how easily a distributed PageRank algorithm can be implemented with Flink. Intriguing features such as dedicated memory management, Hadoop compatibility, streaming and automatic optimisation make it a unique system in the world of Big Data processing.
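The PageRank that the talk distributes with Flink is, at its core, a power iteration. A plain-Python toy version of the algorithm itself (not Flink code), with a three-node example graph:

def pagerank(links, damping=0.85, iterations=20):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, outgoing in links.items():
            share = damping * rank[node] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))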
Building scalable, highly available applications that perform well is not an easy task. These features cannot simply be "bolted" onto an existing application; they have to be architected into it. Unfortunately, the things we need to do to achieve them are often in conflict with each other, and finding the right balance is crucial. In this session we will discuss why scaling web applications is difficult and will look at some of the solutions we have come up with in the past to deal with the issues involved. We will then look at how in-memory data grids can make our jobs easier by providing a solid architectural foundation to build our applications on top of. If you are new to in-memory data grids, you are guaranteed to leave the presentation eager to learn more. However, even if you are already using one, you will likely walk out with a few ideas on how to improve the performance and scalability of your applications.
Coherence Overview - OFM Canberra July 2014 (Joelith)
Slides from the July Oracle Middleware Forum held in Canberra, Australia. Provides an overview of Coherence. Check out our blog for more details: ofmcanberra.wordpress.com
Caching in Java - A review of different caching vendors (Oracle Coherence, Apache Cassandra, Infinispan, Ehcache/Terracotta, etc.) and the limitations imposed by the underlying Java platform.
Presented at RedHat Summit 2010, Boston
Speakers: SriSatish Ambati, Performance Engineering
Manik Surtani, Infinispan Lead
Presentation details from RH Summit:
How to Stop Worrying & Start Caching in Java
SriSatish Ambati — Performance & Partner Engineer, Azul Systems, Inc.
Manik Surtani — Principal Software Engineer, Red Hat
Application data caching has come of age as distributed and large cache clusters are now common. The next generation of applications that depend on efficient caching has come into being and data and cache size explosion has set in.
In this session, Azul Systems’ SriSatish Ambati and Red Hat’s Manik Surtani will survey performance characteristics of different cache algorithms, their implementations (e.g., implementing a 200 GB data cache), and how well they work in practical JVM deployments. In each scenario, they will present patterns of architecture that scale, and demonstrate where read and write performance stands in the context of increasing cache sizes and concurrency.
Throughout this discussion, they will recognize several villains, including heap fragmentation, long-lived objects, multi-VM communication, socket handlers, and queue managers. SriSatish and Manik will take a fun-filled “whodunit” approach to portray the roles played by each villain in killing cache performance.
http://www.redhat.com/promo/summit/2010/sessions/jboss.html
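The session is about JVM caches, but the eviction policies it benchmarks are simple to state. Here is a toy LRU cache in Python, as a reference point for what such algorithms do:

from collections import OrderedDict

class LRUCache:
    """A toy LRU cache: evicts the least recently used entry when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)   # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
print(cache.get("b"))  # None: "b" was the least recently used, so it was evicted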
Oracle Coherence Strategy and Roadmap (OpenWorld, September 2014) (jeckels)
The Oracle Coherence strategy and roadmap session from OpenWorld 2014. Includes details on the 12.1.3 Cloud Application Foundation release (including WebLogic integration), a road map for the 12.2.1 release, and notable features including JCache (JSR-107) support, Memcached adapters, federated caching, recoverable caching, security enhancements, multitenancy support and more. As usual, all items and statements contained herein are subject to change based on slide 3 of this presentation.
[OracleCode SF] In-memory analytics with Apache Spark and Hazelcast (Viktor Gamov)
Apache Spark is a distributed computation framework optimized to work in-memory, and heavily influenced by concepts from functional programming languages.
Hazelcast - an open source in-memory data grid capable of amazing feats of scale - provides a wide range of distributed computing primitives, including the ExecutorService, M/R and Aggregations frameworks.
The nature of data exploration and analysis requires data scientists be able to ask questions that weren't planned to be asked—and get an answer fast!
In this talk, Viktor will explore Spark and see how it works together with Hazelcast to provide a robust in-memory open-source big data analytics solution!
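The talk pairs Spark and Hazelcast on the JVM; the Hazelcast side also has an official Python client, which is enough to sketch the distributed-map idea. This assumes a member running locally with default settings.

# pip install hazelcast-python-client
import hazelcast

client = hazelcast.HazelcastClient()

# A distributed map: entries are partitioned across the cluster's memory.
scores = client.get_map("scores").blocking()
scores.put("alice", 98)
print(scores.get("alice"))

client.shutdown()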
Real time Analytics with Apache Kafka and Apache Spark (Rahul Jain)
A presentation cum workshop on real-time analytics with Apache Kafka and Apache Spark. Apache Kafka is a distributed publish-subscribe messaging system, while Spark Streaming brings Spark's language-integrated API to stream processing, allowing you to write streaming applications quickly and easily; it supports both Java and Scala. In this workshop we explore Apache Kafka, Zookeeper and Spark with a web click-streaming example using Spark Streaming. A clickstream is the recording of the parts of the screen a computer user clicks on while web browsing.
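The workshop uses the classic DStream-based Spark Streaming API in Java/Scala; the same clickstream counting can be sketched in today's PySpark Structured Streaming. The topic and broker are assumptions, and the Kafka source needs the spark-sql-kafka package on the classpath (e.g. via --packages).

# pip install pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickstream").getOrCreate()

# Subscribe to the clickstream topic.
clicks = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "clicks")
          .load())

# Count events per key and print running totals to the console.
counts = clicks.groupBy("key").count()
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()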
Learn about Positioning IBM Flex System 16 Gb Fibre Channel Fabric for Storage-Intensive Enterprise Workloads. This IBM Redpaper discusses server performance imbalance that can be found in typical application environments and how to address this issue with the 16 Gb Fibre Channel technology to provide required levels of performance and availability for the storage-intensive applications. For more information on Pure Systems, visit http://ibm.co/18vDnp6.
Dynamo: Amazon's Highly Available Key-value Store (jacksnathalie)
Dynamo: Amazon’s Highly Available Key-value Store
Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall and Werner Vogels
Amazon.com
ABSTRACT
Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously, and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems.
This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon’s core services use to provide an “always-on” experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use.
Categories and Subject Descriptors
D.4.2 [Operating Systems]: Storage Management; D.4.5 [Operating Systems]: Reliability; D.4.2 [Operating Systems]: Performance
General Terms
Algorithms, Management, Measurement, Performance, Design, Reliability.
1. INTRODUCTION
Amazon runs a world-wide e-commerce platform that serves tens of millions of customers at peak times using tens of thousands of servers located in many data centers around the world. There are strict operational requirements on Amazon’s platform in terms of performance, reliability and efficiency, and to support continuous growth the platform needs to be highly scalable. Reliability is one of the most important requirements because even the slightest outage has significant financial consequences and impacts customer trust. In addition, to support continuous growth, the platform needs to be highly scalable.
One of the lessons our organization has learned from operating Amazon’s platform is that the reliability and scalability of a system is dependent on how its application state is managed. Amazon uses a highly decentralized, loosely coupled, service-oriented architecture consisting of hundreds of services. In this environment there is a particular need for storage technologies that are always available. For example, customers should be able to view and add items to their shopping cart even if disks are failing, network routes are flapping, or data centers are being destroyed by tornados. Therefore, the service responsible for managing shopping carts requires that it can always write to and read from its data store, and ...
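The "object versioning and application-assisted conflict resolution" in the abstract is realized in Dynamo with vector clocks. A toy sketch of the idea follows; the node names and conflict example are illustrative, not the paper's notation.

def increment(clock: dict, node: str) -> dict:
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def descends(a: dict, b: dict) -> bool:
    """True if version a is a successor of (or equal to) version b."""
    return all(a.get(n, 0) >= c for n, c in b.items())

v1 = increment({}, "node_a")   # write handled by node_a
v2 = increment(v1, "node_b")   # later write handled by node_b
v3 = increment(v1, "node_c")   # concurrent write handled by node_c

print(descends(v2, v1))                     # True: v2 supersedes v1
print(descends(v2, v3), descends(v3, v2))   # False, False: a conflict, so the
# application must reconcile both versions (e.g. merge both cart contents).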
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliability (Prolifics)
Abstract: Recent projects have stressed the "need for speed" while handling large amounts of data, with near zero downtime. An analysis of multiple environments has identified optimizations and architectures that improve both performance and reliability. The session covers data gathering and analysis, discussing everything from the network (multiple NICs, nearby catalogs, high speed Ethernet), to the latest features of extreme scale. Performance analysis helps pinpoint where time is spent (bottlenecks) and we discuss optimization techniques (MQ tuning, IIB performance best practices) as well as helpful IBM support pacs. Log Analysis pinpoints system stress points (e.g. CPU starvation) and steps on the path to near zero downtime.
The Future of Mainframe Data is in the Cloud (Precisely)
As organizations make the move to cloud platforms to support modern analytics and applications, they often need to move more than their on-premises data warehouses and lakes. Organizations often have legacy mainframe systems that run their mission-critical and revenue-generating applications, and that should be included in modern cloud projects.
Yet it can be difficult to replicate mainframe data onto cloud platforms to make it a part of your data-driven initiatives, due to the data’s complex format. When moving data to the cloud it needs to maintain accuracy, consistency, and context to be considered trusted data. At the same time, the data needs to be available in real time to cloud-based data management and analytics applications.
Join this session for a discussion with experts from Precisely and AWS about how to bring your data to applications and services on the cloud. Topics include:
- The importance of mainframe data for organizations
- How to migrate/replicate mainframe data to cloud platforms
- How to ensure trusted data on cloud platforms
These are slides of the session that Jim Liddle gave at GigaSpaces Cloud Crowd event in the UK on 11th November 2009.
These slides concentrate on the GigaSpaces VMware integration and the value proposition for using GigaSpaces for private clouds.
Caching for Microservices Architectures: Session II - Caching Patterns (VMware Tanzu)
In the first webinar of the series we covered the importance of caching in microservice-based application architectures—in addition to improving performance it also aids in making content available from legacy systems, promotes loose coupling and team autonomy, and provides air gaps that can limit failures from cascading through a system.
To reap these benefits, though, the right caching patterns must be employed. In this webinar, we will examine various caching patterns and shed light on how they deliver the capabilities needed by our microservices. What about rapidly changing data, and concurrent updates to data? What impact do these and other factors have to various use cases and patterns?
Understanding data access patterns, covered in this webinar, will help you make the right decisions for each use case. Beyond the simplest of use cases, caching can be tricky business—join us for this webinar to see how best to use them.
Jagdish Mirani, Cornelia Davis, Michael Stolz, Pulkit Chandra, Pivotal
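One pattern a session like this covers is write-through, where an update goes to the system of record and the cache together so readers never see a stale entry after an update. A sketch with Redis as the cache; the persistence helper and key names are hypothetical.

# pip install redis
import json
import redis

r = redis.Redis()

def write_to_db(product: dict) -> None:
    # Stand-in for the real persistence call.
    pass

def save_product(product: dict) -> None:
    """Write-through: update the system of record and the cache together."""
    write_to_db(product)
    r.set(f"product:{product['id']}", json.dumps(product))

save_product({"id": 7, "name": "widget", "price": 9.99})
print(r.get("product:7"))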
Oracle GoldenGate is the leading real-time data integration software provider in the industry - customers include 3 of the top 5 commercial banks, 3 of the top 3 busiest ATM networks, and 4 of the top 5 telecommunications providers.
Oracle GoldenGate moves transactional data in real-time across heterogeneous database, hardware and operating systems with minimal impact. The software platform captures, routes, and delivers data in real time, enabling organizations to maintain continuous uptime for critical applications during planned and unplanned outages.
Additionally, it moves data from transaction processing environments to read-only reporting databases and analytical applications for accurate, timely reporting and improved business intelligence for the enterprise.
While many enterprises consider cloud computing the savior of their data strategy, there is a process they should follow when looking to leverage database-as-a-service. This includes understanding their own data requirements, selecting the right cloud computing candidate, and then planning for the migration and operations. A huge number of issues and obstacles will inevitably arise, but fortunately best practices are emerging. This presentation will take you through the process of moving data to cloud computing providers.
Learn about recent advances in MongoDB in the area of In-Memory Computing (Apache Spark Integration, In-memory Storage Engine), and how these advances can enable you to build a new breed of applications, and enhance your Enterprise Data Architecture.
Companies in today's challenging economy need to do more with less... see how the combination of Cisco, NetApp and VMware can help you in your data center.
AI as a Service: the future has never been so simple with cloud (Emiliano Pecis)
Do you remember the futuristic movie Minority Report, based on the famous novel by Philip K. Dick, where a system automatically recognizes customers entering the store? The future is now, thanks to the AI services offered by Amazon. In this talk we will show you how easy it is to create an automatic person-recognition system by simply invoking services in the cloud and using a cheap device like a Raspberry Pi. We will show how to set up a complete serverless architecture with AWS Lambda to execute your code and interact with AWS services like S3, Rekognition and Polly. We will show you how simple it is to integrate the AWS world with a Raspberry Pi (or any other IoT device) through the AWS API Gateway service. And we will show you all of this with a real demo on a real device, because the future is now!
For this demo we used a simple Raspberry Pi 3 Model B with the Camera Board V2 and Bright Pi board v1, on which we installed the Raspbian Linux distro. With Python and the OpenCV framework we wrote a simple script that uses the camera to detect faces, capture frames, and send them to the cloud. A REST API, built with AWS API Gateway, is the entry point to the AWS world. An AWS Lambda function then handles the request and implements all the logic to save the photo to an S3 bucket and call AWS Rekognition to analyse and detect the face. The function tries to associate metadata saved in a DynamoDB table with the data returned from the AWS Rekognition service. With this data the Lambda builds a simple phrase containing the name and surname of the recognized person and a distinctive characteristic detected in the photo. Finally, a call to AWS Polly, a text-to-speech service, turns the text into an mp3. The audio file is returned to the REST API call made by the Raspberry Pi, and the device closes the loop by playing the file.
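A condensed, hypothetical sketch of the Rekognition and Polly calls at the heart of that Lambda. The collection, bucket, key, region and voice are placeholders, and ExternalImageId is only present if it was supplied when the faces were indexed.

# pip install boto3
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")
polly = boto3.client("polly", region_name="eu-west-1")

# Match the captured frame against a pre-built face collection.
match = rekognition.search_faces_by_image(
    CollectionId="store-customers",
    Image={"S3Object": {"Bucket": "demo-frames", "Name": "frame-001.jpg"}},
)
face_id = match["FaceMatches"][0]["Face"]["ExternalImageId"]

# Turn the greeting into speech, as the demo does with Polly.
speech = polly.synthesize_speech(
    Text=f"Welcome back, {face_id}!",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("greeting.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())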
Leadership. Lessons learned as a parent (Emiliano Pecis)
Slides from the talk given in Rome at the 2017 Agile Lean Conference, where I describe the close relationship between leadership and the difficult role of being a parent.
Where SOA and monolithic EARs have failed: it's not simple to have your apps scale automagically without a very complex architecture. We're going to show the pros and cons of so-called Cloud-Native Applications based on microservices, CaaS, DevOps, continuous delivery....
Neo, wake up! SOA has you! :)
A complete academic overview of Web Oriented Architecture (WOA), including a detailed comparison between WOA and SOA, what REST is and why it is so important to WOA, and a REST-to-SOAP proxy based on Oracle Service Bus. Which products are WOA lovers searching for? This presentation has some "sponsored slides" from Oracle.
12. Oracle Coherence: Data Grid Uses
Caching: applications request data from the Data Grid rather than from backend data sources.
Analytics: applications ask the Data Grid questions, from simple queries to advanced scenario modeling.
Transactions: the Data Grid acts as a transactional System of Record, hosting data and business logic.
Events: automated processing based on events.
33. Betfair … Bets on XTP. Architecture overview: a database tier (PL/SQL stored procedures on Oracle DB, Sun Solaris), an application and caching tier (Oracle Coherence clustered data cache on Linux, with application logic on JBoss/Linux), and a user tier (online bettors/gamblers and third-party applications connecting over the Internet).
AGENDA
Web 2.0 and Enterprise 2.0
Challenges and Solutions for Enterprise 2.0
Oracle's Strategy for Enterprise 2.0
Action Item: Organizations depending on the TP application style to support their businesses should anticipate a dramatic change in their application architecture and technology infrastructures as a consequence of greater demand in terms of scalability, performance and availability. Improvements in hardware and network speed, mature middleware platforms, and real-time-oriented application architectures have enabled the notion of the real-time enterprise (RTE). This is a technology-enabled business concept by which organizations exploit real-time access to data to run core business processes. An RTE's competitive advantage is its ability to respond faster than competitors to business events. This concept can be used to optimize business models and enable new business scenarios, such as convergent networks in telecommunications or automated trading in financial services. The RTE can be deployed by specific applications, such as “microcommerce” and “micropayment” systems, global-class business-to-consumer (B2C) applications, real-time monitoring and management, real-time fraud detection, and real-time risk management. These applications are often transactional, although different from traditional transaction processing (TP) systems in their architectures. Typically positioned at the high end of the TP spectrum in performance and scalability needs, they're usually highly business-critical and have to deal with sensitive information. Therefore, they are also characterized by high-end requirements in terms of availability, security and monitoring/management. As a consequence, the most high-end TP scenarios will be more common, and even the most extreme will enter mainstream adoption.
Today's application infrastructures face growing demands in terms of service levels, scalability and flexibility. At the same time, hardware is commoditized yet increasingly powerful and capable of meeting these challenges. To turn challenges into opportunities for "future-proofing" their environments, enterprises are "rethinking" their application infrastructures.
Data Grids provide a key data juncture between disparate applications and disparate data sources. Designed for reliability: they withstand faults and outages. Built to scale out as needed and handle load gracefully.
Data Grids are used for different purposes. These are the four most common uses:
- Caching: Coherence was the first technology to provide reliable distributed caching, and has helped many organizations alleviate data bottleneck issues and scale out the application tier.
- Analytics: enables applications to efficiently run queries across the entire data grid, supporting heavy query loads while improving the responsiveness of each query. Server failures do not impact the correctness of "in flight" queries and analytics.
- Transactions: the Data Grid provides an optimal platform for joining data and business logic, and greater business agility by moving database stored procedures into the Data Grid. Coherence reliability allows not only in-memory data processing but also the ability to commit transactions in memory. Reliability is key to conducting in-memory transactions, and Coherence provides absolute reliability: every transaction matters.
- Events: the Oracle Coherence Data Grid manages processing state, guaranteeing once-and-only-once event processing, and provides scalable management of event processing.
Distributed data management: single system image. Consensus: nodes know who is a member and the state of the cluster. Shared responsibilities: holding data, backups, diagnostics. No interruptions in the event of a server failure.
First build: objects are distributed in memory among different JVMs on different servers. An object is held in primary format in only one place in the grid, and a backup of the same data is also held in memory on a different server. From the application's perspective, it simply asks the Coherence grid for the data and Coherence fetches it; even if the primary server holding the object is unavailable, Coherence knows where to find the backup.
Second build: at the heart of Coherence is a consensus among the Coherence servers about which servers are currently participating in the grid. The logic behind this consensus is built into each Coherence server, and the consensus is maintained automatically as servers are added, go down, or are removed from the grid.
Third build.
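A toy sketch of the placement rule described in that note: each key has exactly one primary owner, and its backup lives on a different server. The hashing scheme here is illustrative, not Coherence's actual partitioning algorithm.

import hashlib

SERVERS = ["server-1", "server-2", "server-3", "server-4"]

def owners(key: str) -> tuple:
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    primary = h % len(SERVERS)
    backup = (primary + 1) % len(SERVERS)   # always a different server
    return SERVERS[primary], SERVERS[backup]

print(owners("order:1001"))  # e.g. ('server-3', 'server-4')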
IBM shop: a single shopping cart across four retail sites.
Betfair is the world's leading online betting exchange, a concept it has pioneered since 2000. Betfair processes 5 million transactions a day and more than 300 bets a second. It is a profitable and debt-free company, with annual revenue exceeding £180 million. Betfair handles a growing transactional workload (more than 500 updates and thousands of inquiry transactions per second) with 24/365 availability requirements across geographically distributed data centers (Betfair has a data center in Australia, as mandated by local laws). The initial online betting exchange system was an ASP.NET/Oracle application that was replaced in 2004 by a second-generation system (Betex), which kept the existing Oracle DB but replaced the ASP front end with a Java/JSP application (built on the JBoss open-source application server) that leveraged the Oracle Coherence (then Tangosol Coherence) distributed caching platform. The architecture proved very scalable and has supported the company's growth since its implementation. However, to meet ongoing scaling demands and enable an order-of-magnitude increase in workload, Betfair is developing the "Flywheel" third generation of its platform, which will be based on a revolutionary event-driven architecture foundation.
Customers should consider these requirements when looking at different solutions. Most solutions add reliability as an afterthought; Coherence was designed and built from the ground up with reliability in mind. A solution also needs to be simple enough for corporate developers to easily adopt and integrate into existing applications.
That is why the Oracle Enterprise 2.0 platform, combined with all the different solutions and technologies we have talked about today, really defines and forms a complete solution set designed to be adopted granularly and modularly. This gradual evolution allows the new E2.0 capabilities to leverage the infrastructure and application investments that have already been made, and thereby maximize them. In this way, the triumvirate of users, information, and systems experiences maximized interaction, efficiency and, ultimately, evolution.