This document discusses building applications for the cloud and provides best practices. It notes that deploying applications on the cloud introduces challenges related to scalability, reliability, security, and management. It recommends that applications be designed to be elastic, memory-based, and easy to operate in order to fully take advantage of the cloud. Specific steps are outlined, such as using in-memory data grids for messaging and as the system of record, and auto-scaling the web tier.
1. Building Applications for the Cloud -
Challenges & Best Practices
Jeroen Remmerswaal
Tricode Professional Services
GigaSpaces Territory Partner BeNeLux
DDHS 2010
2. Why Now?
• No large upfront investments
• Need to do more with the same or fewer resources
• Maturity of virtualization technologies
• Faster CPUs, memory, disks
3. The Challenges:
• Deploying on the cloud introduces new challenges:
– On-demand scalability
– Reliability
– Data security
– Deployment, monitoring & management
4. Seasonal Peaks
The Reality:
[Chart: monthly traffic volumes, Jan 2004 – Sep 2007, with seasonal peaks approaching 1.3 billion]
“A brokerage can lose up to $4M per 1ms of latency” - The Tabb Group
“An additional 500ms delay resulted in -20% traffic” - Google
“An additional 100ms in latency resulted in -1% sales” - Amazon
6. The Reality:
• “Every year, we take the busiest minute of the busiest hour of the busiest day and we built our systems to handle that load and we went above and beyond that.”
– Scott Gulbransen, Intuit Spokesman
9. Traditional Architectures – See the Problem?
[Diagram: Load Balancer → Web Tier → Business Tier → Messaging, each tier with its own back-up]
• Hard to install: bound to static resources (IPs, disk drives, etc.)
• Separate clustering model for each tier
• Hard to maintain
• Insecure
• Non-scalable
11. To take full advantage of the cloud, your application’s architecture needs to change
12. It needs to be elastic:
• Grow (and shrink) as needed, based on an SLA (such as work load)
• But with no downtime, self-heal on failure, without data and transaction loss
• And with a corresponding (predictable) performance improvement
13. It needs to be memory-based:
• No permanent off-premise storage
• Not bound to static resources
• Bonus: extreme performance
• Reliability achieved through memory replication
• Optionally offload data to an on/off-site persistent store
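The slides state that reliability comes from memory replication but do not show the mechanism. As a rough illustration only (plain Java with hypothetical class names, not the GigaSpaces API): every write to a primary partition is synchronously copied to an in-memory backup, so losing the primary loses neither data nor availability.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy primary/backup partition: reliability via memory replication, not disk. */
public class ReplicatedPartition {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> backup  = new HashMap<>();
    private boolean primaryAlive = true;

    public void write(String key, String value) {
        primary.put(key, value);   // write to the primary copy...
        backup.put(key, value);    // ...and synchronously replicate to the backup
    }

    /** Simulate a node crash: the primary's memory is gone. */
    public void failPrimary() {
        primaryAlive = false;
        primary.clear();
    }

    public String read(String key) {
        // Self-heal: fall back to the backup copy when the primary is gone.
        return primaryAlive ? primary.get(key) : backup.get(key);
    }
}
```

A real data grid also re-partitions and promotes backups to primaries automatically; this sketch only shows why no disk is needed for durability within the grid.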
14. It needs to be easy to operate:
• Deploying & monitoring on the cloud as simple and the same as doing it on-premises
• Process should be repeatable
• Application should be modular – update on the fly with no downtime
15. The solution: Application Level Virtualization
[Diagram: Users → Load Balancer → Web Processing Units → Business Processing Units, each unit deployed as primaries with backups]
16. GigaSpaces XAP:
• Linearly scalable and elastic via virtualization of the processing, messaging and data tiers
• Secure and ultra-fast via in-memory infrastructure
• Comprehensive cloud support for the simplest provisioning, deployment & monitoring
• Non-intrusive:
– Adopts existing programming models
– Cross-platform & language
17. Can Your Application Take the Heat?
How can your application handle the load???
Your Server
18. Can Your Application Take the Heat?
GigaSpaces XAP will
manage, monitor and scale your
application on the fly on the cloud
The Cloud
19. Some Practical Steps (in order of increasing effort and value)
Web Tier:
– Architecture: on-demand provisioning vs. static, peak-based
– Savings example: 7 machines (10 peak – 3 avg)
– Additional benefits: self-healing, basic caching, auto deployment
Remoting:
– Architecture: parallel processing vs. client-server
– Savings example: 90 machines (100 peak, 10 avg)
– Additional benefits: automatic failover, Map/Reduce, async invocation, location transparency
Messaging:
– Architecture: partitioned virtualized servers vs. central server
– Savings example: 6x machines (SBA/TBA benchmark)
– Additional benefits: fast & consistent latency (in-memory), commodity HW
IMDG as System of Record:
– Architecture: partitioned virtualized servers vs. central server
– Savings example: 6x machines (SBA/TBA benchmark)
– Additional benefits: low response time, commodity DB vs. high-end
20. Auto-Scale the Web-Tier
• If you have a standard J2EE WAR-file, deploy as-is into GigaSpaces
• Fail-over / Self-healing comes out of the box
• Add 'Auto-Scaling' for Scale-Up and Scale-Down
• Add Session-Clustering
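The slide names auto-scaling but not the scaling rule itself. A minimal sketch of SLA-driven scale-up and scale-down in plain Java (class and method names are hypothetical, not the GigaSpaces API): the SLA here is simply a maximum load per web-tier instance.

```java
/** Toy SLA-driven scaler: size the web tier to the observed load. */
public class AutoScaler {
    private final int maxRequestsPerInstance;  // the SLA: load one instance may carry
    private int instances;

    public AutoScaler(int maxRequestsPerInstance, int initialInstances) {
        this.maxRequestsPerInstance = maxRequestsPerInstance;
        this.instances = initialInstances;
    }

    /** Recompute the instance count for the observed load (scales up and down). */
    public int rebalance(int currentRequestsPerSecond) {
        // Ceiling division: enough instances that none exceeds the SLA.
        int needed = (currentRequestsPerSecond + maxRequestsPerInstance - 1)
                     / maxRequestsPerInstance;
        instances = Math.max(1, needed);       // never scale below one instance
        return instances;
    }
}
```

In a real deployment the platform would also provision the new instances and rebalance sessions; this only shows the decision step driven by the SLA.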
21. Remoting on the Cloud
• Parallelize work over the cloud
– Move from J2EE Remoting to GigaSpaces remoting
– Giving you fault-tolerant, scalable, distributed remoting
– Parallelize instead of serialize
– Map/Reduce / Master/Worker / JSR223
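The Map/Reduce / master/worker idea above can be sketched without any grid product, using only java.util.concurrent (this is a stand-in for GigaSpaces remoting, not its API): the master scatters chunks to workers, each worker computes a partial result in parallel, and the master reduces the partials.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Toy master/worker: parallelize instead of serialize, then reduce. */
public class MapReduceSketch {
    /** Sum of squares over all chunks, one worker task per chunk. */
    public static long sumOfSquares(List<int[]> chunks) {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int[] chunk : chunks) {
                tasks.add(() -> {                      // map step: per-chunk partial
                    long s = 0;
                    for (int v : chunk) s += (long) v * v;
                    return s;
                });
            }
            long total = 0;
            for (Future<Long> f : workers.invokeAll(tasks)) {
                total += f.get();                      // reduce step: combine partials
            }
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            workers.shutdown();
        }
    }
}
```

With grid remoting the workers would be remote partitions rather than local threads, which is what adds the fault tolerance and scale-out the slide mentions.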
22. Messaging on the Cloud
• Use the IMDG as the fault-tolerant messaging bus
– In-memory reliability
– Can be as simple as re-wiring your JMS provider to use GigaSpaces
– Use GigaSpaces Event Containers instead of MDBs
• Benchmarks on the same hardware show 6+ times more throughput
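The event-container pattern above can be pictured with a plain in-memory queue (a stand-in only; the real thing is a replicated space, and these class names are hypothetical): producers write messages into memory, and a polling consumer destructively takes and processes them, like an MDB without a disk-based broker.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Toy in-memory messaging bus: producers write, a polling consumer takes. */
public class InMemoryBus {
    private final BlockingQueue<String> space = new LinkedBlockingQueue<>();

    public void publish(String msg) {
        space.add(msg);                        // write into the in-memory space
    }

    /** One polling-consumer step: take the next message and process it. */
    public String pollAndProcess() {
        String msg = space.poll();             // destructive read; null if nothing pending
        return msg == null ? null : "processed:" + msg;
    }
}
```

In the grid version the queue is partitioned and replicated (in-memory reliability), which is where the throughput gain over a classic broker comes from.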
23. IMDG over the Cloud
• Fulfill your business transactions in memory
– Have (most of) the data available in memory
– Use the database because you want to, not because you have to
– Use the database asynchronously but reliably
• Benchmarks on the same hardware show 6-100 times more throughput
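"Use the database asynchronously" is the write-behind pattern. A minimal sketch in plain Java (hypothetical class names, and the flush is shown as an explicit call rather than a background thread): the transaction completes against memory, and the database write is queued and applied later, off the critical path.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Toy write-behind store: memory is the system of record, the DB trails behind. */
public class WriteBehindStore {
    private final Map<String, String> memory = new HashMap<>();    // system of record
    private final List<String> pendingDbWrites = new ArrayList<>();
    private final Map<String, String> database = new HashMap<>();  // stand-in for the DB

    public void put(String key, String value) {
        memory.put(key, value);      // the transaction completes here, in memory
        pendingDbWrites.add(key);    // the DB write is queued, not awaited
    }

    public String get(String key) {
        return memory.get(key);      // reads never touch the database
    }

    /** Runs later, off the critical path (in practice on a background thread or timer). */
    public void flushToDatabase() {
        for (String key : pendingDbWrites) database.put(key, memory.get(key));
        pendingDbWrites.clear();
    }

    public String readFromDatabase(String key) {
        return database.get(key);
    }
}
```

Because the in-memory copy is itself replicated in a grid, queued writes survive a node failure, which is what makes the asynchronous path "reliable" rather than lossy.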
24. Typical use-cases and implementations
• Handling peak-loads (by cloud-bursting)
• Pay-per-use
• Always-On / High Availability
• High Performance / High Throughput
• Cost-reduction / Better utilization of hardware
• Large scale testing
• Disaster Recovery
25. Typical use-cases and implementations
• Telco
– Deploying discrete stand-alone services in the Cloud
– Deploying a carrier-grade VOIP service to the Cloud
• Global Media
– Using the Cloud to process events for an innovative new TV programme
– Cloud makes the concept cost-effective
• Financial Services
– Using the Cloud for a trading exchange
– Cloud lowers the barrier to entry and makes the proposition possible
• Online Gaming
– Using the Cloud for testing and scaling
– Able to test large-scale user support early and easily on the cloud, hard otherwise
26. GigaSpaces Home Page:
http://www.gigaspaces.com
http://www.gigaspaces.nl
http://twitter.com/gigaspaces
http://twitter.com/gigaspaces
Tricode Home Page:
http://www.tricode.nl
http://twitter.com/tricode