Learn how to choose your e-commerce infrastructure and how to forecast its TCO with a simple model, including explanations of how public, private, and hybrid cloud computing work.
This meeting we'll host a discussion on Google Cloud Platform and Amazon Web Services to highlight the similarities and differences between the platforms. If you have questions about how the platforms compare, this is the meeting to attend!
Speaker: Utpal Thakrar - Product Manager, RightScale
Interest in private and hybrid clouds is exploding, and implementations are becoming real. In this talk, RightScale’s product manager in charge of private clouds will cover key considerations for designing and building private and hybrid clouds. You will learn how to tie strategy to decisions covering use cases, workloads, hardware, software, and implementation.
GumGum relies heavily on Cassandra for storing different kinds of metadata. Currently GumGum reaches 1 billion unique visitors per month using 3 Cassandra datacenters in Amazon Web Services spread across the globe.
This presentation will detail how we scaled out from one local Cassandra datacenter to a multi-datacenter Cassandra cluster and all the problems we encountered and choices we made while implementing it.
How did we architect multi-region Cassandra in AWS? What were our experiences in implementing multi-datacenter Cassandra? How did we achieve low latency with multi-region Cassandra and the Datastax Driver? What are the different Cassandra use cases at GumGum? How did we integrate our Cassandra with Spark?
This talk will walk through the journey of Cassandra at Netflix. It will go into 3-4 specific use cases where Cassandra stands out from the rest of the data stores in use at Netflix, helping bring a great viewing experience to customers globally. Roopa will go into the specifics of the data models being used, where Cassandra's strengths shine, and the places where they learned things the hard way. Roopa will then share some of the best practices and the self-service platform used for Cassandra to cater to their developers' needs.
Maginatics @ SDC 2013: Architecting An Enterprise Storage Platform Using Obje... - Maginatics
How did Maginatics build a strongly consistent and secure distributed file system? Niraj Tolia, Chief Architect at Maginatics, gave this presentation on the design of MagFS at the Storage Developer Conference on September 16, 2013.
For more information about MagFS—The File System for the Cloud, visit maginatics.com or contact us directly at info@maginatics.com.
How Russia’s #1 Internet Provider Gets High Performance at Low Cost
What if you could store huge amounts of data using traditional low-cost HDDs while still maintaining low single-digit millisecond latencies? That’s just what they did at Mail.Ru, the largest email and Internet service provider in Russia.
Building a Real-time Streaming ETL Framework Using ksqlDB and NoSQL - ScyllaDB
Event streaming applications unlock new benefits by combining various data feeds. However, getting actionable insights in a timely fashion has remained a challenge, as the data has been siloed in disparate systems. ksqlDB solves this by providing an interactive SQL interface that can seamlessly combine and transform data from various sources.
In this webinar, we will show how streaming queries of high throughput NoSQL systems can derive insights from various push/pull queries via ksqlDB's User-Defined Functions, Aggregate Functions and Table Functions.
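ksqlDB's own interface is SQL, but the core idea behind the aggregations described above — a continuously maintained, windowed aggregate over an event stream — can be sketched in a few lines of plain Python. This is an illustrative sketch only; the function and variable names are hypothetical and not part of the ksqlDB API:

```python
from collections import defaultdict

def windowed_counts(events, window_sec):
    """Group events into tumbling windows and count per key --
    roughly what a windowed aggregate table does in ksqlDB."""
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event's timestamp to the start of its window.
        window_start = ts - (ts % window_sec)
        counts[(window_start, key)] += 1
    return dict(counts)

# (timestamp_seconds, key) pairs, e.g. page views by user
events = [(1, "alice"), (2, "bob"), (3, "alice"), (61, "alice")]
print(windowed_counts(events, 60))
```

In a real deployment the events would arrive from a Kafka topic and the result would be a continuously updated table rather than a one-shot dictionary.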
ScyllaDB recently announced Project Alternator, a new open source project that will enable Amazon DynamoDB users to easily migrate to an open-source database that runs anywhere — on most cloud platforms, on-premises, on bare-metal, virtual machines or via Kubernetes — all while preserving their investments in their existing application code.
Project Alternator will help DynamoDB users achieve much better and more reliable performance, reduce database costs by 80% - 90%, support large items (10s of MBs) and large partitions (multiple GBs), control the number of replicas, balance cost vs. redundancy, and much more.
Join ScyllaDB founders Avi Kivity and Dor Laor and lead engineer Nadav Har’El for a live webinar on September 25th, where they will share an overview of Project Alternator, including:
Alternator’s design implementation and goals
How to configure Alternator (ok, add alternator_port: 8000 to your scylla.yaml)
Demo how to easily run it from docker/rpm
Run several examples:
Tic-tac-toe based DynamoDB example with Alternator
How to benchmark Scylla Alternator with YCSB and considerations around it
How to run a serverless application along with Alternator
How to migrate DynamoDB data to Alternator using the Spark migrator
Discuss the current limitations of Alternator
We will also describe the different consistency models, compare the active-active vs. leader models, share the project roadmap, and answer your questions at the end.
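Per the configuration bullet above, enabling the DynamoDB-compatible API amounts to a one-line change; a minimal scylla.yaml fragment would look like this (port 8000 is the value given in the talk; everything else in the file stays as ordinary Scylla settings):

```yaml
# scylla.yaml -- enable the Alternator (DynamoDB-compatible) API
alternator_port: 8000
```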
MySQL Cluster (NDB) - Best Practices, Percona Live 2017 - Severalnines
This presentation by Johan Andersson at Percona Live 2017 in Santa Clara, California gives detailed information on all you need to know to effectively deploy and manage MySQL Cluster technology in your environment.
OpenEBS Technical Workshop - KubeCon San Diego 2019 - MayaData Inc
Know how to navigate the journey to cloud-native data management with lessons learned and best practices to help you deploy Kubernetes, storage, and data management with confidence.
Building Event Streaming Architectures on Scylla and Kafka - ScyllaDB
Event streaming architectures require high-throughput, low-latency components to consistently and smoothly transfer data between heterogeneous transactional and analytical systems. Join us and Confluent's Tim Berglund to learn how Scylla and Confluent Kafka interoperate as a foundation upon which you can build enterprise-grade, event-driven applications, plus a use case from Numberly.
Cassandra on Google Cloud Platform (Ravi Madasu, Google / Ben Lackey, DataSta...) - DataStax
During this session Ben Lackey (DataStax) and Ravi Madasu (Google) will cover best practices for quickly setting up a cluster on Google Cloud Platform (GCP) using both Google Compute Engine (GCE) and Google Container Engine (GKE) which is based on Kubernetes and Docker.
About the Speakers
Ben Lackey - Partner Architect, DataStax
I work in the Cloud Strategy group at DataStax where I concentrate on improving the integration between DataStax Enterprise and cloud platforms including Azure, GCP and Pivotal.
Ravi Madasu
Ravi Madasu is a program manager at Google, primarily focused on Google Cloud Launcher. He works closely with ISV partners to make their products and services available on the Google Cloud Platform, providing a developer-friendly deployment experience. He has 15+ years of experience working in a variety of roles, such as software engineer, project manager, and product manager. Ravi received a Master's degree in Information Systems from Northeastern University and an MBA from Carnegie Mellon University.
Netflix’s architecture involves thousands of microservices built to serve unique business needs. As this architecture grew, it became clear that the data storage and query needs were unique to each area; there is no one silver bullet which fits the data needs for all microservices. CDE (the Cloud Database Engineering team) offers polyglot persistence, which promises ideal matches between problem spaces and persistence solutions. In this meetup you will get a deep dive into the self-service platform, our solution for reliably repairing Cassandra data across different datacenters, Memcached on flash and cross-region replication, and graph database evolution at Netflix.
MongoDB Monitoring - Become a MongoDB DBA - Severalnines
This presentation was given by Art van Scheppingen at Percona Live 2017 in Santa Clara, CA and covers what you need to know to effectively monitor MongoDB.
Cisco: Cassandra adoption on Cisco UCS & OpenStack - DataStax Academy
In this talk we will address how we developed our Cassandra environments utilizing the Cisco UCS OpenStack Platform with DataStax Enterprise Edition software. In addition, we are utilizing open-source Ceph storage in our infrastructure to optimize performance and reduce costs.
RDBMS to NoSQL: Practical Advice from Successful Migrations - ScyllaDB
When and how to migrate data from SQL to NoSQL are matters of much debate. It can certainly be a daunting task, but when your SQL systems hit architectural limits or your Aurora expenses skyrocket, it’s probably time to consider the move.
See a discussion of how best to migrate data from SQL to NoSQL, and how to get heterogeneous data systems to communicate with each other effectively in real time. Get important architectural considerations, tips and tricks, and several real-world use cases.
From this webinar you will learn:
Key differences between RDBMS and NoSQL, and how to know when it’s time to migrate
How to harness the greatest strengths out of both classes of databases, SQL and NoSQL
Migration techniques proven in the field
Modeling differences between RDBMS and NoSQL
Managing releases in NoSQL vs RDBMS
Scylla features and services that help with migrating from a relational database
Cloud Databases in Research and Practice - Felix Gessert
The combination of database systems and cloud computing is extremely attractive: unlimited storage capacities, elastic scalability and as-a-Service models seem to be within reach. This talk will give an in-depth survey of existing solutions for cloud databases that have evolved in recent years and provide classification and comparison. This includes real-world systems (e.g. Azure Tables, DynamoDB and Parse) as well as research approaches (e.g. RelationalCloud and ElasTras). In practice, however, there are some unsolved problems. Network latency, scalable transactions, SLAs, multi-tenancy, abstract data modelling, elastic scalability and polyglot persistence pose daunting tasks for many scenarios. Therefore, we conclude with "Orestes", a research approach based on well-known techniques such as web caching, Bloom filters and optimistic concurrency control that demonstrates how existing cloud databases can be enhanced to suit specific applications.
Why you need benchmarks
Finding the right database solution for your use case can be an arduous journey. The database deployment touches aspects of throughput performance, latency control, high availability and data resilience.
You will need to decide on the infrastructure to use: cloud, on-premises, or a hybrid solution.
Data models also have an impact on finding the right fit for the use case. Once you establish a requirements set, the next step is to test your use case against the databases of choice.
In this workshop, we will discuss the different data points you need to collect in order to get the most realistic testing environment.
We will cover:
Data model impact on performance and latency
Client behavior related to database capabilities
Failover and high availability testing
Hardware selection and cluster configuration impact
We will show two benchmarking tools you can use to test and benchmark your clusters and identify the optimal deployment scenario for your use case.
Attend this virtual workshop if you are:
Looking to minimize the cost of your database deployment
Making a database decision based on performance and scale data
Planning to emulate your workload on a pre-production system where you can test, fail fast and learn.
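Before reaching for a full benchmarking tool, the latency side of the measurements listed above can be prototyped with nothing but the standard library. The sketch below times an arbitrary operation and reports the percentiles that usually drive a database decision; the workload lambda is a stand-in, not a real database call:

```python
import time

def measure_latencies(op, n=1000):
    """Run op() n times and return per-call latencies in milliseconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        op()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def percentile(latencies, p):
    """Nearest-rank percentile, e.g. p=99 for p99 latency."""
    ordered = sorted(latencies)
    index = min(len(ordered) - 1, int(len(ordered) * p / 100))
    return ordered[index]

if __name__ == "__main__":
    # Stand-in workload: replace the lambda with a real query
    # against the cluster under test.
    lat = measure_latencies(lambda: sum(range(1000)))
    print(f"p50={percentile(lat, 50):.3f}ms  p99={percentile(lat, 99):.3f}ms")
```

Looking at p99 rather than the mean is what surfaces the tail-latency behavior that failover and high-availability testing are meant to stress.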
IDM365 is developed for medium and large-sized businesses. Its user-centric interface allows business-critical decisions to be made right where the knowledge and information are, while keeping IT and management in control.
The IDM365 Identity and Access Management backend can connect to almost any system or application on the market and provides the flexibility to adapt to each client's business. We have developed tools which allow us to speed up the implementation process, ensuring minimum costs while maintaining maximum accuracy and control.
www.idm365.com
I presented a talk at FOSDEM on the subject of managing hybrid clouds with ManageIQ. ManageIQ is an open source platform for managing, automating, and creating cross-platform cloud services.
Mini-course presented at the FATEC São Caetano Dev Day in 2015
Sponsorship: Boolabs
Support: FATEC São Caetano, Centro Paula Souza, Government of the State of São Paulo
Python Basics:
- Variables and types (strings, numbers, booleans, None, lists, tuples, dicts, sets)
- Conditionals
- Loops
- Exceptions
- Functions (as first-class objects)
- Modules
- Classes (with multiple inheritance)
- Duck typing
Extras:
- Heroku
- Functional programming
- Further topics
Equality and Diversity in Steinkjer Agriculture - Grete Waaseth
Talk on the national pilot project "Likestilling og mangfold i Steinkjerlandbruket" (Equality and Diversity in Steinkjer Agriculture) for a resource group composed of selected farmers in Steinkjer and regional development actors, August 2011.
Social and Cultural Conditions in Community Planning - Grete Waaseth
Presentation for staff at Nord-Trøndelag University College (HiNT) to establish collaboration on developing a knowledge base for local community planning and development, September 2012.
This presentation covers:
Based on the as-a-service model,
• SaaS (Software as a Service)
• PaaS (Platform as a Service)
• IaaS (Infrastructure as a Service)
Based on the deployment or access model,
• Public Cloud
• Private Cloud
• Hybrid Cloud
For more details you can visit -
http://vibranttechnologies.co.in/salesforce-classes-in-mumbai.html
This slide deck is based on the emerging area known as cloud computing. I prepared it from my own knowledge and with the help of research papers; it is not detailed, but it is good for getting a basic idea of cloud computing.
Uses, considerations, and recommendations for AWS - Scalar Decisions
From an information session on Amazon Web Services (AWS), looking at uses, considerations, and recommendations for leveraging AWS in your organization.
Topics covered:
- AWS Services Overview
- Some ideal use cases: Disaster Recovery, Backup and Archive, Test/Dev
- Data residency and security considerations
Web Component Development Using Servlet & JSP Technologies (EE6) - Chapter 1... - WebStackAcademy
Let's take an example:
Deploy Your Application to Oracle Application Container Cloud Service
Extract the content of the employees-app.zip file in your local system.
Log in to Oracle Cloud at http://cloud.oracle.com/. Enter your account credentials in the Identity Domain, User Name, and Password fields.
In the Oracle Cloud Services dashboard, click the Action menu and select Application Container.
In the Applications list view, click Create Application and select Java EE.
In the Application section, enter a name for your application and click Browse.
On the File Upload dialog box, select the employee-app.war file located in the target directory and click Open.
Keep the default values in the Instances and Memory fields and click Create.
Wait until the application is created. The URL is enabled when the creation is completed.
Click the URL of your application.
Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.
Introduction to Google Cloud & GCCP CampaignGDSCVJTI
Topics covered:
🔴 Why Cloud?
🔴 Learn the basics of cloud.
🔴 Applications of cloud.
🔴 Introduction to the Google Cloud platform
🔴 Insights on the upcoming GCCP Campaign
** Diadem Technologies | Cloud Computing | Nasscom Workshop in Kolkata **
Diadem Technologies is a leading web hosting service provider, specialising in providing managed and customised hosted solutions for its 1500+ clients.
Similar to Cloud economics design, capacity and operational concerns
Best Crypto Marketing Ideas to Lead Your Project to SuccessIntelisync
In this comprehensive slideshow presentation, we delve into the intricacies of crypto marketing, offering invaluable insights and strategies to propel your project to success in the dynamic cryptocurrency landscape. From understanding market trends to building a robust brand identity, engaging with influencers, and analyzing performance metrics, we cover all aspects essential for effective marketing in the crypto space.
We also introduce Intelisync, our cutting-edge service designed to streamline and optimize your marketing efforts, leveraging data-driven insights and innovative strategies to drive growth and visibility for your project.
With a data-driven approach, transparent communication, and a commitment to excellence, InteliSync is your trusted partner for driving meaningful impact in the fast-paced world of Web3. Contact us today to learn more and embark on a journey to crypto marketing mastery!
Ready to elevate your Web3 project to new heights? Contact InteliSync now and unleash the full potential of your crypto venture!
Salma Karina Hayat is a Conscious Digital Transformation Leader at Kudos | Empowering SMEs via CRM & Digital Automation | Award-Winning Entrepreneur & Philanthropist | Education & Homelessness Advocate
When listening to talks about building new ventures, marketplace ideas come up very frequently. In this session we will discuss reasons why you should stay away from them :P , sharing real stories and misconceptions around them. If you still insist on going for it, however, you will at least get an idea of the important and critical strategies to optimize for success, like product, business development & marketing, and operations :)
Reflect Festival Limassol May 2024.
Michael Economou is an entrepreneur with business and technology foundations and a passion for innovation. He is working with his team to launch a new venture, Exyde, an AI-powered booking platform for Activities & Experiences, aspiring to revolutionize the way we travel and experience the world. Michael has extensive entrepreneurial experience as the co-founder of Ideas2life, AtYourService, as well as Foody, an online delivery platform and one of the most prominent ventures in Cyprus’ digital landscape, acquired by the Delivery Hero group in 2019. This journey reflects vast expertise in building and scaling marketplaces, enhancing everyday life through technology, and making a meaningful impact on local communities, which is what Michael and his team are pursuing once more with Exyde www.goExyde.com
What You're Going to Learn
- How These 4 Leaks Force You To Work Longer And Harder in order to grow your income… improve just one of these and the impact could be life changing.
- How to SHUT DOWN the revolving door of Income Stagnation… you know, where new sales come into your magazine while at the same time existing sponsors exit.
- How to transform your magazine business by fixing the 4 “DON’Ts”...
#1 LEADS Don’t Book
#2 PROSPECTS Don’t Show
#3 PROSPECTS Don’t Buy
#4 CLIENTS Don’t Stay
- How to identify which leak to fix first so you get the biggest bang for your income.
- Get actionable strategies you can use right away to improve your bookings, sales and retention.
3. Cloud is not Dropbox or iCloud. Those are Cloud Storage services.
Cloud Computing is defined as [1]:
1. On-demand self-service - customers can get compute/storage/networking resources with just an email ID and a payment option (credit card)
2. Broad network access - resources can be accessed anywhere, anytime
3. Resource pooling - access to vast amounts of shared resources
4. Rapid elasticity - scale up or down, immediately
5. Measured service - pay as you go
[1] http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
4. Cloud 101: Storage and Network Resources
(Diagram: two columns - Cloud Storage services and Cloud Network services, including streaming)
5. Cloud 101: Compute Resources
Pre-2006 [2]
- Web Hosting
- Virtual Private Servers
- Virtualized Servers
- Dedicated Server Hosting
- Colocation services
Post-2006 - same as before, plus:
- Infrastructure as a Service
- Platform as a Service
- (Software as a Service)
- Containerized IaaS
- Lambda - code execution service [3]
[2] https://en.wikipedia.org/wiki/Amazon_Web_Services#History
[3] https://aws.amazon.com/blogs/aws/run-code-cloud/
6. Cloud 101: Structure of a web application
A web application is described using layers:
● The website Code and Images are executed by a Web Server
● Code is written in a language (PHP, Java, Python or Ruby) that requires Libraries to run
● It needs a Database to save customer data
● All runs on top of an Operating System (Windows Server, Linux, Solaris)
● The OS uses Drivers to abstract the Compute, Network and Storage resource details
[Diagram: layer stack - Web Server, Website Code, Website Images, Libraries, Database, Operating System, Drivers, CPU, RAM, Network Card, Storage]
7. Web Hosting
It is simply a Web Server where we can upload our website Code and Images, and use a shared Database.
Typical cost: $3-5/month
Examples:
● 1and1.com
● godaddy.com
● ehost.com
● peer1.com
[Diagram: Web Hosting subsystem - Customers A and B each get a Web Server with their own Website Code and Images, sharing Libraries, a shared Database, and the provider's Operating System, Drivers, CPU, RAM, Network Card and Storage]
8. Virtual Private Server
It offers a dedicated Operating System that runs on a shared server with many other customers on it, isolated thanks to a special VPS software. It contains pre-installed Web Servers and Databases. Each customer manages everything above that, and the provider manages the OS and the layers below.
Typical cost: $5-10/month
Examples:
● DigitalOcean
● Softlayer VPS
● OVH VPS
[Diagram: VPS subsystem - Customers A and B each get a Web Server, Website Code and Images, Libraries and a Database on top of the shared Operating System, Drivers, CPU, RAM, Network Card and Storage]
9. Virtualized Server
It offers an isolated and dedicated Operating System that runs on a shared physical server with few customers on it, isolated thanks to a hardware feature called VT (Virtualization Technology). The Virtualized Server (or VM) contains an empty Operating System that can be different from the host's. Each customer manages everything inside their guest OS instance, and the provider manages the host OS and the layers below.
Typical cost: fixed price $30 to $300/month (per guest *)
Examples:
● peer1 virtual cloud servers
● Rackspace Managed vCloud
● OVH vSphere as a Service
● VMWare vCloud Air
* Often, a minimum number of guests is required (i.e. 10)
[Diagram: two guest VMs (Operating System 1 and 2), each with its own Web Server, Website Code and Images, Libraries, Database and Virtual Drivers, on top of the host's Drivers, CPU with VT, RAM, Network Card and Storage. Customer A's guest is empty.]
10. Dedicated Server
It offers an isolated and dedicated physical server, colocated next to another customer's server, so the only shared resource is the network traffic, which has to be isolated using the provider's network features.
Typical cost: $80 to $500/month
Examples:
● peer1 virtual cloud servers
● Rackspace Managed vCloud
● OVH vSphere as a Service
● VMWare vCloud Air
[Diagram: four dedicated servers belonging to Customers A and B, each with its own full stack - Web Server, Website Code and Images, Libraries, Database, Operating System, Drivers, CPU, RAM, Network Card and Storage - connected through the Provider Network Infrastructure]
11. Infrastructure as a Service
'Invented' in 2006 by Amazon Web Services, named Elastic Compute Cloud (EC2). It's an improved Virtualized Server, split into 3 core resources, all managed with a dedicated API (Application Programming Interface) and multiple pricing metrics. Customers registered with an email address and a credit card have access to unlimited resources, as long as they can pay for them.
● Compute instance (hours per month)
● Storage
○ Operating System image (free)
○ Block space (Gigabytes used/hour)
○ Object (# files and GB transferred/hour)
● Network
○ Private IP settings (free)
○ Public IP settings (# of IPs per month)
○ DNS as a Service (changes per month)
○ Load Balancer as a Service (Gbps transferred/month)
○ Firewall as a Service (Gbps transferred/month)
[Diagram: Customer Instance (VM) - the usual layer stack with virtual Drivers - managed through Compute Instance, Network and Storage API Services, with metered usage (used hours/GB per month)]
12. Virtualization vs Cloud/IaaS
Virtualization is for Virtual Machines that will last many months, and when they fail we need to repair them as quickly as possible. Typically, only one VM contains a particular component of the Web Application (SPOF - single point of failure), so when the VM fails, it causes a module of our website to fail, maybe even causing a total downtime. Virtualization offers best-of-breed protection mechanisms for VMs, to reduce the probability of an infrastructure failure. It is priced as a fixed amount per month.
Cloud/IaaS is for Virtual Machines that will last hours, days or weeks, and whenever they fail we just launch a new one. The Web Application software is deployed using dozens of small VMs that cover for a faulty one (no SPOFs). The application understands failure, so the service is not affected during downtime. IaaS offers no protection to the VMs, but the API will signal the failure immediately to our software, so it can react accordingly. Pricing is variable and subject to usage metrics.
Read this: http://www.theregister.co.uk/2013/03/18/servers_pets_or_cattle_cern/
13. Public vs Private Cloud
Public cloud is what we've already seen: IaaS offered by a huge service provider, with millions of resources available. It also offers extra services on top of IaaS that are very appreciated by software developers. The use of shared resources involves extra auditing efforts (i.e. PCI-DSS) for the customer. Although some say it has more advantages than disadvantages, it can quickly become more expensive than expected if it's not used where it's appropriate.
Private cloud is an IaaS deployment on a limited amount of resources, owned by a private entity, so exclusively used inside the company's perimeter. It is often seen as the most secure option due to that isolation. It has no economy of scale, as commercial servers are more expensive than those used by public cloud providers (see OCP [4]). Furthermore, employees need new skills to operate private clouds, which means there is a learning curve that may cause the private cloud to be less reliable than the public cloud.
[4] http://www.opencompute.org/
15. Cloud revolution
By dynamically matching capacity to demand, the infrastructure now allows a lean growth model, a key enabler of the startup economy.
http://www.dynco.co.uk/wp-content/uploads/2015/09/business-growth-1024x640.png
17. e-Commerce Infrastructure
Same as before: Web Application layers. But what kind of Infrastructure do we need?
[Diagram: the usual layer stack - Web Server, Website Code, Website Images, Libraries, Database, Operating System, Drivers, CPU, RAM, Network Card, Storage]
Typical answers
● Web Hosting is OK for basic eCommerce
● Virtualized Servers are OK - better isolation than VPS
● Colocated / Dedicated Servers make PCI-DSS compliance harder
● IaaS is OK for complex eCommerce
● PaaS, SaaS, Containers, Lambda: often too complex and very new - only for big companies
18. e-Commerce Traffic Analysis
A complex Web Application, with 2 main functions:
● Display our products
○ Show pictures and detailed information
○ Customer reviews
○ Intelligent tracking of customer preferences (based on browsing history)
○ Uses SEO techniques to attract visitors from other sites (e-Marketing)
● Allow customers to purchase our products
○ Shopping Cart function
○ Integration with Credit Card processing systems, PayPal, or any other B2B systems
○ Storage of customer sensitive information, subject to government or industry regulation (SOX, PCI-DSS, PIPED Act, etc.)
○ It's a common target for hacker attacks, phishing and other threats
19. Sidenote: PCI-DSS
Annual audits are required to prove that a company that stores Credit Card information:
● Builds and maintains a secure network
● Protects cardholder data
● Maintains a vulnerability management program
● Implements strong access control measures
● Regularly monitors and tests networks
● Maintains an information security policy
20. e-Commerce Demand Analysis
Daily Variation (night vs day) - Yearly Variation (high-season)
Note the 2 kinds of traffic, aligned with the 2 functions from earlier:
● visitors only browse our product listing while they decide whether to buy or not
● buyers click on the 'order' button and enter their credit card information to complete the purchase
21. A slow website (>3 sec per page) loses money [5]
You need to provide enough resources to your website.
[5] http://www.peer1.ca/knowledgebase/how-slow-website-impacts-your-visitors-and-sales
23. A model for Web Site performance
Let's suppose the following:
● Compute Unit (CU): the amount of server resources (CPU, RAM, etc.) required to display and properly serve a website visitor for 10 minutes
○ A small server can handle 60 CUs per hour, i.e. 10 every 10 minutes
● Visitor: a regular visitor that browses our website, clicks on images, reads the descriptions, etc.
○ Browsing our website catalog requires 1 CU
● Buyer: the most important kind of visitor - those browsing our Shopping Cart section, which means they're halfway through their purchase process, where they give us their personal details and credit card information
○ Going through the purchase process requires 5 CUs
● It takes more CU to serve a buyer than to serve a visitor (i.e. 5 times more), due to the storage of personal data, credit card validation, checkout process, etc.
24. Are you smarter than a 5th grader?
Remember: 60 CUs per server/hour. 1 visitor = 1 CU. 1 buyer = 5 CUs
● How many visitors can 1 server serve per hour (on average)? Answer: 60
● How many buyers can 1 server serve per hour (on average)? Answer: 12
● What is the maximum number of buyers 1 server can serve in 1 day? Answer: 288
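A minimal Python sketch of this capacity model, using only the figures the deck assumes (60 CU per server-hour, 1 CU per visitor, 5 CUs per buyer):

```python
# Capacity model from slides 23-24 (all figures are the deck's assumptions).
CU_PER_SERVER_HOUR = 60   # a basic server serves 60 Compute Units per hour
CU_PER_VISITOR = 1        # browsing the catalog
CU_PER_BUYER = 5          # checkout: card validation, storage of personal data

visitors_per_server_hour = CU_PER_SERVER_HOUR // CU_PER_VISITOR  # 60
buyers_per_server_hour = CU_PER_SERVER_HOUR // CU_PER_BUYER      # 12
buyers_per_server_day = buyers_per_server_hour * 24              # 288

print(visitors_per_server_hour, buyers_per_server_hour, buyers_per_server_day)
# -> 60 12 288
```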
25. Daily Demand
Low vs High season (example hourly values - best case)
                              Visitors/h   Buyers/h
Night time min, low-season        60           6
Day time max, low-season         200          20
Night time min, high-season      100          10
Day time max, high-season       1000         100
26. Peak Demand (best-case average vs worst-case)
Remember: 60 CUs per server/hour. 1 visitor = 1 CU. 1 buyer = 5 CUs.
Equivalently: 10 CUs per 10 minutes means 10/1 = 10 visitors every 10 minutes, or 10/5 = 2 buyers every 10 minutes.
For each scenario: visitors/h, visitors/10min, buyers/h, buyers/10min, total CU/10min, total CU/h, worst-case total servers (ALL visits in 10 min), best-case total servers (hourly average):
● Night time min, low-season: 60, 10, 6, 1, 10+(1*5) = 15, 60+(6*5) = 90, 90/10 = 9, (60+6*5)/60 = 1.5
● Day time max, low-season: 200, 33, 20, 3.3, 50, 300, 30, 5.0
● Night time min, high-season: 100, 16.7, 10, 1.7, 25, 150, 15, 2.5
● Day time max, high-season: 1000, 167, 100, 16.7, 250, 1500, 150, 25
[Chart: buyers' demand across the six 10-minute intervals of an hour - evenly spread vs concentrated (similar for visitors)]
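The scenario table can be reproduced programmatically; a short Python sketch (the scenario figures are the ones the deck assumes):

```python
# Reproduces the slide-26 table: CU demand and server counts per scenario.
CU_PER_SERVER_HOUR = 60       # basic server capacity per hour
CU_PER_SERVER_10MIN = 10      # the same capacity per 10-minute slot

scenarios = {                 # (visitors/h, buyers/h) from slide 25
    "night low-season": (60, 6),
    "day low-season": (200, 20),
    "night high-season": (100, 10),
    "day high-season": (1000, 100),
}

for name, (v, b) in scenarios.items():
    cu_per_hour = v * 1 + b * 5                        # 1 CU/visitor, 5 CU/buyer
    worst = cu_per_hour / CU_PER_SERVER_10MIN          # ALL visits land in 10 min
    best = cu_per_hour / CU_PER_SERVER_HOUR            # visits spread over the hour
    print(f"{name}: {cu_per_hour} CU/h, worst {worst:.0f}, best {best:.1f}")
```

For the high-season day peak this yields 1500 CU/h, i.e. 150 servers worst-case and 25 best-case, matching the table.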
27. 3-year Budget
With all variables in hand, can we prepare a budget for the infrastructure needed for the next 3 years?
We'll look at 2 scenarios: private cloud vs public cloud.
29. More information
We've supposed a basic server would be able to compute 60 CUs/hour.
We know, thanks to our providers, that the average server sold nowadays can perform 480 CUs per hour, thanks to multi-core technology.
The average price is $3000 (CAPEX).
Colocation, electricity and other maintenance fees amount to $2000 for the first three years (OPEX).
Our accountant will amortize the servers over 3 years, as OPEX expenses will increase and it will be recommended to renew servers every 3 years.
30. Right-scaling issues
When purchasing a Fixed Capacity, we risk undersizing our infrastructure, which means we're not able to serve our customers during the peak hours, losing potential revenue and maybe damaging our website's reputation (too slow, unresponsive, faulty...).
Furthermore, we may also be oversizing our infrastructure, which means we've spent too much, risking our financial health.
We'll see later how public cloud offers Elasticity, as long as our software can closely adjust itself to add/remove capacity according to the real-time demand.
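To make "adjust itself to real-time demand" concrete, here is a hypothetical sizing rule in Python. The function and its 20% headroom factor are illustrative assumptions, not part of the deck; only the CU figures come from the model:

```python
import math

# Hypothetical autoscaling rule: pick a server count from the last
# 10 minutes of observed demand (CU figures from the deck's model).
CU_PER_SERVER_10MIN = 10   # one basic server serves 10 CUs per 10 minutes
HEADROOM = 1.2             # assumed 20% safety margin (illustrative)

def servers_needed(visitors_10min, buyers_10min):
    """Servers required to serve the observed 10-minute demand."""
    cu = visitors_10min * 1 + buyers_10min * 5
    return max(1, math.ceil(cu * HEADROOM / CU_PER_SERVER_10MIN))

print(servers_needed(10, 1))      # quiet night, low season -> 2
print(servers_needed(167, 16.7))  # day peak, high season -> 31
```

A real deployment would feed this from a monitoring metric and call the provider's scaling API; the point is that capacity tracks demand instead of being fixed up front.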
31. Deciding the size of our Private Cloud
Remember: a $3000 server with 8 CPU cores is 8x more powerful than the basic server calculated before.
For each scenario: visitors/10min, buyers/10min, worst-case total basic servers (ALL visits in 10 min), equivalent 8-CPU servers, best-case total basic servers (no peaks), equivalent 8-CPU servers:
● Night time min, low-season: 10, 1.0, 9, 1.13, 1.5, 0.19
● Day time max, low-season: 33, 3.3, 30, 3.75, 5.0, 0.63
● Night time min, high-season: 16.7, 1.7, 15, 1.88, 2.5, 0.31
● Day time max, high-season: 167, 16.7, 150, 18.75, 25, 3.13
How many servers do we buy?
Between 3.13 and 18.75, we need to compromise. We're going to size only for instant peaks of 2x the average hourly rate, so we pick 6 servers. Using a $3000 server (CAPEX) that costs $2000 over 3 years to maintain (OPEX), we need $30,000 during the 3-year period (we are supposing the same demand every year).
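The private cloud budget arithmetic in a few lines of Python (the prices and the 2x peak factor are the deck's assumptions):

```python
# Private cloud sizing and 3-year budget (slide-31 assumptions).
SERVER_CAPEX = 3000        # one 8-core server, 480 CU/h
SERVER_OPEX_3Y = 2000      # colocation, power, maintenance over 3 years
peak_avg_servers = 3.13    # 8-CPU servers for the high-season hourly average
PEAK_FACTOR = 2            # size for instant peaks of 2x the hourly average

servers = round(peak_avg_servers * PEAK_FACTOR)        # -> 6 servers
budget_3y = servers * (SERVER_CAPEX + SERVER_OPEX_3Y)  # -> $30,000
print(servers, budget_3y)
# -> 6 30000
```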
33. More information
On average, a basic server able to do 60 CUs per hour costs around $20 per month. But it is billed per hour, so assume $0.028 per hour (including storage and network costs).
We're assuming our software can leverage the Elasticity and Autoscaling features of the public cloud, so the number of servers running will be almost exactly those required to properly serve the visitors/buyers at any given time.
34. Forecasting the size of our Public Cloud
In this case, we need to calculate the number of hours per year our basic servers will be powered on:
● Night time length: 12 h
● Day time length: 12 h
● Low-season: 9 months
● High-season: 3 months
For each scenario (using 360 days/year *): hours/year, visitors/h, buyers/h, equivalent CU/h, total CU/year:
● Night time min, low-season: 3240, 60, 6, 90, 291,600
● Day time max, low-season: 3240, 200, 20, 300, 972,000
● Night time min, high-season: 1080, 100, 10, 150, 162,000
● Day time max, high-season: 1080, 1000, 100, 1500, 1,620,000
* The sum of hours is 8640; 8640/24 = 360 days in a year (approximation)
Total CU: 3,045,600. If a basic server can do 60 CU/h and costs about $0.028/h, how much will it cost over 3 years?
We need 50,760 server-hours per year; at about $0.028/h that is $1,410 per year, $4,230 per 3 years.
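A quick check of the forecast in Python; taking the hourly rate as exactly $20/month ÷ 720 hours (≈ $0.028/h, as slide 33 states) reproduces the deck's totals:

```python
# Public cloud forecast (slide 34): server-hours per year, then 3-year cost.
CU_PER_SERVER_HOUR = 60
PRICE_PER_SERVER_HOUR = 20 / 720   # $20/month over 720 hours ~= $0.028/h

# (hours/year, CU/h) per scenario, using 360 days/year
scenarios = [(3240, 90), (3240, 300), (1080, 150), (1080, 1500)]

total_cu = sum(h * cu for h, cu in scenarios)        # 3,045,600 CU/year
server_hours = total_cu / CU_PER_SERVER_HOUR         # 50,760 server-hours/year
cost_3y = server_hours * PRICE_PER_SERVER_HOUR * 3   # $4,230 over 3 years
print(total_cu, server_hours, round(cost_3y))
# -> 3045600 50760.0 4230
```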
36. What is a CDN?
A CDN (Content Delivery Network) is a key service contracted to help companies scale their web services and deal with traffic peaks, especially with image-heavy content, which gives the user a better browsing experience when downloading from a CDN instead of a central server. It can offload only 'read-only' or non-transactional requests. Examples include Akamai, Amazon CloudFront, etc.
It is priced with a fixed portion and a variable price depending on the traffic volume.
Example for our case (~2 million visitors/year):
● $200/month, i.e. $2,400 per year (fixed fee)
● $0.5 per 1000 visitors, i.e. $1,015 per year (variable fee)
● 3-year fees: $10,245
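A sketch of the fee calculation (the pricing is the deck's example, not a real provider's rate card); the visitor count comes from the earlier demand table:

```python
# CDN fee estimate (slide-36 example pricing).
FIXED_PER_MONTH = 200       # fixed fee
PER_1000_VISITORS = 0.5     # variable fee

# Visitors/year from the demand table (360 days/year): hours x visitors/h
visitors_year = 3240 * 60 + 3240 * 200 + 1080 * 100 + 1080 * 1000  # 2,030,400

fixed_year = FIXED_PER_MONTH * 12                         # $2,400
variable_year = visitors_year / 1000 * PER_1000_VISITORS  # $1,015.20
fees_3y = 3 * (fixed_year + variable_year)  # $10,245.60, rounded to $10,245
print(visitors_year, fixed_year, variable_year)
# -> 2030400 2400 1015.2
```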
37. Offloading visitors to the CDN
It can effectively remove all the load related to Visitors, leaving only the Buyers to be served by our servers (either Private or Public).
With visitors now in the CDN, for each scenario: buyers/h, CU/h, total CU/year:
● Night time min, low-season: 6, 30, 97,200
● Day time max, low-season: 20, 100, 324,000
● Night time min, high-season: 10, 50, 54,000
● Day time max, high-season: 100, 500, 540,000
Public cloud: total 3-year cost $1,410 ($2,820 savings).
Private cloud - worst-case total basic servers, equivalent 8-CPU servers, best-case total basic servers, equivalent 8-CPU servers:
● Night time min, low-season: 3, 0.38, 0.5, 0.06
● Day time max, low-season: 10, 1.25, 1.7, 0.21
● Night time min, high-season: 5, 0.64, 0.8, 0.10
● Day time max, high-season: 50, 6.25, 8.3, 1.04
Private cloud total cost over 3 years: 2 servers (can handle 2x peaks), $10,000 ($20,000 savings).
We've effectively offloaded ⅔ of our traffic to the CDN, so we're saving 66% in infrastructure costs, but we still need to cover the $10,245 in CDN fees.
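The post-CDN numbers can be verified the same way (all figures are the deck's assumptions):

```python
# With visitors on the CDN, only buyer CUs remain (slide-37 figures).
scenarios = [(3240, 30), (3240, 100), (1080, 50), (1080, 500)]  # (h/y, CU/h)

total_cu = sum(h * cu for h, cu in scenarios)  # 1,015,200 CU/y (1/3 of before)
server_hours = total_cu / 60                   # 16,920 server-hours/year
public_3y = server_hours * (20 / 720) * 3      # $1,410 over 3 years
private_3y = 2 * (3000 + 2000)                 # 2 servers: $10,000 over 3 years
print(total_cu, round(public_3y), private_3y)
# -> 1015200 1410 10000
```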
38. Looks like public cloud is the winner
There is quite a difference in the 3-year costs for the Elastic Public Cloud ($4,230) versus the 3-year server costs for a Private Cloud at max capacity ($30,000, or $20,245 with CDN).
Even with a CDN, public cloud offers the lowest costs.
But it's not that simple...
40. Other CAPEX factors
● Procurement (HW): Private Cloud - expensive; Public Cloud - zero-cost
● Software development: Private Cloud - moderate, more expensive than traditional virtualization due to the platform's lack of High Availability; Public Cloud - expensive, it's harder to write Elastic software than traditional software
● Auditing: Private Cloud - cheap when the network is well designed, otherwise expensive; Public Cloud - moderate
● Systems design & architecture: Private Cloud - moderate (not so different from Traditional Virtualization); Public Cloud - expensive
● CDN setup: Private Cloud - cheap; Public Cloud - cheap, although network fees may increase if it is a different provider
41. Other OPEX factors
● Salaries: Private Cloud - moderate (skills are almost the same as traditional virtualization); Public Cloud - expensive (high demand)
● HW operating costs: Private Cloud - moderate: power, cooling, replacement parts, etc.; Public Cloud - zero-cost
● SW maintenance: Private Cloud - cheap, we can apply security by isolation, as our servers are inside a secure perimeter; Public Cloud - moderate, APIs may change over time and we'll be forced to update (lock-in factor)
● HW maintenance: Private Cloud - moderate, but cheaper than virtualization, no need for emergency repairs; Public Cloud - zero-cost
● Tech support: Private Cloud - cheap, less than traditional virtualization; Public Cloud - expensive, extra SaaS services may be needed (capacity optimization, security and performance monitors, etc.)
42. Other aspects to be considered
Financing
● Borrowing or raising money often involves a budget estimation, which makes the purchase of servers more attractive than outsourcing to the public cloud.
● Physical servers can also be leased instead of purchased, at very interesting rates.
Accounting
● Some may prefer to have assets (actual servers) instead of outsourcing their IT servers
● Tax deductions may be available when buying servers (private cloud)
● Some servers can be amortized over up to 5 years, reducing the CAPEX burden
Uncertainty and resilience
● There is no doubt that public cloud offers a zero-engagement model that allows companies to cut back on their fixed costs and allocate them as variable costs
● This makes the company more resilient to market fluctuations.
44. Hybrid is the most complex solution
It is often used as a way to combine the best of both worlds. For instance, we store all sensitive information on our premises (private cloud) but we keep the website parts that are not sensitive in the public cloud. This way, the PCI-DSS and other audits are smaller in size and in complexity.
We can use the private cloud to do Dev/Test/QA and save on costs that otherwise would have increased the Public Cloud expense.
Other IT compliance rules and regulations may force the use of private cloud and forbid CDN techniques, leaving us with public cloud as the only scale-out option for the times of the year when demand exceeds our private cloud resources.
45. When to use Hybrid?
As a rule of thumb, keep the important things (mission critical) close to your business, and let others deal with less important workloads.
http://blogs.vmware.com/vcloud/files/2012/11/bluelockwebinar2.png
46. In conclusion
Public cloud is the best option for some (i.e. variable and bursty demand); for others it's Private cloud (i.e. more constant demand).
You need to analyze your particular case and consider all quantitative aspects (CAPEX, OPEX, time to market) as well as qualitative ones (page load, flexibility, risk management, security and vendor lock-in). Think before you make any final decision, such as buying servers or re-architecting your software to run on public clouds.
Remember: not every workload is suited to run on a cloud, either public or private. If it works, wherever it is hosted (virtualization, bare metal), don't change it. There's often no need to jump on new technology unless you need that extra competitive differentiation that a new architecture can bring.