Learn how to evaluate a new workload for the best managed database option based on specific application needs related to data shape, data size at limit, computational requirements, programmability, throughput and latency needs, and more. This session explains the ideal use cases for relational and non-relational database services, including Amazon Aurora, Amazon DynamoDB, Amazon ElastiCache for Redis, Amazon Neptune, and Amazon Redshift.
Laura Caicedo, Solutions Architect, Amazon Web Services
3. A market leader
Forrester Research positions Amazon Web Services as a Leader in The Forrester Wave™: Database-as-a-Service.
“AWS not only has the largest adoption of DBaaS, it also offers the widest range of offerings to support analytical, operational, and transactional workloads.”
“AWS’s key strengths lay in its dynamic scale, automated administration, flexibility of database offerings, strong security, and high-availability capabilities, which make it a preferred choice for customers.”
The Forrester Wave™ is copyrighted by Forrester Research, Inc. Forrester and Forrester Wave™ are trademarks of Forrester Research, Inc. The Forrester Wave™ is a graphical representation of Forrester's call on a market and is plotted using a
detailed spreadsheet with exposed scores, weightings, and comments. Forrester does not endorse any vendor, product, or service depicted in the Forrester Wave. Information is based on best available resources. Opinions reflect judgment at the
time and are subject to change.
14. Large relational databases with Amazon Aurora
Scale-out, distributed, multi-tenant architecture
• Fully compatible with PostgreSQL and MySQL, with 3x–5x the throughput
• Storage volume striped across hundreds of storage nodes distributed over 3 different Availability Zones
• Six copies of data on SSD, two copies in each Availability Zone, to protect against AZ+1 failures
• Continuous backup to Amazon S3 (built for 99.999999999% durability)
[Diagram: one master and three replicas across Availability Zones 1–3]
15. Large relational databases with Amazon Aurora
Scale-out, distributed, multi-tenant architecture
• Your data is replicated 6 ways across 3 AZs
• Storage grows up to 64 TB* seamlessly
• Up to 15 Aurora Replicas with instant crash recovery
[Diagram: database nodes in AZ 1, AZ 2, and AZ 3 over a virtualized, cross-AZ storage layer]
The alternative: size for the peak load, or continuously monitor and manually scale up/down.
21. AWS purpose-built strategy
The right tool for the right job
Relational: Amazon Aurora, Amazon RDS
Non-relational: Amazon ElastiCache and Amazon DynamoDB (key-value, document), Amazon Neptune (graph)
23. NoSQL vs. SQL for a new app: how to choose?
Amazon DynamoDB:
• Want the simplest possible DB management?
• Want the app to manage DB integrity?
Amazon RDS:
• Need joins, transactions, or frequent table scans?
• Want the DB engine to manage DB integrity?
• Does the team have SQL skills?
27. Amazon DynamoDB
Fully managed nonrelational database for any scale
Secure: encryption at rest and in transit; fine-grained access control; PCI, HIPAA, and FIPS 140-2 eligible
High performance: fast, consistent performance; virtually unlimited throughput; virtually unlimited storage
Fully managed: maintenance-free; serverless; auto scaling; backup and restore; global tables
Global tables: high-performance, globally distributed applications; multi-region redundancy and resiliency; easy to set up, with no application rewrites required
28. Amazon DocumentDB
Fast, scalable, highly available, fully managed MongoDB-compatible database service
• Secure and compliant
• Simple and fully managed
• Same code, drivers, and tools you use with MongoDB
• Millions of requests per second, millisecond latency
• 2x the throughput of managed MongoDB services
• Deeply integrated with AWS services
29. Managed services for open source software
Redis, Memcached, Elasticsearch, Apache Hadoop, etc.
Amazon ElastiCache
• Fully managed: AWS manages all hardware and software setup, configuration, and monitoring
• Extreme performance: in-memory data store and cache for sub-millisecond response times
• Easily scalable: non-disruptive scaling up and down to meet changing demands
• Open and secure: direct access to open-source APIs; secure access with VPC
Amazon Elasticsearch Service
Amazon EMR (Apache Hadoop ecosystem)
• 19 open-source frameworks
• Low costs with S3 storage and Spot
32. Highly connected data best represented in a graph
Relational model
Foreign keys used to represent relationships
Queries can involve nesting & complex joins
Performance can degrade as datasets grow
Graph model
Relationships are first-order citizens
Write queries that navigate the graph
Results returned quickly, even on large datasets
33. Amazon Neptune
Fully managed graph database
Fast & scalable: store billions of relationships; query with millisecond latency
Reliable: six replicas of your data across three AZs with full backup and restore
Flexible: build powerful queries with Gremlin and SPARQL
Open standards: supports Apache TinkerPop & W3C RDF graph models
38. Amazon Redshift Spectrum
Extend the data warehouse to exabytes of data in the S3 data lake
• Exabyte-scale Redshift SQL queries against Amazon S3
• Join data across Redshift and S3
• Scale compute and storage separately
• Stable query performance and unlimited concurrency
• CSV, ORC, Grok, Avro, & Parquet data formats
• Pay only for the amount of data scanned
[Diagram: Redshift Spectrum query engine between Amazon Redshift data and the S3 data lake]
39. Amazon Elasticsearch Service
Managed service to deploy, secure, operate, and scale Elasticsearch
Customers use Amazon ES for log analytics, full-text search & application monitoring
Fully managed: deploy production-ready clusters in minutes
Open: direct access to Amazon ES open-source APIs; supports Logstash and Kibana
Secure: secure access with VPC to keep all traffic within the AWS network
Available: zone awareness replicates data between two AZs; automatically monitors & replaces failed nodes
AWS offers services across the full range of data tools
Business Intelligence and Machine Learning tools help make sense of data
Database services are the right tools for relational, nonrelational, and analytic jobs
The data lake combines data tools with scalable storage and data governance
Data movement tools let you get data between different formats and places
The Cloud is a fully managed environment
Using managed services frees you to focus on your mission instead of the minutiae of operational details
AWS customers use a broad and deep selection of fully managed services to support work at any scale
Many customers are still trapped using old-guard databases such as Oracle, Microsoft SQL Server, or IBM Db2
These databases are expensive, with proprietary lock-in and punitive licensing
Old-guard vendors will conduct audits (“you’ve got mail – audit coming!”) whenever they want to force extra payments
AWS helps customers escape from all of these limitations
The relational database world has been an unpleasant place for most customers. These customers have had to deal with old-guard database providers that are expensive, proprietary, have high lock-in, and impose punitive licensing terms. And, you occasionally get an email that says you’re being audited!
Open Source relational databases are widely used and well supported
AWS customers want the low cost and community support of Open Source and the high performance and reliability of commercial databases
You can get fully managed Open Source with performance and reliability with Amazon RDS and Amazon Aurora
However, getting the same performance on open source databases as you get on commercial-grade databases is difficult. We have done this at Amazon.com, but it has required a lot of tuning. Customers that are moving to open source databases have asked us for the performance of commercial-grade databases with the pricing, freedom, and flexibility of open source databases. That's why we spent a few years building Amazon Aurora.
RDS is fully managed, automating patching, backup, high availability, encryption, and security.
With up to 16 TB per database instance, run hundreds or thousands of DB instances without large staff commitments.
You can use both Open Source (MySQL, MariaDB, PostgreSQL) and Commercial (Oracle, Microsoft SQL Server) databases
Aurora combines Open Source interfaces of MySQL and PostgreSQL with enterprise-class scalability, performance, and reliability
All data is stored in six copies across three independent physical facilities (we call these Availability Zones)
Aurora is high performance, with 3x – 5x the throughput of MySQL or PostgreSQL
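The six-copies-across-three-AZs design rests on quorum arithmetic: per AWS's published descriptions of Aurora, each data segment has V = 6 copies (two per AZ), with a write quorum of 4 and a read quorum of 3. This illustrative sketch (not AWS code) checks the claimed "AZ+1" fault tolerance:

```python
# Aurora's published quorum scheme: V = 6 copies of each data segment
# (two per Availability Zone), write quorum VW = 4, read quorum VR = 3.
# These satisfy VW + VR > V (reads overlap the latest write) and
# 2 * VW > V (two concurrent writes cannot both succeed on disjoint copies).
V, VW, VR = 6, 4, 3

def quorum_reachable(copies_lost, quorum):
    """True if enough copies survive to still form the quorum."""
    return V - copies_lost >= quorum

# Losing a whole AZ (2 copies) plus one more node = 3 copies lost:
# the read quorum still holds, so no data is lost ("AZ+1" tolerance).
assert quorum_reachable(3, VR)
# Losing any two copies (e.g. one full AZ): writes still succeed.
assert quorum_reachable(2, VW)
# After an AZ+1 failure, writes pause until copies are repaired.
assert not quorum_reachable(3, VW)
print("quorum checks pass")
```

The same arithmetic explains why four copies, two per AZ across two AZs, would not survive an AZ loss plus one node.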
DMS enables customers to copy and move databases without downtime
No lock-in at AWS: you can move data to the Cloud, off of the Cloud, and between Clouds
AWS customers have used DMS to migrate over 90,000 databases
In addition to offering a broad portfolio of purpose-built database services, AWS makes it easy for you to migrate your database to the cloud. The AWS Database Migration Service (DMS) helps customers securely migrate their databases to AWS with minimal or no downtime. The source database remains fully operational during the migration, causing no interruption to applications that rely on that database. DMS can migrate your data from most widely used commercial and open-source databases. DMS supports migrations such as Oracle to Oracle migrations, as well as migrations between different database platforms, such as Oracle to Amazon Aurora. DMS offers six months of free usage for migrations to Amazon Aurora, Amazon DynamoDB, and Amazon Redshift. For large databases, where terabytes of data need to be migrated, you can use AWS Snowball, a petabyte-scale data transport service that uses secure appliances to transfer data into and out of AWS.
Using a single database for every purpose doesn’t work in today’s world of large scale
Developers choose relational databases, nonrelational databases, analytical databases, machine learning, visualization and other tools to do the job
AWS customers need flexibility, scale, and performance
Application requirements are changing, and a one-size-fits-all approach of using a relational database as the only data store for your application no longer works. An increasing number of developers now choose relational and nonrelational databases that are purpose-built to meet their application’s specific needs, like storing key-value pairs and documents.
If you are building an online retail site, you might choose a relational database to help ensure financial transactions related to an order are 100% correct.
If you want a shopping cart that can provide consistent single-digit-millisecond latency with virtually limitless scale to handle the likes of Amazon Prime Day, you can choose a key-value database.
If you want to show more personalized recommendations like accessories that friends of your users purchased, you can choose a graph database.
The characteristics of cloud applications are driving why different database services exist today. Developers are always looking for the right tool for the job, and because these services are so easy to gain access to, developers can enjoy rich development flexibility without sacrificing scale and performance.
No one truck is right for every job, which is why there are tractor-trailers, pickup trucks, earth movers, and delivery trucks
No one data tool is right for every job, which is why there are AWS services for both relational and nonrelational data
This is one way to think about the different database choices developers have, as they think about using the right tool for the job often considering speed, scale, & programmability.
If I were standing here today saying we have one database that can literally do everything, it would be like saying you can use one vehicle as a utility vehicle, earth mover, delivery truck, and long-haul cargo mover that is equally efficient in all aspects of the job it's being used for.
Relational data is important for many customers
A lot of new development uses nonrelational data
The key is using the right tool for the job
Amazon.com runs our own business largely with DynamoDB
DynamoDB is fully managed, with consistent high performance at any scale, with some customers storing over 1 PB in a single DynamoDB table
Global Tables enable true active-active databases across the world
Amazon DynamoDB is a fully managed NoSQL database service running in the AWS Cloud. The complexity of running a massively scalable, distributed NoSQL database is managed by the service itself, allowing software developers to focus on building applications rather than managing infrastructure. NoSQL databases are designed for scale, but their architectures are sophisticated, and there can be significant operational overhead in running a large NoSQL cluster. Instead of having to become experts in advanced distributed computing concepts, developers need only learn DynamoDB's straightforward API using the SDK for the programming language of their choice. In addition to being easy to use, DynamoDB is also cost effective. With DynamoDB, you pay for the storage you are consuming and the IO throughput you have provisioned. It is designed to scale elastically while maintaining high performance. When the storage and throughput requirements of an application are low, only a small amount of capacity needs to be provisioned in the DynamoDB service. As the number of users of an application grows and the required IO throughput increases, additional capacity can be provisioned on the fly. This enables an application to seamlessly grow to support millions of users making thousands of concurrent requests to the database every second. Finally, DynamoDB is secure, with support for end-to-end encryption and fine-grained access control.
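The access pattern behind that straightforward API can be sketched with an in-memory stand-in: items addressed by a partition key plus a sort key, with queries retrieving everything under one partition key. This is illustrative Python only, not the real boto3 API, and the table and key names are made up:

```python
# Minimal in-memory stand-in for DynamoDB's core access pattern:
# items are addressed by a partition key plus a sort key, and a
# query retrieves all items sharing one partition key.
class MiniKeyValueTable:
    def __init__(self):
        self._items = {}  # (partition_key, sort_key) -> item dict

    def put_item(self, pk, sk, item):
        self._items[(pk, sk)] = dict(item, pk=pk, sk=sk)

    def get_item(self, pk, sk):
        return self._items.get((pk, sk))

    def query(self, pk):
        # Return all items for one partition, ordered by sort key --
        # the shape of a DynamoDB Query on a composite-key table.
        return [item for (p, _), item in sorted(self._items.items()) if p == pk]

# Hypothetical shopping-cart table: one partition per user.
cart = MiniKeyValueTable()
cart.put_item("user#42", "item#001", {"sku": "book", "qty": 1})
cart.put_item("user#42", "item#002", {"sku": "pen", "qty": 3})
cart.put_item("user#7", "item#001", {"sku": "mug", "qty": 2})

print(len(cart.query("user#42")))                    # 2
print(cart.get_item("user#42", "item#002")["qty"])   # 3
```

Designing the partition key around the access pattern (here, one user's whole cart in one partition) is what lets the real service spread load across storage nodes while keeping each lookup a single-key operation.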
AWS also has fully managed solutions for other popular Open Source packages
ElastiCache provides managed Redis and Memcached for sub-millisecond in-memory data access
Elasticsearch is a managed search engine, both for text search and log analytics
Nineteen Apache Hadoop ecosystem packages are managed with Amazon EMR, including HBase, Spark, Presto, Hive, and others
While millisecond latency works for many applications, microsecond latency is required by real-time, data-intensive applications. For example, gaming leaderboards capture the scores of millions of online players every time their scores change and re-rank the players in real-time. A common solution for this is an in-memory data store where millions of data records can be written and accessed in microseconds. In-memory data stores can also function as stand-alone databases for transient data such as website user authentication tokens that expire at the end of the user session. Redis and Memcached are two popular choices for in-memory data stores. Redis is an open-source, in-memory, key-value store that offers a variety of built-in data structures such as sorted sets, lists, and geospatial data, making it faster to develop applications. Memcached is an open-source in-memory caching system that is easy to use. However, Redis and Memcached lack enterprise features such as scalability and reliability, and that's why we built Amazon ElastiCache.
Amazon ElastiCache offers Redis and Memcached as fully managed services. It automates management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, and backups, making it easy to run Redis and Memcached on AWS. ElastiCache can scale out, scale in, and scale up to meet changing application demands. ElastiCache for Redis allows you to add up to five read replicas across multiple availability zones, enabling you to easily scale read capacity. And, if the primary read/write node fails, ElastiCache for Redis automatically promotes one of the read replicas to be the primary node, making your application more reliable. For scaling write capacity, ElastiCache for Redis lets you partition your data across multiple primary nodes and distributes write requests across these nodes. ElastiCache for Redis provides encryption at rest and encryption in transit, helping you secure your data.
Key benefits of Amazon ElastiCache include:
Redis and Memcached Compatible
With Amazon ElastiCache, you get native access to Redis or Memcached in-memory environments. This enables compatibility with your existing tools and applications.
Extreme Performance
Amazon ElastiCache works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times. By utilizing an end-to-end optimized stack running on customer-dedicated nodes, Amazon ElastiCache provides you with secure, blazing-fast performance.
Fully Managed
You no longer need to perform management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, failure recovery, and backups. ElastiCache continuously monitors your clusters to keep your workloads up and running so that you can focus on higher value application development.
Easily Scalable
Amazon ElastiCache can scale out, scale in, and scale up to meet fluctuating application demands. Write and memory scaling is supported with sharding. Replicas provide read scaling.
Relational models, ironically, are not great for representing the relationships between data
Graph models are great for highly connected data, such as recommendation engines and social networks
The mathematics behind graph databases goes back to the 1700s, but actually implementing them has been difficult and expensive
Now consider a recommendations app, where someone wants to recommend some kind of organization, entity, or site of a certain type, in a particular city, that, for example, some of their connections also liked.
To do this, you need to put together a lot of connected datasets: the users, their connections, and their likes.
You also need to know the organizations, entities, and their attributes, such as museums, or schools, or places to eat.
In a relational model, you end up with multiple tables and multiple foreign keys, and soon queries slow down and maintenance becomes difficult.
Alternatively, you can use an open-source graph database, but these are hard to scale and lack enterprise capabilities such as high availability.
Or commercial graph databases, which are expensive, often proprietary, and force you to choose between graph models.
What we want is a graph database compatible with leading graph models and open APIs that is also fast, reliable, scalable, and cost effective.
Amazon Neptune enable very large graph databases at low cost, high performance, and reliability
Just like the other databases, Neptune is fully managed, with AWS providing patching, backup, high availability, and high performance
Amazon Neptune is a fast, reliable, fully managed graph database. It makes it easy to build and run applications that work with highly connected datasets.
It has a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency.
Neptune supports the popular graph models, Property Graph and W3C's RDF,
and their respective query languages, Apache TinkerPop Gremlin and SPARQL.
Neptune is fully managed, with high availability, read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across AZs.
Neptune is secure, with support for encryption at rest and in transit.
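The two-hop "friends also liked" traversal described earlier can be sketched without any database at all, using a plain adjacency list. The vertex names and edge labels are invented, and the equivalent Gremlin shape is noted in a comment:

```python
# Toy property graph: edges keyed by (vertex, edge label) -> neighbors.
edges = {
    ("alice", "friend"): ["bob", "carol"],
    ("bob",   "likes"):  ["museum", "cafe"],
    ("carol", "likes"):  ["cafe", "park"],
    ("alice", "likes"):  ["cafe"],
}

def recommend(user):
    # Places the user's friends like, minus places the user already likes.
    # This is the two-hop traversal a Gremlin query would express roughly
    # as g.V(user).out('friend').out('likes').
    already = set(edges.get((user, "likes"), []))
    recs = set()
    for friend in edges.get((user, "friend"), []):
        recs.update(edges.get((friend, "likes"), []))
    return sorted(recs - already)

print(recommend("alice"))  # ['museum', 'park']
```

In a relational schema this would be a self-join through a friendships table and another join through a likes table; in the graph model the relationship hop itself is the primitive, which is why latency stays flat as the dataset grows.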
As data sizes grow, so does the need for analytics
AWS provides services for the full range of analytics, from visualization to queries to storage to security
Customers like Netflix, Zillow, Nasdaq, Yelp, iRobot, and FINRA trust AWS to run their analytics workloads.
AWS Big Data and Analytics services enable customers to easily run any analytic workload (batch, ad-hoc, real-time, IoT, and predictive analytics) at any scale (GB to TB to PB to EB), in a secure fashion, at the lowest possible cost. AWS provides a highly scalable, available, secure, and cost-effective data store that lets you store data in its native format and easily extract value from your data (what people call a data lake). This is particularly true now that many customers see much of their new data created directly in the cloud, with Amazon S3 being home to the vast majority of it. With much more operating experience and scale, and a broader set of analytics services available than anywhere else, S3 and our portfolio of Big Data and Analytics services are the clear number one choice for you to build your data lake and analytics solutions with.
Data Lakes extend analytics to any scale, from Gigabytes to Exabytes
With a Data Lake, you can use any analytical approach, from dashboards to reporting to predictive analytics powered by machine learning
These enable customers to build cloud data lakes to analyze all their data with the broadest set of analytical approaches, including machine learning.
As a result, there are more organizations running their data lakes and analytics on AWS than anywhere else.
Redshift includes Spectrum, a feature that enables queries against data stored in files
Spectrum allows queries of very large data sets (Petabytes and Exabytes) at a low cost
Amazon Redshift Spectrum lets you run Amazon Redshift SQL queries against exabytes of data in Amazon S3, extending the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse.
You can query vast amounts of unstructured data in your Amazon S3 data lake without having to load or transform any data.
It uses sophisticated query optimization across thousands of nodes, so results are fast, even with large datasets and complex queries.
It directly queries data in S3 using open data formats like CSV, Grok, ORC, Parquet, RCFile, SequenceFile, TextFile, TSV, and others.
It supports the SQL syntax of Amazon Redshift, so you can run sophisticated queries using the same BI tools you use today.
You can also run queries that span data stored locally in Amazon Redshift and your full datasets in Amazon S3.
You only pay for the queries you run, with S3 rates for data storage and Amazon Redshift instance rates for the clusters used.
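As a sketch of that workflow, with every schema, table, role, and column name hypothetical: define an external schema once over the S3 data lake, then join S3-resident data with local Redshift tables in ordinary Redshift SQL.

```sql
-- Hypothetical names throughout; this is a sketch, not a tested deployment.
CREATE EXTERNAL SCHEMA spectrum_lake
FROM DATA CATALOG DATABASE 'lake_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole';

-- One query spanning hot data in Redshift and cold data in S3.
SELECT o.order_id, o.order_total, c.clicks
FROM orders AS o                      -- local Redshift table
JOIN spectrum_lake.clickstream AS c   -- e.g. Parquet files in S3
  ON o.order_id = c.order_id
WHERE c.event_date >= '2019-01-01';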
Elasticsearch is an Open Source search engine that is popular, but hard to install and maintain
Amazon Elasticsearch service is fully managed, making it easy to deploy, secure, operate and scale Elasticsearch
Customers use Elasticsearch both for text search and for log analytics
Amazon Elasticsearch Service makes it easy to deploy, secure, operate, and scale Elasticsearch
This is for log analytics, full text search, application monitoring, and more.
It is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time analytics capabilities, along with the availability, scalability, and security that production workloads require.
It has built-in integrations with Kibana, Logstash, and AWS, so you can go from raw data to actionable insights quickly and securely.
These AWS integrations include Amazon Virtual Private Cloud (VPC), Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch
You get direct access to the Elasticsearch open-source API so existing Elasticsearch environments work seamlessly.
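The full-text-search side of this can be illustrated with the data structure at Elasticsearch's core, an inverted index mapping each term to the documents containing it. This toy sketch is plain Python, not the Elasticsearch API, and the sample log lines are invented:

```python
from collections import defaultdict

# A toy inverted index -- the core structure behind full-text search:
# map each term to the set of documents that contain it.
docs = {
    1: "error connecting to database",
    2: "user login succeeded",
    3: "database connection restored",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    # Term lookup is a single dict access, regardless of corpus size.
    return sorted(index.get(term.lower(), set()))

print(search("database"))  # [1, 3]
```

A real engine adds tokenization, stemming, relevance scoring, and sharding across nodes, but the lookup shape is the same, which is why both log analytics and site search map onto it naturally.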
Humans use data better with pictures. QuickSight makes it easy to make data understandable for everyone
QuickSight can connect to data from almost any source, from AWS services to traditional BI services off-the-Cloud, to Excel spreadsheets
QuickSight is low cost and serverless, so you only pay for what you use, as you use it
Amazon QuickSight is a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. Using our cloud-based service you can easily connect to your data, perform advanced analysis, and create stunning visualizations and rich dashboards that can be accessed from any browser or mobile device.
Insights for everyone: QuickSight enables self-serve, decentralized analytics in your organization better than any other system out there. A business analyst can take an analysis from concept to reality without depending on data engineers. You can create and prepare datasets, build your analysis, and share and collaborate with little to no intervention from IT or data engineers.
Seamless connectivity: connecting to a data source (especially an AWS data source) doesn't involve back-end coding or complex setups. You simply click on the data source and enter your credentials, and QuickSight will auto-discover tables that you can select from, taking the guesswork out of selecting the right table to create your datasets. Then there is Schedule Refresh: if you set up your datasets to refresh every day or every week, you are assured of the latest data when you are looking at your analysis.
Fast analysis: fast interactions with your charts and graphs. Charts and graphs built on a SPICE dataset are highly responsive; you can zoom in and out, drill through, and add filters on the fly with little to no delay.
Serverless: QuickSight is completely serverless. Not only do you not need anything installed or deployed for QuickSight, but in combination with S3 and Athena, you can have an end-to-end analytics solution without ever starting or managing servers.
Over 400,000 customers use AWS database and analytics services.
The database and analytics markets have been around for a while, with many mature offerings for customers to choose from. Even so, we continue to see customers move to the cloud for a number of reasons, and our recent growth in the database market is evidence of how rapidly the landscape is changing.
Customers move to the cloud to minimize time spent managing infrastructure
Customers are choosing the cloud and migrating more and more of their workloads to it. In the next 10 to 15 years, the majority of computing is going to be done in the cloud. In the fullness of time, very few companies will want to own their own data centers, manage infrastructure whether it is compute, storage, databases or analytics.
Customers move to the cloud for performance, scale, reliability, and cost. Increasingly, new applications need to be globally distributed, support millions of users and devices, work with petabytes of data, run 24/7, and be responsive.
As customers move to the cloud and to microservice architectures, developers are increasingly the ones making technology decisions. As customers move from monolithic apps to microservice architectures with loosely coupled components and DevOps cultures, developers increasingly decide, as part of their application development lifecycle, which frameworks and components to use.