The document discusses the architecture of eBay and how it has evolved over time to handle massive scale. It describes how eBay had to horizontally scale and partition its databases, applications, and functions across multiple servers and locations to support over 212 million users, 1 billion daily page views, and growth of over 10x. Key strategies discussed include functional decomposition, asynchronous integration, virtualization of components, and designing for failure.
QCon San Francisco 2011: Agility in eBay - Deepak Nadig
eBay manages over 97 million active users and processes over 80 petabytes of data daily. The company aims to rapidly adapt to changes by defining agility as the ability to efficiently sense changes and respond effectively. eBay has increased its agility by partitioning its architecture into tiers, domains, and services to eliminate coupling; decentralizing accountability; and enabling rapid iteration and easy composition of services through processes, organizational changes, and technology improvements. Examples include adopting Scrum, forming dedicated teams, and leveraging cloud technologies.
The document discusses the architecture of eBay's platform. It describes how eBay has scaled to support over 212 million registered users and handle over 1 billion page views per day. Some key points discussed are:
- eBay's architecture has evolved over time through several major versions to support exponential growth and handle over 300 new features per quarter.
- The data tier is horizontally scaled across multiple databases segmented by functional areas and further split. Application and search components are also horizontally scaled.
- Asynchronous integration, stateless applications, and caching are used to improve scalability. Strict separation of tiers and partitioning of code supports parallel development.
- Automated processes are used for code deployment, monitoring, and rollbacks to maintain site stability.
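The two-level partitioning described above can be sketched as a simple routing function. This is a hypothetical illustration, not eBay's actual code: the database groups, host names, and modulo sharding rule are all assumptions for the example.

```python
# Hypothetical sketch of eBay-style two-level data partitioning:
# first split by functional area (users, items, transactions), then
# shard each functional database horizontally by a key.

FUNCTIONAL_HOSTS = {
    "users": ["users-db-0", "users-db-1"],
    "items": ["items-db-0", "items-db-1", "items-db-2"],
    "transactions": ["txn-db-0", "txn-db-1"],
}

def route(functional_area: str, shard_key: int) -> str:
    """Pick the physical database for a record.

    The functional area chooses the database group; the shard key
    (e.g. a user or item ID) chooses the host within that group.
    """
    hosts = FUNCTIONAL_HOSTS[functional_area]
    return hosts[shard_key % len(hosts)]

print(route("items", 1234567))  # items-db-1
```

Because each functional group scales out independently, adding hosts to a hot area (say, items) never touches the other groups, which is what allows the independent scalability the talk describes.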
The document discusses eBay's architecture and strategies for maintaining scalability and agility. It describes eBay's large scale, including billions of daily interactions. It also outlines eBay's transition to more automated, cloud-based infrastructure and a next generation service-oriented platform. This is intended to improve development productivity while allowing faster innovation and time-to-market through increased infrastructure and platform services.
This document discusses introducing continuous delivery practices at an organization. It provides four stories from different companies about their continuous delivery journeys. The first story describes challenges at Nokia with complicated dependencies and integration problems that were addressed by implementing delivery pipelines and consumer driven contracts. The second story focuses on delivering value and achieving a higher release frequency, shorter cycle times, and higher release success rates at another unnamed company. The third story discusses the architecture at eBay and improvements achieved by moving to more modular code and weekly releases. The final story cautions against skipping testing phases when moving to continuous delivery. Common themes that helped organizations were taking baby steps, establishing cross-functional teams, test automation, and focusing on delivering value.
This document discusses eBay's architectural principles for scaling its large ecommerce site. It outlines four main strategies: (1) Partition everything by data, load, or usage to split problems into manageable chunks and allow independent scalability. (2) Use asynchronous processing wherever possible to improve scalability, availability, and latency. (3) Favor automated and adaptive systems over manual processes to reduce costs and improve functionality. (4) Design all systems to be failure-tolerant by assuming failure, rapidly detecting and recovering from failures, and degrading gracefully when necessary. Specific patterns for implementing each strategy across databases, applications, search, and other areas are also discussed.
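Strategy (4), designing for failure and degrading gracefully, can be illustrated with a minimal fallback wrapper. This is a generic sketch of the pattern, not code from the presentation; the function names and the recommendation example are invented for illustration.

```python
# Hypothetical sketch of the "assume failure, degrade gracefully"
# principle: wrap a dependency call so a failure is detected and
# reported, and a degraded default is returned instead of an error.

def with_fallback(call, fallback, on_error=None):
    """Run `call`; on any failure, report it and return `fallback`."""
    try:
        return call()
    except Exception as exc:
        if on_error is not None:
            on_error(exc)   # rapid detection: hook for alerts/metrics
        return fallback     # graceful degradation

def fetch_recommendations():
    raise TimeoutError("recommendation service unavailable")

# The page still renders, just without personalized recommendations.
items = with_fallback(fetch_recommendations, fallback=[])
print(items)  # []
```

The key design choice is that the caller decides what "degraded" means (an empty list, a cached copy, a static default), so a failing dependency never takes the whole page down with it.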
This document summarizes an eBay presentation on next generation data centers and cloud computing technologies. It provides examples of how eBay has leveraged these technologies, including virtualizing and scaling their database to handle extreme growth. The presentation discusses how next generation data centers are more than just technologies, and focus on running business processes driven by service level agreements.
This document makes the case for upgrading to the latest versions of GeneXus. It outlines the key benefits of upgrading such as improved developer experience, better performance, additional functionality like web reports and cloud deployment capabilities, and enhanced security features. While an upgrade requires investment, it argues that doing so allows applications to take advantage of new technologies and keep pace with changing needs, ultimately creating better applications. It invites readers to make the "right move" and upgrade to the GeneXus Evolutions.
In 3 sentences:
1) Moving SAP applications to the cloud can be done in 3 easy steps by virtualizing the application using application virtualization which packages it in a virtual application appliance (VAA) that is portable.
2) VAAs are fully portable, can be provisioned instantly across environments, and reduce software costs through consolidation and automation while increasing agility.
3) Using VAAs to deploy SAP applications provides benefits like faster provisioning, reduced software costs, increased utilization, and accelerated application lifecycles.
Building a Website to Scale to 100 Million Page Views Per Day and Beyond - Trieu Nguyen
The document discusses the architecture of a website designed to scale to 200 million page views per day. It describes the requirements of supporting high traffic, legacy data, and increased speed. The architecture includes load balancing, caching, logging, PHP, MySQL, Redis, and other technologies. While the rewrite took longer than planned due to technology decisions and staffing issues, the new site launched without downtime and was faster, making it an overall success story.
ContACT Internet Solutions offers various website hosting and domain registration services, including entry-level, business, and enterprise hosting plans with differing storage, bandwidth, and feature allowances. They also provide website development packages that include templates, modules, and tools for booking, calendars, photo galleries, and more. Custom website development, maintenance, and application building services are also available at hourly rates.
This document discusses Amazon Web Services and the benefits of running workloads in the AWS cloud. It describes Amazon's three main business lines of retail, seller services, and IT infrastructure. It then outlines key benefits of the AWS cloud like lower costs, increased agility, and removing constraints. It discusses how the AWS cloud provides scalability, reliability, security and acts as a foundation for 21st century architectures. It emphasizes choice and elasticity as fundamental properties of the cloud.
Scaling Continuous Integration Practices to Teams with Parallel Development - IBM UrbanCode Products
Slides from an UrbanCode and AccuRev joint webinar: http://www.accurev.com/webinar/20120119-Scaling-CI-Parallel-Development
Continuous integration is simple with a single development team. But when software projects grow to multiple teams and dependencies, continuous integration loses effectiveness due to parallel projects, varying release schedules, and differing cadences between teams. As a result, many teams unknowingly lose the benefits of continuous integration, and therefore suffer from a lack of feedback and poor quality.
In this webinar, UrbanCode’s Eric Minick and AccuRev’s Chris Lucca will explain how to:
- Scale continuous integration builds across multiple development teams working on parallel projects
- Share only code that has passed continuous integration from other teams to avoid broken builds and confusion
- Automate the configuration of your test environment to handle fluid projects done in parallel
Composite Applications with SOA, BPEL and Java EE - Dmitri Shiryaev
The document discusses building composite applications using service-oriented architecture (SOA), BPEL, and Java EE. It introduces composite applications and how SOA allows applications to be composed of reusable parts that can be flexibly assembled. The benefits of SOA include flexibility, faster development, leveraging existing assets, and enabling new business opportunities. Key SOA concepts are introduced like services, service implementations, and service-oriented design.
This document provides an overview of JBoss and its mission to create the best Java application server and establish it as the de facto standard. It discusses JBoss's success in downloads and adoption, as well as its strategy of executing the "Professional Open Source" model through services like training, documentation, consulting, and production support.
Scalable Lifecycle Management via Perforce - Perforce
How do you manage complex, dynamic production environments where there's a need not only for speed and accuracy of deployment but also a requirement to have deep information about the state of each environment and a guarantee of application integrity? Learn how NYSE Euronext addresses these challenges with Perforce.
The document discusses the evolution of logistics and supply chain management from isolated applications to more integrated systems. It describes how planning and execution used to be separate but are now joined up through composite applications. Tomorrow, planning will be more distributed, dynamic and optimized in real-time through increased automation. Two big ongoing challenges are data quality issues and supporting planners amid rapidly changing business environments. The future involves more distributed, adaptive operations through dynamic business webs that can quickly position inventory and leverage different transportation options.
Scaling up to 30M users - The Wix Story - Aviran Mordo
Wix has scaled to serve 30 million users and over 1 million new websites per month. As the company grew, its initial architecture based on Tomcat and MySQL struggled to scale. Wix transitioned to a distributed system separating the editor and public segments for higher availability. It also developed Prospero, a media storage system using consistent hashing to shard files across servers. Caching and a CDN help further improve performance and scalability.
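The consistent hashing behind a Prospero-like media store can be sketched briefly. This is a generic hash-ring illustration, not Wix's implementation: the class, server names, and virtual-node count are assumptions for the example.

```python
# Hypothetical sketch of consistent hashing for sharding media files:
# servers and file names hash onto the same ring, and a file is stored
# on the first server at or after its hash position. Adding or removing
# a server only remaps the keys adjacent to it on the ring.

import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, vnodes=100):
        # Virtual nodes smooth out the key distribution across servers.
        self._ring = sorted(
            (_hash(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    def server_for(self, filename: str) -> str:
        idx = bisect.bisect(self._points, _hash(filename)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["media-1", "media-2", "media-3"])
print(ring.server_for("photos/cat.jpg"))  # one of the three servers
```

The payoff over plain modulo sharding is resilience to membership changes: when a server is added or removed, only the files whose hashes fall in its arc of the ring move, rather than nearly everything being reshuffled.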
This document provides a summary of announcements and developments from Quantel at IBC 2010. Key points include:
- Cinnafilm Dark Energy noise reduction, grain addition, and sharpening tools have been integrated into Quantel's eQ, iQ, and Pablo systems.
- A major update for Quantel's integrated MAM, called Mission 2, includes improved file delivery, tighter archive integration, and expanded metadata editing and language support.
- Quantel Pablo was awarded a Lumiere Award for its contributions to advancing stereo 3D technology. New stereo 3D tools have been added to Pablo and iQ based on user feedback.
- Quantel is demonstrating stereo 3D workflows for sports production.
Wordnik's architecture is built around a large English word graph database and uses microservices and ephemeral Amazon EC2 storage. Key aspects include:
1) The system is built as independent microservices that communicate via REST APIs documented using Swagger specifications.
2) Databases for each microservice are kept small by design to facilitate operations like backups, replication, and index rebuilding.
3) Services are deployed across multiple Availability Zones and regions on ephemeral Amazon EC2 storage for high availability despite individual host failures.
Lessons Learned From Internal Communities - Peter Kim
This document summarizes a discussion on using internal social networking at large companies. Representatives from IBM, EMC, Deloitte, and Dachis Corporation discussed their experiences launching internal social platforms, how they are used, and key metrics. They covered challenges around adoption, moderation, and measuring success.
The Sanger Institute generates large amounts of genomic data and requires significant compute resources to analyze it. It has experimented with running its analysis pipelines in the cloud to expand capacity and markets. However, moving large datasets into the cloud and ensuring fast access to the data within cloud compute resources has proved challenging. While individual components like web services have worked well, the high performance computing workloads that rely on large-scale data access and processing have not scaled effectively due to data transfer bottlenecks and lack of high-performance filesystems in the cloud.
This document discusses using the MXML compiler (mxmlc) to compile Flex projects from the command line rather than within Flex Builder. It provides an example command to compile a FlexMXML file located in the user's Documents folder. Additional command line arguments are also demonstrated, such as specifying the output SWF file location and adding library paths. The document recommends adding the Flex SDK bin directory to the system PATH environment variable so mxmlc can be called directly from the command line without specifying the full SDK path.
1. The eBay Architecture
Striking a balance between site stability, feature velocity, performance, and cost
eBay, Inc.
SD Forum 2006
Presented By: Randy Shoup and Dan Pritchett
Date: November 29, 2006