
Big Data Day LA 2016/ Use Case Driven track - From Clusters to Clouds, Hardware Still Matters, Eric Lesser, Director, Operations, PSSC Labs


Today’s Software Defined environments attempt to remove the weaknesses of computing hardware from the operational equation. There is no doubt that this is a natural progression away from overpriced, proprietary compute and storage layers. However, at the heart of any Software Defined universe is an underlying hardware stack that must be robust, reliable, and cost-effective. Our 20+ years of experience delivering over 2,000 clusters and clouds has taught us how to properly design and engineer the right hardware solution for Big Data, Cluster, and Cloud environments. This presentation shares that knowledge, allowing users to make better design decisions for any deployment.

Published in: Technology


  1. In Today’s Software Defined Universe, Hardware Still Matters
  2. In 2016, the priority for the C-Suite is reducing CAPEX and OPEX. As a result, IT departments are leaner than ever before, and the trend will only continue. Public cloud utilization is growing, with many companies moving to AWS, Microsoft Azure, and SoftLayer. Some companies deploy hybrid clouds, mixing their own infrastructure with public cloud services. Although the “Software Defined” movement has removed some reliance on hardware, customers must still select the right hardware to support their organization’s operational needs.
  3. Over the course of 26 years, we have delivered 2,000+ clusters for HPC, Big Data, and Cloud environments. With white-glove, US-based service and support, we are the “American Supercomputer Company.” Through unique engineering, PSSC Labs develops purpose-built server platforms for Hadoop, SQL/NoSQL, virtual machines, app servers, and common data center applications. Selected customers include:
      MIT
      Stanford
      Rubicon Project
      OpenX
      FireEye
      Roche Diagnostics
  4. PSSC Labs has reached three key determinations:
      Hardware is complex to configure, manage, and support. With expertise developed over 26 years, we excel in supporting and managing hardware for the duration of its lifecycle.
      Each data center deployment is unique. We understand the critical factors in a successful deployment: rack space, power consumption, time to production, and the overall business and cost savings tied to these factors. We answer these questions for the C-Suite.
      Public Clouds allow for easier scaling but present cost, performance, and control issues. In that vein, Facebook and Google, which are driving the Open Compute movement, have proven that commodity hardware can maximize cost-effectiveness, performance, and control for IT organizations.
  5. CAPEX is the actual cost of the hardware. Today, most server vendors are relatively close in cost, usually within 10-20% of one another. OPEX is the cost to operate the equipment over time, including:
      Data center floor space
      Power consumption
      Network connectivity
    PSSC Labs is keenly aware of these operating expenses and delivers servers that offer 2x the density and draw 50% less power than other vendors’. Over time, this reduces the Total Cost of Ownership (TCO) of the infrastructure, which will shape any C-Suite decision to move forward with hardware deployments.
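The CAPEX/OPEX trade-off described above can be sketched as a simple calculation. All figures below are hypothetical, chosen only to illustrate how hardware with a higher purchase price can still yield a lower TCO when its operating costs fall:

```python
def tco(capex, annual_opex, years):
    """Total cost of ownership: purchase price plus cumulative
    operating costs (floor space, power, network) over the period."""
    return capex + annual_opex * years

# Hypothetical vendor A: cheaper hardware, higher power and space costs.
vendor_a = tco(capex=100_000, annual_opex=40_000, years=5)   # 300,000

# Hypothetical vendor B: ~15% higher CAPEX, but twice the density and
# half the power draw roughly halve the annual OPEX.
vendor_b = tco(capex=115_000, annual_opex=20_000, years=5)   # 215,000

print(vendor_a, vendor_b)
```

Under these assumed numbers, the denser, more efficient platform costs 15% more up front yet saves roughly 28% over five years.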
  6. With any hardware consideration, the C-Suite should ask: how will the business benefit? Begin at the application and workload level: understand the needs of users first, and only then consider hardware solutions. One of our customers, Optoro, began to move away from relying solely on a public cloud service earlier this year because they needed a more stable and controlled environment. In this instance, CAPEX increased, but OPEX decreased by an even greater amount. For Optoro, the result was a lower overall TCO because they considered their specific needs first, and then determined the hardware.
  7. Beware of over-engineered platforms: today’s movement to Software Defined Infrastructure allows for more cost-effective hardware platforms built from commodity components. Focus on flexibility: applications’ and users’ needs change over time, so make sure you are not buying special-purpose hardware that will lose value when those changes materialize. Design for savings: OPEX savings come from platforms that are more energy efficient, denser, more reliable, and easier to support.
  8. The ease of use and scalability of public cloud services can be difficult to move past, and CTOs are always looking for ways to reduce CAPEX. But companies that look beyond CAPEX and consider the overall impact on TCO will recognize the benefits of deploying their own infrastructure. Once that decision is made, the real work begins: ensuring a successful deployment and operational efficiency over time. A trusted partner like PSSC Labs, one that understands how to deploy hardware and support it over time for organization-specific needs, can help here.