How do we know? We know that decision making within businesses is moving from strategic, where there was plenty of time to analyze and really think about everything, to tactical. The tactical is emerging because information is flowing very, very quickly from a variety of sources, and so businesses are attempting to move to a more automated style, if you will, of information acknowledgement and faster decision making.
Businesses have a need to respond in real time. Data analysis enables companies to focus less on what happened (it's automated) and more on why it happened. Drill down in the OLAP process…
Modeling and Predicting
Leverage information for predictive purposes
Utilize advanced data mining and predictive power of algorithms
Real-Time, Operational Analytics
Information must be extremely up to date
Query Response times must be measured in seconds to accommodate real-time, operational decision making
Automated Decision Governance
More and more decisions are executed with event-driven triggers that initiate fully automated decision processes based on real-time data
We talk a lot about things such as big data, but in fact, we have more data that's coming much faster and is much more complex. The complexity comes from the variety of sources that come together. That could be data from Facebook or Instagram coupled with information systems, maybe coupled with some data that may actually be coming out of the supply chain itself. All of this has to be aggregated and pulled together much faster than it has been.
This is what everyone likes to talk about. – (Big Data)
Big Data means continuously collecting data from many sources, but that data needs to be acted on quickly.
Analytical overload causes mission critical problems.
“Big data is part of an iterative life cycle that should be part of an over-arching enterprise information strategy” – Pete Tseronis, Department of Energy
“There’s a risk of getting lost in a sea of detail or spending too much time thinking about volume, and not enough time thinking about actual utility or value” – Mike Olson, Cloudera
“At the heart of big data: “Funding and prioritizing the need to solve a real mission-critical problem.” – Van Ristau, DTL Solutions
So reducing time-to-value and understanding what needs to be done are becoming competitive table stakes for many, many companies. They have more data, they need to manipulate it faster, and they need to do this to reach a better decision much faster than they ever have before.
Carpe Diem – seize the moment.
In reality, you need to seize the moment so that, whenever possible, you can make close-to-the-moment decisions. Decisions may not always be made by humans. Transactions are happening fast; intelligent analysis needs to occur so you can make best-in-class business decisions.
The world is bigger than data. You need the ability to have more data and to act on that data faster; that is what leads you to faster decisions.
Take BIG DATA –transform it to FAST DATA to get QUICKER DECISIONS and INSIGHTS!
BIG DATA CIRCLE: (Data coming from many different places)
More data leads to more decisions
…becoming more informed
More data from more sources is a robust collection of individual data points. (Richer knowledge set)
Collect
FAST DATA CIRCLE:
Acting on data quickly leads to faster decisions
…getting an understanding of your information faster
Transact, Process and Analyze your individual data points to compile business intelligence
Act
Rich Information (outcome of faster decisions)
Faster decisions create a competitive advantage
…leading to an outcome, a decision
Make decisions that turn opportunities into sustained competitive advantage
Decide
We built UCS to:
reduce provisioning time, cabling, management costs, and power and cooling costs, and
offer world-record application performance benchmarks.
Customers still told us they were struggling to apply flash/solid-state technology in their environments
UCS is about the platform, and about making computing simpler to manage. Infrastructure is abstracted, assets are added faster, and UCS Manager allows you to organize and reorganize quicker.
It still doesn't solve the customer's end-to-end problem, which is where customers keep their data.
Need to get it quickly.
It needs to be consistent.
HDDs can't do it, causing a gap
Of the 52 weeks in a year, how many do you want to spend configuring the infrastructure? Answer: zero. Every week you spend is a week you can't do business at the rate you want to.
UCS Invicta plugs in the missing piece to achieve this.
Here’s how:
Data Acceleration Layer for Applications
Address scale of new data sets
Address new velocity requirements
Manage as a System
Simplify infrastructure to drive TCO
Multi-Workload / Multi-Tenant Flexibility
The UCS Founding principles are strikingly similar to the UCS Invicta Series Solid State Systems principles.
The addition of flash memory to UCS's existing servers and network/storage access creates UCS Invicta.
The combined solution's benefit is the integration of performance characteristics much closer to the computing domain (this turns milliseconds into microseconds). (Give an example of what this can translate to in business terms:) a batch cycle that ran for many hours now runs in minutes, or a long-running report now runs immediately. (Giving them the opportunity to do something they couldn't do before.) i.e., the TTX story: a 48-hour process that now runs in 3 hours.
While the storage landscape IS very large, Cisco UCS Invicta really only directly competes with two other companies (Kaminario, Violin… TMS not so much). That's not to say we don't ever find ourselves competing for customers in the field, because we certainly do; technology-wise we are performance focused, while Nimbus, Skyera, and the like focus more on price per gigabyte.
HYBRID
Hard Disk Drive Enhancement
Flash & HDD
Slower than All Flash
Higher energy
CAPACITY ENHANCED
Data Optimization
Inline Data Reduction
Increased Storage Utilization
Effectiveness is Data Dependent
SERVER EMBEDDED
Tactical Workload Acceleration
Single workload
Not Sharable
Not Scalable
SHARED RESOURCE POOL
Strategic Workload Acceleration
Multiple workloads
Sharable
Scalable
We solve three basic problems, and we address three basic objectives. The first is workload acceleration. Applications come onto our platforms, and they go faster. More data can be consumed, and it can be done very, very quickly (low latency). Next, we can eliminate redundant data. By eliminating redundant data, we can actually store more information on our technology, making our platforms very efficient. Then finally, for the datacenter, energy consumption and floor space are both reduced, meaning we consume less of both, and management overhead is reduced because there is simply less to manage; our technology is much simpler to use.
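The redundant-data elimination described above is, in general terms, block-level deduplication: identical blocks are stored once and referenced many times. The sketch below illustrates that general technique only; the class and method names are hypothetical, and the source does not specify Invicta's actual data-reduction algorithm.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication: identical blocks are stored once.
    Illustrative sketch only -- not Invicta's actual implementation."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # fingerprint -> block bytes (stored exactly once)
        self.volume = []   # logical layout: ordered list of fingerprints

    def write(self, data):
        # Split the incoming stream into fixed-size blocks and fingerprint each.
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)   # keep only unique blocks
            self.volume.append(fp)

    def read(self):
        # Rebuild the logical volume from the shared block pool.
        return b"".join(self.blocks[fp] for fp in self.volume)

    def reduction_ratio(self):
        logical = len(self.volume) * self.block_size
        physical = sum(len(b) for b in self.blocks.values())
        return logical / physical if physical else 0.0

store = DedupStore()
store.write(b"A" * 4096 * 3 + b"B" * 4096)  # three identical blocks + one unique
print(store.reduction_ratio())              # 2.0: 4 logical blocks, 2 stored
```

This is also why "effectiveness is data dependent," as the comparison slide notes: highly repetitive data deduplicates well, while unique or already-compressed data does not.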
Our portfolio consists primarily of two products. The foundation appliance has very high performance, very good bandwidth, and very good storage characteristics. That foundation appliance can be deployed as part of a UCS Invicta Scalable System, allowing the customer to scale on both the performance and capacity axes. It offers scalability, modularity, the ability to accelerate an application, the ability to optimize the data, and the ability to handle multiple workloads at the same time, all without sacrificing performance.
Performance numbers are based upon 4K block over FC connection.
A note from Engineering: benchmarking a data reduction node will never be 100% accurate. Performance will vary based on use case. Engineering's advice is to use DR performance numbers as a guideline. Engineering has indicated that as they continue to test various models, they will share key findings.
---
What are our two sets of building blocks, if you will? If you have the appliance, you're really looking at the ability to either accelerate workloads or reduce the amount of data that is actually stored. It's a unique offer. We're really the only vendor in the marketplace today that allows our customer to choose the kind of appliance they would like to use. Those same building blocks, folded into our UCS Solid State Systems technology, become what we refer to as silicon nodes. The customer can choose one or the other, or they can actually combine them under the same architecture. Of course, what you can see here are the performance and capacity characteristics.
When we combine it all together, what we end up with here is a Scale Up/Scale Out architecture. You can start with as little as a pair of routers and a pair of nodes. If you decide that you need to drive more throughput to the nodes, you can add more routers. If you decide that you would like to add more performance and capacity, you can add more nodes. Every time you add a node, you're adding CPU, memory, operating system, independently managed function, and most importantly, flash management. This is what sets us apart from many of our competitors.
The INFINITY architecture, when broken out, looks like this. We have a switched fabric located at the top, and acceleration or data reduction nodes at the bottom. You can scale multiple routers and multiple nodes in a variety of combinations.
In terms of laying out how our products sort of stack in terms of features, what you'll notice is that most of them have the same features, except when we get into UCS Solid State Systems and Scale Up/Out architecture, you'll notice that there are a couple of additions. We can provide things such as mirroring. When you move over to INFINITY, we actually have this switch fabric, and that's what allows you to scale to the increased number of routers and an even greater number of nodes. This is what allows an organization to really bring all of their applications, if you will, into our platform regardless of what needs data reduction or IO and throughput.
Cells on Chips, Chips on Drives – 960 GB max on ours right now….
It sure doesn't look like a hard drive, so why treat it like one? Flash stores its data by trapping electrons in the floating gate. Each time it does this, it weakens the cell, so careful management techniques need to be employed to reduce the amount of wear we place on the media. In addition, once a cell is programmed, it cannot be reprogrammed until it is FLASH erased back to a clean state. Making matters worse, these cells are not independently erasable: due to packaging constraints, you must ERASE large blocks at a time. This unfortunately takes TIME (milliseconds).
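The program/erase constraint above can be made concrete with a toy model: pages can be programmed once, only the whole block can be erased, and every erase adds wear. All names here are hypothetical, and real NAND geometry, page counts, and timings differ.

```python
class FlashBlock:
    """Toy model of a NAND flash erase block: a page can be programmed
    once, and only the whole block can be erased -- each erase adds wear.
    Illustrative sketch; real NAND geometry and timings differ."""

    def __init__(self, pages=4):
        self.pages = [None] * pages   # None = erased (clean) page
        self.erase_count = 0          # proxy for cumulative cell wear

    def program(self, page, data):
        if self.pages[page] is not None:
            # No in-place overwrite: flash must be erased back to clean first.
            raise ValueError("page already programmed; erase block first")
        self.pages[page] = data

    def erase(self):
        # Erase works only at block granularity, and it is slow (ms-scale).
        self.pages = [None] * len(self.pages)
        self.erase_count += 1

blk = FlashBlock()
blk.program(0, b"v1")
try:
    blk.program(0, b"v2")   # rewriting a programmed page is not allowed
except ValueError:
    blk.erase()             # must erase the *whole* block...
    blk.program(0, b"v2")   # ...before that page can be reused
print(blk.erase_count)      # 1 erase cycle of wear incurred
```

This is why naive in-place updates, the natural pattern on a hard drive, are so costly on flash: every overwrite forces a slow block erase and burns an erase cycle.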
Labs Deck Slides, and then Symbols slide from the sales deck next
Symmetrical I/O occurs because we take random data and actually make it sequential. Making it sequential makes it highly efficient for flash. A few things occur: we can write to the media very, very quickly while protecting it, and we also reduce wear on the media, because we're not allowing the flash itself to be involved in determining how that information will be written. We handle all of that.
Many of our competitors don't directly manage the flash. They allow the actual local flash media to make all kinds of decisions about how the data should be applied, which is not very efficient at all.
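The random-to-sequential idea described above resembles, in general terms, a log-structured write layer: random logical writes are appended sequentially to a log, and a map tracks where each logical address's newest copy lives. The sketch below shows that general technique only; the names are hypothetical and this is not Invicta's actual implementation.

```python
class LogStructuredLayer:
    """Toy log-structured write layer: random logical writes become
    sequential appends, which suit flash and reduce wear. Illustrative
    sketch of the general technique, not Invicta's implementation."""

    def __init__(self):
        self.log = []   # physical medium: strictly append-only
        self.map = {}   # logical block address -> position in the log

    def write(self, lba, data):
        self.map[lba] = len(self.log)   # newest copy wins in the map
        self.log.append(data)           # always a sequential append

    def read(self, lba):
        return self.log[self.map[lba]]

layer = LogStructuredLayer()
for lba, data in [(7, b"x"), (2, b"y"), (7, b"z")]:   # random addresses
    layer.write(lba, data)
print(layer.read(7))   # b'z' -- the latest version, found via the map
```

Note the trade-off this design implies: stale log entries (like the first write to address 7) accumulate until garbage collection reclaims them, which is exactly why disciplined flash management matters.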
Here is what we built UCS to do
Customers told us they were struggling to apply flash/solid-state technology in their environments
Data Acceleration Layer for Applications
Address scale of new data sets
Address new velocity requirements
Manage as a System
Simplify infrastructure to drive TCO
Multi-Workload / Multi-Tenant Flexibility
Pooled Resource
Deterministic performance
Operational Efficiency