This document discusses big data and its implications for data center architecture. It provides examples of big data use cases in telecommunications, including analyzing calling patterns and subscriber usage. It also discusses big data analytics for applications like genome sequencing, traffic modeling, and spam filtering on social media feeds. The document outlines necessary characteristics for data platforms to support big data workloads, such as scalable compute, storage, networking and high memory capacity.
Introduction
Big Data may well be the Next Big Thing in the IT world.
Big data burst upon the scene in the first decade of the 21st century.
The first organizations to embrace it were online and startup firms. Firms like Google, eBay, LinkedIn, and Facebook were built around big data from the beginning.
Like many new information technologies, big data can bring about dramatic cost reductions, substantial improvements in the time required to perform a computing task, or new product and service offerings.
Overview of analytics and big data in practice, by Vivek Murugesan
This presentation gives an overview of analytics and big data in practice, with a set of industry use cases from different domains. It would be useful for anyone trying to understand analytics and big data.
It looks at what is driving Big Data: market projections to 2017, customer and infrastructure priorities, what drove Big Data in 2013 and what the barriers were. It then introduces business analytics and its types, describes how to build an analytics approach, and gives ten steps to build an analytics platform within your company, plus key takeaways.
How telecommunication companies can leverage the power of Hadoop and Big Data to derive use cases.
Based on Cloudera Whitepaper - Big Data Use Cases for Telcos
Strategizing Big Data in Telco
Big data is a very hot topic nowadays. Some industries depend on it completely, some have opportunities to roll out and execute their strategies, and some are still considering when the right time is to jump in.
To my mind, Big Data is not about technology. Big data is about people generating data and data used for the benefit of people.
Big data is a pool of activities aimed at processing the data a company owns (internal and external) in order to open new revenue opportunities, minimize costs and enhance the user experience.
I had some thoughts on where telecommunication companies might start in formulating a Big Data strategy, and packed the most important of them into a small presentation.
What is the difference between Small Data and Big Data?
What kind of data is used currently, and which data will the new paradigm rely on?
What kind of products are expected from telcos?
My personal ranking of operators in terms of their Big Data execution
What are the stages telcos should pass through to become a Big Data operator?
Prerequisites for Big Data transformation
Please take a look at the presentation to find answers to these questions and feel free to share your opinion.
Thanks!
At the Technology Trends seminar with lecturers from HCMC University of Polytechnics, KMS Technology's CTO delivered a talk on Big Data, Cloud Computing, Mobile, Social Media and In-memory Computing.
This document is the first deliverable of the Lean Big Data work package 7 (WP7). The main goal of work package 7 is to provide the use-case applications that will be used to validate the Lean Big Data platform. To this end, an analysis of the requirements of each use case is provided. This analysis will be used as the basis for describing the evaluation, benchmarking and validation of the Lean Big Data platform.
This deliverable comprises the analysis of requirements for the following case studies provided in the context of Lean Big Data: the Data Centre Monitoring case study, the Electronic Alignment of Direct Debit Transactions case study, the Social Network-based Area Surveillance case study and the Targeted Advertisement case study.
"Big Data Use Cases" was presented at the Lansing BigData and Hadoop Users Group kickoff meeting on 2/24/2015 by Vijay Mandava and Lan Jiang. The demo was built on top of CDH 5.3, HDP 2.2 and the AWS cloud.
Smart Cities and Big Data - Research Presentation, by annegalang
Research presentation on smart cities (sensor technology) and big data, presented in a graduate course I took on Transmedia Design and Digital Culture.
An exploration of a conceptual framework that any municipality or community might adopt, enabling them to deploy the physical and logical infrastructure required to support all SMART functional technology going forward.
IBM and ASTRON, the Netherlands Institute for Radio Astronomy, will unveil a prototype high-density, 64-bit microserver CPU placed on a 133 x 55 mm board running Linux. The partners are building the microserver as part of the DOME project, which is tasked with building an IT roadmap for the Square Kilometer Array, an international consortium to build the world's largest and most sensitive radio telescope. Scientists estimate that the processing power required to operate the telescope will be equal to several millions of today's fastest computers.
IBM scientist Ronald Luijten (@ronaldgadget) will present the microserver in English from ASTRON's offices in Dwingeloo, The Netherlands.
This was recorded on 3 July at 14:00 Central European Time.
IBM and ASTRON 64-Bit Microserver Prototype Prepares for Big Bang's Big Data,..., by IBM Research
IBM and the Netherlands Institute for Radio Astronomy ASTRON have unveiled the world’s first water-cooled 64-bit microserver. The prototype, which is roughly the size of a smartphone, is part of the proposed IT roadmap for the Square Kilometre Array (SKA), an international consortium to build the world’s largest and most sensitive radio telescope. Scientists estimate that the processing power required to operate the telescope will be equal to several millions of today’s fastest computers.
The microserver team has designed and demonstrated a prototype 64-bit microserver using a PowerPC-based chip from Freescale Semiconductor running Linux Fedora and IBM DB2. At 133 × 55 mm the microserver contains all of the essential functions of today's servers, which are 4 to 10 times larger in size.
Not only is the microserver compact, it is also very energy-efficient. One of its innovations is hot-water cooling, which in addition to keeping the chip operating temperature below 85 degrees C will also transport electrical power by means of a copper plate. The concept is based on the same technology IBM developed for the SuperMUC supercomputer located outside of Munich, Germany. IBM scientists hope to keep each microserver operating between 35 and 40 watts including the system on a chip (SoC); the current design draws 60 watts.
The next step for the scientists is to combine 128 of the microserver boards, using the newest T4240 chips, into a 2U rack unit with 1536 cores and 3072 threads and up to 6 terabytes of DRAM. In addition, they will add an Ethernet switch and power module to the integrated water cooling.
Creating Municipal ICT Architectures - A reference guide from Smart Cities, by Smart Cities Project
E-government operations require citizens and external organisations to receive appropriate e-services, delivered by an organisation’s automated business processes and supported by information and communication technologies (ICT). This area of service management can be reinforced and strengthened, however, by using architectures: business architectures, information systems architectures, technology architectures and the processes used to produce them.
Architecture frameworks often are difficult concepts to understand. This publication is a collection of ideas about enterprise architecture which will contribute to people’s understanding of this subject. This publication answers some basic questions: What is an ICT architecture? What is the value of ICT architectures and how are they produced? Why should we bother with them? Using the enterprise architecture approach we show how architectures and the processes followed to produce them can help the development and improvement of e-government.
This presentation looks at what 'The Age of the Platform' means for smart city policy challenges and opportunities. Presented as a keynote address at the Media Architecture Biennale, held as part of Sydney's Vivid Festival in June 2016.
Author: Aggarat Jaisuk ( Romeo Art )
Big Data Trends - Big Data Thailand Meetup#1
Global Perspective - Gartner Hype Cycle for Emerging Tech 2015
Machine Learning - Citizen Datascience
Big Data on AWS
Big Data on Google Cloud Platform
In-memory Big Data Opensources Highlight Projects
New sources of data; new computing power (GPGPU)
Lean into Deep Learning
Big Data Analytics in a Heterogeneous World - Joydeep Das of Sybase, by BigDataCloud
Big Data Analytics is characterized by analysis of data on three vectors: exploding data volume, proliferating data variety (relational, multi-media), and accelerating data velocity. However, other key vectors such as the costs and skill sets needed for Big Data Analytics are often overlooked. In this session, we will consider all five vectors by exploring various techniques where traditional but progressive technologies such as column-store DBMSs and Event Stream Processing are combined with open source frameworks such as Hadoop to exploit the full potential of Big Data Analytics.
Agenda:
- Big Data Analytics in the real world
- Commercial and Open Source techniques
- Bringing together Commercial and Open Source techniques
* Architectures
* Programming APIs
(e.g. embedded and federated MapReduce)
- Conclusions
How to Crunch Petabytes with Hadoop and Big Data Using InfoSphere BigInsights..., by DATAVERSITY
Do you wonder how to process huge amounts of data in a short amount of time? If yes, this session is for you! You will learn why Apache Hadoop and Streams are the core frameworks that enable storing, managing and analyzing vast amounts of data. You will learn the idea behind Hadoop's famous map-reduce algorithm and why it is at the heart of solutions that process massive amounts of data with flexible workloads and software-based scaling. We explore how to go beyond Hadoop with both real-time and batch analytics, usability, and manageability. For practical examples, we will use IBM InfoSphere BigInsights and Streams, which build on top of open source tooling when going beyond the basics and scaling up and out is needed.
New Trend - Big Data Analytics as a service
The combination of ‘data analysis’ and 'big data-open source-cloud computing' opens up a new universe of opportunities at many levels and in many places.
Abstract: Knowledge has played a significant role in human activities since the beginning of human development. Data mining is the process of knowledge discovery, where knowledge is gained by analyzing the data stored in very large repositories from various perspectives and summarizing the result into useful information. Due to the importance of extracting knowledge/information from large data repositories, data mining has become a very important branch of engineering, affecting human life in various spheres directly or indirectly. The purpose of this paper is to survey many of the future trends in the field of data mining, with a focus on those thought to have the most promise and applicability to future data mining applications.
Keywords: Current and Future of Data Mining, Data Mining, Data Mining Trends, Data Mining Applications.
Using a Field Programmable Gate Array to Accelerate Application Performance, by Odinot Stanislas
Intel is looking closely at FPGAs and the potential they bring when ISVs and developers have very specific needs in genomics, image processing, database processing, and even in the cloud. In this document you will have the opportunity to learn more about Intel's strategy, and about a research program initiated by Intel and Altera involving Xeon E5 processors equipped with FPGAs.
Author(s):
P. K. Gupta, Director of Cloud Platform Technology, Intel Corporation
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B..., by Odinot Stanislas
An excellent document that explains step by step how to install, monitor and, above all, correctly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key learning: how to measure real I/O activity under a real workload: how many read/write IOPS, which block sizes, what throughput, and finally what the impact is on SSD endurance and lifetime. A must-read, and a huge thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
SDN and NFV are very much in fashion at the moment: moving from physical appliances to largely software-based network equipment should give enterprises (and telcos in particular) great flexibility and agility. Nevertheless, chaining network services is still a very complex exercise, and this document explains what is already possible on OpenStack by combining, for example, a load balancer (BigIP), a firewall (BigIP), a virtual WAN (RiverBed) or a virtual router (Brocade).
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C..., by Odinot Stanislas
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks: sequential tests, random tests and, above all, a comparison of results before and after optimization. The configuration parameters touched and the optimizations applied (large page numbers, Omap data on a separate disk, ...) bring at least a 2x performance improvement.
SNIA: Swift Object Storage adding EC (Erasure Code), by Odinot Stanislas
An in-depth presentation on EC integration in Swift object storage. Content delivered by Paul Luse, Sr. Staff Engineer at Intel, and Kevin Greenan, Staff Software Engineer at Box, during the fall SNIA event.
PCI Express* based Storage: Data Center NVM Express* Platform Topologies, by Odinot Stanislas
PCI Express is becoming more and more common in servers. Present for years as a bus for expansion cards, it will now appear at the front of servers to serve 2.5-inch flash drives (SFF-8639 connector) and in the form of cables called OCuLink.
Authors:
Michael Hall
Director of Technology Solutions Enabling, Data Center Group, Intel Corporation
Jonmichael Hands
Technical Program Manager, Non-Volatile Memory Solutions Group, Intel Corporation
Bare-metal, Docker Containers, and Virtualization: The Growing Choices for Cl..., by Odinot Stanislas
A friendly introduction to Cloud environments with a particular focus on virtualization and containers (Docker).
Author: Nicholas Weaver – Principal Architect, Intel Corporation
Software Defined Storage - Open Framework and Intel® Architecture Technologies, by Odinot Stanislas
This presentation covers in some detail the notion of an "SDS Controller", which is, in summary, the software layer intended to control all storage technologies (SAN, NAS, distributed storage on disk, flash...) and to expose them to Cloud orchestrators and hence to applications. Lots of good content.
Virtualizing the Network to enable a Software Defined Infrastructure (SDI), by Odinot Stanislas
A very interesting presentation on network virtualization, with detailed explanations of VLAN and VXLAN, but also NVGRE and especially GENEVE (Generic Network Virtualization Encapsulation), supported for the first time on Intel's latest 40 GbE card (XL710).
Intel is developing an "ONP" (Open Network Platform), in other words an open switch offering the basic functions needed for SDN. If you want to know which hardware is used, which software stacks are exploited, and the compatibility with orchestrators in particular, this document is for you.
Moving to PCI Express based SSD with NVM Express, by Odinot Stanislas
A very good presentation introducing NVM Express, which will surely be the interface of the (near) future for SSDs. Farewell SAS and SATA; welcome PCI Express in servers (and client machines).
Intel and Siveo wrote this content, which explains how their Cloud orchestrator works. You will learn how to configure it, benefit from its automatic workload placement feature, and manage multiple hypervisors transparently.
Intel IT Open Cloud - What's under the Hood and How do we Drive it?, by Odinot Stanislas
Intel's IT organization is making its own revolution, committing to act like a "Cloud Service Provider". The transformation under way includes a federated, interoperable and open cloud, as well as a maturity framework, DevOps and risk-taking. Really interesting.
Configuration and Deployment Guide For Memcached on Intel® Architecture, by Odinot Stanislas
This Configuration and Deployment Guide explores designing and building a Memcached infrastructure that is scalable, reliable, manageable and secure. The guide uses experience with real-world deployments as well as data from benchmark tests. Configuration guidelines on clusters of Intel® Xeon®- and Atom™-based servers take into account differing business scenarios and inform the various tradeoffs to accommodate different Service Level Agreement (SLA) requirements and Total Cost of Ownership (TCO) objectives.
In this document you will find the latest improvements made to OpenStack and how certain Intel technologies boost the performance and security of the Cloud environment. Some examples:
How to create pools of secure VMs with geo-tagging support (Intel technologies present in HP, DELL, IBM... servers, plus Folsom, Nova, Horizon, Open Attestation)
How to strengthen the security of OpenStack's new key management module (Intel technologies plus Barbican)
How to benchmark Swift object storage with COSBench (which now supports Ceph, S3 and Amplidata)
Authors:
Girish Gopal - Strategic Planning, Intel Corporation
Malini Bhandaru - Security Architect, Intel Corporation
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and ..., by Odinot Stanislas
From Intel's developer-oriented show (IDF), here is a rather nice presentation on so-called "scale-out" storage, with an overview of the different solution providers (slide 6), including those doing file, block and object storage, followed by benchmarks on some of them, including Swift, Ceph and GlusterFS.
Big Data and Intel® Intelligent Systems Solution for Intelligent Transportation, by Odinot Stanislas
An explanation of how the power of Hadoop can be used to analyze video from cameras on road networks, with the goal of identifying traffic conditions, the types of vehicles in motion, and even licence-plate spoofing.
Big Data and Implications on Platform Architecture
1. Big Data and Implications on Platform Architecture
Fayé A Briggs, PhD
Intel Fellow and Chief Server Platform Architect, Intel
BIGS002
2. Agenda
• What is Big Data?
• Big Data use cases
• What does Big Data mean for the data center?
• Call to action
The PDF for this session presentation is available from our Technical Session Catalog at the end of the day at: intel.com/go/idfsessionsBJ (the URL is at the top of the Session Agenda pages in the Pocket Guide).
4. What is Big Data?
Datasets whose size is beyond the ability of typical database software tools to capture, store, manage and analyze†, characterized by volume, variety, value and velocity.
[Figure: data volume over time, growing from structured corporate data to unstructured big web data, big corporate data and big sensed data, across the capture, store, manage and analyze pipeline.]
†"Big data: The next frontier for innovation, competition, and productivity", McKinsey Global Institute
5. The Four Pillars and Associated
Challenges of Big Data
Volume: massive scale and growth of unstructured data
– 80%~90% of total data
– Growing 10x~50x faster than structured (relational) data
– 10x~100x the volume of traditional data warehousing
Variety: heterogeneity and variable nature of Big Data
– Many different forms (text, document, image, video, ...)
– No schema or weak schema
– Inconsistent syntax and semantics
Value: predictive analytics for future trends and patterns
– Deep, complex analysis (machine learning, statistical modeling, graph algorithms, ...), versus traditional business intelligence (querying, reporting, ...)
Velocity: real-time rather than batch-style analysis
– Data streamed in, tortured, and discarded
– Making an impact on the spot rather than after-the-fact
6. Big Data Has Gone Mainstream
"... every large company is working on Big Data projects at a furious pace." – Gartner analyst Merv Adrian
"... the use of Big Data will be effective for every segment of the economy." – Michael Chui, McKinsey & Co. analyst
(Photos: the Big Data sign gracing Times Square; a giant Walmart* Big Data sign greeting commuters on Highway 101 in Silicon Valley)
7. Big Data Usage Examples (Telecom/Financial/Search)
• Telecom
  – Calling patterns, signal processing, forecasting: analyze switch/router data for call quality, call frequency, region loads, etc.
  – Act before problems happen; act before customer calls arrive.
• Financial
  – Trading behavior: analyze real-time data to understand market behavior and the role of an individual institution/investor
  – Detect fraud, detect the impact of an event, detect the players
• Search engines
  – Process the data collected by Web bots in multiple dimensions
  – Enhance the relevance of search
Big Data impacts e-connected businesses through efficient capture, processing, and storage of huge amounts of data.
8. Big Data Usage Examples (E-Biz, Media)
• Click stream analysis
  – Analysis of online user behavior
  – Develop comprehensive insight (business intelligence) to run effective strategies in real time
• Graph analysis
  – Term for discovering the online influencers in various walks of life
  – Enables a business to understand key players and devise effective strategies
• Lifecycle marketing
  – Strategies to move away from spam/mass mail
  – Enables a business to spend money only on high-probability customers
• Revenue attribution
  – Term for analyzing data to accurately attribute revenue back to various marketing investments
  – A business can identify the effectiveness of a campaign to control expenses
The Big Data phenomenon allows businesses to know, predict, and influence customer behavior!
9. Big Data in Health Care
• Cancer care: the American Society of Clinical Oncologists' "learning health system", CancerLinQ
  – Benefits: collects and analyzes de-identified cancer care data from millions of patient visits
• Genome sequencing: improvements in DNA sequencing are driving down the cost of processing a complete set of genomes
  – Benefits: saving lives through better identification and treatment of diseases
GenBank* is the NIH genetic sequence database, an annotated collection of all publicly available DNA sequences.
The cost of sequencing approaches $1,000, but the analytics, compute, networking, and storage needed to improve affordability remain challenging.
10. Big Data Analytics Processing
• "Batch": sophisticated data processing enabling "better" decisions
  – Analyze, transform, scan, etc. large amounts of data
  – E.g., ETL, graph construction, anomaly detection, trend analysis
• "Real-time": queries on historical data enabling "faster" decisions
  – Data at rest, but it needs to be served in real time
  – E.g., Facebook* uses an HBase* "messaging" app to serve real-time data to its users
• "Streaming": queries on live data enabling "instantaneous" decisions on real-time streaming data
  – Large volumes of data ingested and analyzed in real time
  – E.g., detect and block worms in real time (a worm may infect 1 million hosts in 1.3 seconds)
Adapted from Ion Stoica, UC Berkeley
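The batch-versus-streaming distinction above can be sketched with a toy example (plain Python; the call records and the "top caller" metric are hypothetical, not from the slides): a batch job scans the full dataset at rest before answering, while a streaming job keeps running state, processes each record as it arrives, and can answer on the spot.

```python
from collections import Counter

# Batch: analyze the full dataset at rest, then decide.
def batch_top_caller(cdrs):
    """Scan all call records at once and return the busiest caller."""
    counts = Counter(caller for caller, _ in cdrs)
    return counts.most_common(1)[0]

# Streaming: maintain running state; each record is processed on
# arrival and then discarded, so a decision is available instantly.
class StreamingTopCaller:
    def __init__(self):
        self.counts = Counter()

    def ingest(self, caller):
        self.counts[caller] += 1
        return self.counts.most_common(1)[0]  # current answer, on the spot

# Hypothetical (caller, duration) call records
cdrs = [("alice", 30), ("bob", 12), ("alice", 5), ("carol", 7), ("alice", 2)]
print(batch_top_caller(cdrs))  # ('alice', 3)

stream = StreamingTopCaller()
for caller, _ in cdrs:
    latest = stream.ingest(caller)
print(latest)  # ('alice', 3) -- same answer, but refreshed after every record
```

Both paths reach the same answer; the difference is when it becomes available and how much data must be held at once.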
11. Big Data Converted to Knowledge in an Iterative Cycle†
• Deliver (knowledge): visualization and presentation – Tableau*, R, Progress Software*, Pentaho*, IBM*, others
• Transact (analytics applications):
  – Batch analytics: call data records; gene sequencing & analysis; GraphLab* machine learning; ETL; search; time series; index creation; click stream analysis; BI analytics
  – Real-time analytics: intelligent transport; home energy; financial analytics; sensor network databases; stock ticks; customer behavior ads; retail video and PoS data
  – Streaming analytics: Twitter* spam analytics; video metadata; traffic modeling; virus intrusion detection; stock trading; video surveillance analytics
• Compute (processing engines):
  – Batch processing: MapReduce-Hadoop*, GraphLab*, Giraph
  – Real-time processing: HBase*, SAP* HANA, Spark, Shark, Cassandra*, mongoDB, Drill, Impala
  – Stream processing: Spark Streaming, Storm, S4 (Simple Scalable Streaming System)
• Data management: external unstructured/semi-structured data on a distributed parallel file system (HDFS*, Lustre*, CloudStore*, GPFS, GlusterFS*, etc.); archive
• Big Data ingest: filtering, cleansing, ETL, Scribe, etc.
†Based on Frog's & IDC's layered & iterative approach
14. Telco Usage - China Mobile Group Guangdong
Hadoop* Big Data storage and analytics
Usage Model: Deliver real-time access to Call Data Records (CDRs) for billing self-service
• Solution: Hadoop* + Intel® Xeon® processors over RDBMS to remove data access bottlenecks, increase storage, and scale the system
• Benefits: Lower TCO, up to 30x performance increase, stable operation, analytics on subscriber usage for targeted promotions
• Characteristics:
  – 30TB of billing data/month; real-time retrieval of 30 days of CDRs
  – 300k records/sec, 800k inserts/sec; 15 analytics queries; 133+ server nodes
Platform and Cluster Architectural Attributes:
• Compute:
  – Scale-out Intel Xeon processor E5 based platform for fast analytic query processing
  – Memory: 2-4GB/core for the data-management Hadoop JVM
  – PCI Express* Gen3
• Storage: Scale-out storage (HDD) for capacity; SSD cache
• I/O: High bandwidth and IOPS to storage (HDD + SSD cache) for fast record inserts and reads
• Network: High network bandwidth for Hadoop shuffle data; 10GbE top-of-rack switch; 10-40GbE inter-rack switch
15. Real-Time Analytics: DuPont* – Crop Genetics
HBase* Big Data analytics with BLAST to compare proteins in genomic data
Usage Model: Comparative genomic research; run BLAST to compare each protein with every other protein in the genomic code
• Solution: The existing RDBMS didn't scale to the planned data growth. HBase*/HDFS* reduced processing time from over 30 days to less than 7 hours.
• Characteristics:
  – Genetic data for 4 million organisms – 1TB in size; scaling to 14 million
  – 12 trillion HBase rows; a single-record search takes 1.2 seconds
Platform and Cluster Architectural Attributes:
• Compute:
  – Scale-out Intel® Xeon® processor E5 based platform for real-time data serving
  – Memory: higher memory capacity, 4GB/core, for HBase MemStores
  – PCI Express* Gen3
• Storage: Scale-out storage (HDD) for capacity; SSD cache
• I/O: High bandwidth and IOPS to storage (HDD + SSD cache) for fast access to HBase tables
• Network: High network bandwidth for HBase region servers; 10GbE top-of-rack switch; 10-40GbE inter-rack switch
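The all-pairs protein comparison described above can be sketched in miniature (plain Python; the k-mer similarity function and the sequences are hypothetical stand-ins – real BLAST uses seeded local alignment). The point is the shape of the workload: comparing every protein against every other is quadratic, which is why data for millions of organisms needs a scale-out cluster.

```python
from itertools import combinations

# Toy stand-in for BLAST similarity: Jaccard overlap of k-mers.
def kmers(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / max(len(ka | kb), 1)

# Hypothetical protein sequences
proteins = {"p1": "MKVLAAGIVK", "p2": "MKVLAAGIRR", "p3": "GGGTTTAAAC"}

# All-pairs comparison is O(n^2) in the number of proteins, which is
# what turns millions of organisms into a cluster-scale job.
scores = {(x, y): similarity(proteins[x], proteins[y])
          for x, y in combinations(proteins, 2)}
best = max(scores, key=scores.get)
print(best)  # ('p1', 'p2') -- the most similar pair
```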
16. Telco - China Unicom
Hadoop* & HBase* for Behavioral Analysis
(Diagram: subscriber usage & billing data flows through ETL into storage and analytics – log analysis, daily reports – yielding new customer segmentation & insights)
17. Telco - China Unicom
Hadoop* & HBase* for Behavioral Analysis
Usage Model: Analyze subscriber Web usage and billing to derive new information products
• Solution: Scale-out storage based on Hadoop* & HBase*, with network optimization based on Web traffic; log analysis for daily reporting
• Benefits: New customer segmentation
• Characteristics:
  – 188 nodes, 14TB/server
  – 2.5PB raw disk capacity
  – High-speed data loading
  – Real-time query (latency <1s)
  – Daily statistics & reports (sum, count, join, etc.)
Platform and Cluster Architectural Attributes:
• Compute:
  – Scale-out Intel® Xeon® processor E5 based platform for real-time data serving in <1 sec
  – Memory: higher memory capacity, 4GB/core, for HBase MemStores
  – PCI Express* Gen3
• Storage: Scale-out storage (HDD) for capacity; SSD cache
• I/O: High bandwidth and IOPS to storage (HDD + SSD cache) for fast access to HBase tables
• Network: High network bandwidth for HBase region servers; 10GbE top-of-rack switch; 10-40GbE inter-rack switch
19. In-Memory GraphLab* Analytics: PageRank
Big Data analytics with GraphLab
Usage Model: Deliver page ranking for search
• Solution: Hadoop* + Intel® Xeon® processors: a large number of machine learning, genomics, Web, etc. applications can be run efficiently as graph-parallel solutions
• Benefits: Significantly faster solutions to problems in the graph (vertices, edges) domain
• Characteristics:
  – XML docs, news feeds, Web pages
  – Data collected from Web pages for page ranking
Platform and Cluster Architectural Attributes:
• Compute:
  – Scale-out Intel Xeon processor E5 based platform for fast in-memory analytic processing
  – Memory: 2-4GB/core for the data-management Hadoop JVM and for graph data
  – PCI Express* Gen3
• Storage: Scale-out storage (HDD) for capacity; SSD cache
• I/O: High bandwidth and IOPS to storage (HDD + SSD cache) for fast record inserts and reads
• Network: High network bandwidth for Hadoop shuffle data during graph construction; 10GbE top-of-rack switch; 10-40GbE inter-rack switch
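The PageRank computation itself is compact. A minimal power-iteration sketch (plain Python, with a hypothetical three-page toy graph) shows the per-vertex update that graph-parallel engines such as GraphLab distribute across a cluster:

```python
# Minimal PageRank by power iteration over an adjacency list.
# Graph engines parallelize the same per-vertex update at scale.
def pagerank(graph, damping=0.85, iters=50):
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        new_rank = {v: (1.0 - damping) / n for v in graph}
        for v, out_links in graph.items():
            if out_links:
                share = damping * rank[v] / len(out_links)
                for u in out_links:
                    new_rank[u] += share
            else:  # dangling node: spread its rank evenly
                for u in graph:
                    new_rank[u] += damping * rank[v] / n
        rank = new_rank
    return rank

# Hypothetical toy Web graph: a <-> b, c -> a
graph = {"a": ["b"], "b": ["a"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # 'a' attracts the most rank
```

The total rank stays normalized to 1 across iterations; pages with more (and better-ranked) in-links accumulate more of it.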
20. Pipelined In-Memory Analytics: Twitter* Feed Spam Analytics
Spark Streaming Big Data Analytics
• Run a streaming computation as a series of very small, deterministic batch jobs
• Batch sizes as low as ½ second; latency ~1 second
21. Pipelined In-Memory Analytics: Twitter* Feed Spam Analytics
Spark Streaming Big Data Analytics
Usage Model: Process and filter out Twitter* feed spam as tweets are ingested
• Solution: Spark Streaming from Berkeley
• Characteristics:
  – Data ingested from Twitter feeds
Platform and Cluster Architectural Attributes:
• Compute:
  – Scale-out Intel® Xeon® processor E5 based platform for fast analytic query processing
  – Memory: very large memory capacity close to the CPU; high memory bandwidth
  – PCI Express* Gen3
• Storage: Scale-out storage (HDD) for capacity; SSD cache
• I/O: High bandwidth and IOPS to storage (HDD + SSD cache) for fast record inserts and reads
• Network: High network bandwidth; 10GbE top-of-rack switch; 10-40GbE inter-rack switch
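The micro-batch idea behind this use case can be sketched without Spark at all (plain Python; the batching helper and spam-word rule are illustrative, and real Spark Streaming slices the stream by time interval rather than by record count): the stream is chopped into small deterministic batches, and each batch is run through the same filtering job.

```python
from itertools import islice

SPAM_WORDS = {"free", "winner", "click"}  # hypothetical filter rule

def micro_batches(stream, batch_size):
    """Chop an unbounded event stream into small deterministic batches."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def filter_spam(batch):
    """One deterministic batch job: drop tweets containing spam words."""
    return [t for t in batch if not (SPAM_WORDS & set(t.lower().split()))]

# Hypothetical tweet stream
tweets = ["big data talk at idf", "click here free winner",
          "hadoop cluster tuning", "free offer click now"]
clean = []
for batch in micro_batches(tweets, batch_size=2):  # ~half-second slices in practice
    clean.extend(filter_spam(batch))
print(clean)  # ['big data talk at idf', 'hadoop cluster tuning']
```

Because every micro-batch job is deterministic, a lost batch can simply be recomputed, which is how this model keeps fault tolerance cheap.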
22. Smart City Sensor Model
(Diagram: embedded sensors feeding cloud, dedicated, HPC, transactional, and social back-ends)
• Intelligent city: smart-building sensors, pollution sensors, meteorological sensors, smart meters
• Intelligent factory: smart-grid sensors, industrial automation sensors
• Intelligent hospital: portable medical sensors, medical imaging, sensors on ambulances
• Intelligent highway: inductive sensors on vehicles, traffic cameras, sensors on smartphones, location services
23. The Fusion of the Internet of Things and Big Data
(Diagram: a connected retail store)
• Planogram monitoring (real-time stock levels) via RFID
• Real-time transactions and dynamic pricing
• Real-time, personalized ads with automatic promotions/coupons
• Social network connections
• Interactive displays (behavioral marketing)
• Surveillance cameras (store statistics)
• Store heat map (hot merchandise, browsing history, conversion rate)
25. Smart Traffic Intelligent Transport System
HBase* Application for Predictive Analytics
Usage Model: Analyze city traffic to derive statistics for crime prevention, information sharing, and predictive traffic analysis
• Solution: Embed an HBase* client in each camera for real-time inserts of structured/unstructured data
• Benefits: Automated queries for traffic violations; data mining of fake licenses in <1 minute over all data captured in a week; predictive traffic forecasting
• Characteristics:
  – 30,000+ camera data collection points
  – Petabytes of traffic data & terabytes of images
  – 2 billion HBase records
Platform and Cluster Architectural Attributes:
• Compute:
  – Scale-out Intel® Xeon® processor E5 based platform for real-time data serving
  – Memory: higher memory capacity, 4GB/core, for HBase MemStores
  – PCI Express* Gen3
• Storage: Scale-out storage (HDD) for capacity; SSD cache
• I/O: High bandwidth and IOPS to storage (HDD + SSD cache) for fast access to HBase tables
• Network: High network bandwidth for HBase region servers; 10GbE top-of-rack switch; 10-40GbE inter-rack switch
26. E2E Analytics Use Case - Safe City
(Diagram: video is created at the camera, analyzed, sent to cold storage, and its metadata stored along the way)
• Edge: cameras and edge clients capture video (2-3 TB of video data per camera per month); a smart checkpoint edge device extracts images and creates metadata (300 GB of metadata per camera per month)
• Video storage (edge or centralized), with a management system and police-car access
• Data center/cloud: indexer/analyzer/transcoder and data services (VSaaS, VAaaS) in private/public clouds, scaling from checkpoint to district, city, state, and country
• By end of 2017: 457 PB of raw traffic video streamed by HD cameras per day; 76 PB of metadata (people, cars, geospatial information) per day
• Value of edge and back-end video analytics: metadata tagging and compression at the edge; enables tens of new use models for traffic management; fastest time to information and public safety
• Typical scenario: automated traffic violations – e.g., 70% of traffic violations detected by video in 2011
27. Smart City Big Data Architecture Framework
(Diagram: processing layers mapped to compute platforms, scaling both vertically and horizontally)
• Processing layers, top to bottom: visualization & interpretation; streaming analytics, [un]structured data, and batch analytics; complex event processing and analytics processing; preprocessing (cleansing/filtering/aggregation), local analytics, and storage; data acquisition and video analytics; sensors and cameras
• Compute platforms, top to bottom: Intel® Many Integrated Core architecture; platforms based on Intel® microarchitecture-EX; platforms based on Intel microarchitecture-EP; microservers based on Intel microarchitecture-EN; Intel® Core™ processors; system-on-a-chip
29. Choice of Compute Platforms Optimized for Big Data
Intel® Xeon® processor E5 family:
• Up to 8 cores and up to 20 MB cache per socket
• Up to 4 channels of DDR3 1600 MHz memory
• Integrated PCI Express* 3.0 with up to 40 lanes
• Preferred solution for Hadoop* and scale-out analytic/DW engines
• Up to 80%** performance boost compared to the prior generation
• Intel® Integrated I/O with PCI Express* 3.0 provides more bandwidth for large data sets
• Latest DDR3 memory technology/capacity for reduced memory latency
Intel Xeon processor E7 family (e.g., Xeon E7-4800):
• Up to 10 cores and up to 30 MB cache
• Up to 8 channels of DDR3 1066 MHz memory
• 4 QPI 1.0 links for robust scalability
• Preferred solution for in-memory analytic engines and enterprise databases
• Highest cache and thread performance for large-dataset processing
• Up to 2TB memory footprint (4-socket platform) for in-memory apps
• Highest reliability and 8-socket+ scalability
The right analytic platform begins with Intel Xeon processors.
QPI = Intel® QuickPath Interconnect. **See backup slides for the 80% claim.
30. Platform and Software Optimizations for Hadoop*
Intel® Xeon® processor E5: integrated PCI Express* 3.0 (up to 40 lanes), up to four channels of DDR3 1600 MHz memory, up to eight cores and 20 MB cache per socket
• Up to 80%** performance boost vs. the prior generation
  – Intel® Advanced Vector Extensions – reduced compute time
  – Intel® Turbo Boost Technology – increased performance
• Intel® Distribution for Apache Hadoop* software
  – Built on open source releases
  – Custom tuning for data types and scaling approaches
** See backup slides for the 80% claim
1 Performance comparison using best submitted/published 2-socket server results on the SPECfp*_rate_base2006 benchmark as of 6 March 2012.
2 Source: Intel internal measurements of average time for an I/O device read to local system memory under idle conditions, comparing the Intel® Xeon® processor E5-2600 product family (230 ns) vs. the Intel® Xeon® processor 5500 series (340 ns). See notes in backup for configuration details.
QPI = Intel® QuickPath Interconnect
* Other names and brands may be claimed as the property of others
32. Microserver: High Density, Low Power System Innovations
• Addressing low-power, high-density packaging
• Based on Intel® Atom™ processors
  – Next-generation Intel Atom processor, codename Avoton
  – Workloads: Web tier, SaaS, IaaS, PaaS, and light data analytics
  – For scale-out apps
33. Transforming Storage
Data explosion is driving the storage opportunity:
• 690% growth in storage capacity 2010-2015+
• Distributed storage: 30% CAGR
• Traditional storage: 16% CAGR
(Chart: data volume over time – structured corporate data plus unstructured Big Web Data, Big Corporate Data, and Big Sensed Data)
Intel® Xeon® processors provide storage intelligence:
• Deduplication
• Thin provisioning
• Erasure code
• MapReduce
• Encryption
690 percent growth in storage capacity is based on Intel analysis and IDC data, from 26,066 petabytes in 2010 to 179,327 petabytes in 2015, which is ~690%.
Source: Intel
34. Intelligent Distributed Storage Optimizations
• Deduplication: intelligent pattern matching reduces large blocks of repeated data (before/after)
• Thin provisioning: on-demand utilization of available storage – system-wide virtual and real capacity – instead of traditional per-application reserved allocation
• Real-time data analytics: analysis of real-time storage determines the extent and nature of compression
• Tiering: strategic positioning of faster storage devices improves storage performance
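The deduplication idea can be sketched as fixed-size block hashing (plain Python; this is an illustrative toy, not Intel's implementation – production storage systems add content-defined chunking, persistent indexes, and garbage collection): each unique block is stored once, and repeats become cheap hash references.

```python
import hashlib

# Block-level deduplication sketch: store each unique block once and
# keep only hash references for repeats.
def dedup(data, block_size=8):
    store = {}            # block hash -> block bytes (stored once)
    refs = []             # the file as a sequence of block hashes
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refs.append(digest)
    return store, refs

def rehydrate(store, refs):
    """Reassemble the original data from the block store."""
    return b"".join(store[d] for d in refs)

data = b"AAAAAAAA" * 100 + b"BBBBBBBB" * 100   # highly repetitive input
store, refs = dedup(data)
assert rehydrate(store, refs) == data
print(len(data), "bytes ->", sum(len(b) for b in store.values()), "bytes stored")
# 1600 bytes -> 16 bytes stored (plus reference metadata)
```

The savings depend entirely on how repetitive the data is, which is why pattern matching across large block populations is the interesting (and compute-hungry) part.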
36. Call to Action
• Big Data represents a huge industry opportunity for innovation – get involved!
  – New solutions for analytics
  – Hardware infrastructure innovation – across the platform
• Customers from across enterprise, cloud, government, HPC, and telecom are looking to improve decision making with Big Data
  – If you are a developer of solutions: understand the market opportunities
  – If you are a manager of solutions: understand where Big Data can help your organization
• Intel is deeply engaged in Big Data
  – Work with us on the delivery of Big Data solutions
38. Legal Disclaimer
• Intel® Turbo Boost Technology requires a system with Intel Turbo Boost Technology. Intel Turbo Boost Technology and Intel Turbo Boost Technology 2.0 are only available on select Intel® processors. Consult your PC manufacturer. Performance varies depending on hardware, software, and system configuration. For more information, visit http://www.intel.com/go/turbo.
39. Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the
future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,”
“intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should” and their variations identify forward-looking
statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking
statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors
could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the
following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand
could be different from Intel's expectations due to factors including changes in business and economic conditions; customer acceptance
of Intel’s and competitors’ products; supply constraints and other disruptions affecting customers; changes in customer order patterns
including order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial
conditions poses a risk that consumers and businesses may defer purchases in response to negative financial events, which could
negatively affect product demand and other related matters. Intel operates in intensely competitive industries that are characterized by
a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult
to forecast. Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and
market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing
programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological
developments and to incorporate new features into its products. The gross margin percentage could vary significantly from
expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying
products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and
associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials
or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and
intangible assets. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in
countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters,
infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and
compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's
products and the level of revenue and profits. Intel’s results could be affected by the timing of closing of acquisitions and divestitures.
Intel’s current chief executive officer plans to retire in May 2013 and the Board of Directors is working to choose a successor. The
succession and transition process may have a direct and/or indirect effect on the business and operations of the company. In
connection with the appointment of the new CEO, the company will seek to retain our executive management team (some of whom are
being considered for the CEO position), and keep employees focused on achieving the company’s strategic goals and objectives. Intel's
results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and
by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues, such as
the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an
injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting
Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed
discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most
recent Form 10-Q, report on Form 10-K and earnings release.
Rev. 1/17/13
41. Disclaimer for "Up to 80% performance boost compared to prior generation"
• Performance comparison using best submitted/published 2-socket server results on the SPECfp*_rate_base2006 benchmark as of 6 March 2012. Baseline score of 271 published by Itautec on the Servidor Itautec MX203* and Servidor Itautec MX223* platforms based on the prior-generation Intel® Xeon® processor X5690. New score of 492 submitted for publication by Dell on the PowerEdge T620 platform and Fujitsu on the PRIMERGY RX300 S7* platform based on the Intel® Xeon® processor E5-2690. For additional details, please visit www.spec.org. Intel does not control or audit the design or implementation of third-party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported, and confirm whether the referenced benchmark data are accurate and reflect the performance of systems available for purchase.