DME is a software-defined data management engine for many-core SoC designs. It supports distributed shared memory and scales with the number of cores. DME comes in three versions with different footprints and levels of programmability and performance. Planned future features include support for additional interconnect protocols and memory consistency models.
- Next-generation Intel Xeon 5500 processors provide additional performance through features like QuickPath architecture, integrated memory controllers, Turbo Mode, and Hyper-Threading.
- DDR3 memory is available in both RDIMM and UDIMM formats, with RDIMMs offering greater capacity but higher cost.
- ProLiant G6 servers feature improvements over G5 like support for PCIe 2.0, SSDs, 10Gb Ethernet, and more powerful Smart Array controllers.
The document discusses HP's mission-critical storage solutions including the 3PAR storage array. It highlights key features of 3PAR such as reducing capacity requirements by at least 50% through tiering, delivering high performance even during failures, and reducing storage management burden by 90% compared to competitors' arrays. It also discusses how 3PAR provides massive consolidation, virtual private arrays, and the ability to sustain and consolidate diverse workloads without compromise.
This document summarizes a presentation on datacenter computing trends and problems. It discusses how cooling is a major source of energy inefficiency in datacenters. It also explains how servers are rarely fully utilized but operate least efficiently during common usage of 30% load. The document advocates for achieving better energy proportionality so servers can be more efficient during typical usage levels. It presents approaches like disaggregated memory and servers that break CPU-memory co-location to improve efficiency and consolidation.
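The energy-proportionality argument can be made concrete with a toy model (the idle-power fraction below is illustrative, not a figure from the presentation): when a server draws a large fixed idle power, its work-per-watt efficiency at the typical ~30% load cited above is far below peak.

```python
# Sketch: why non-energy-proportional servers waste power at typical load.
# idle_fraction is an illustrative assumption: the server draws 60% of
# peak power even when completely idle.

def power(util, idle_fraction=0.6):
    # total power = fixed idle draw + load-proportional component
    return idle_fraction + (1 - idle_fraction) * util

def efficiency(util):
    # useful work delivered per unit of power, normalized to peak
    return util / power(util)

print(round(efficiency(0.30), 2))  # ~0.42 of peak efficiency at 30% load
print(round(efficiency(1.00), 2))  # 1.0 at full load
```

An energy-proportional server (idle_fraction near 0) would keep efficiency near 1.0 across the whole utilization range, which is the goal the presentation advocates.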
UGIF 12 2010 - October 2010 pricing and packaging (external), UGIF
The document discusses updates to IBM Informix packaging and pricing. New editions are being introduced with simplified names and more inclusive features. Licensing is shifting to new metrics like processor value units (PVUs) and authorized user single install to better align with standard definitions. There will also be a migration of existing customers to the new entitlements on May 25, 2010.
Cameron Swen is the Divisional Marketing Manager for AMD's Embedded Solutions Division. He is responsible for outbound marketing and works with AMD's customers to develop and market board- and system-level solutions for the COTS market.
Performance and scalability of Informix Ultimate Warehouse Edition on Intel Xe..., Keshav Murthy
Talk at the Information on Demand Conference 2011. As part of the Informix Ultimate Warehouse Edition, the Informix Warehouse Accelerator (IWA) transparently provides up to several orders of magnitude of query-performance speedup for Informix Dynamic Server (IDS), as well as enormous administrative cost savings. Combined with the Intel Xeon E7 processor series, Informix and the Accelerator bring the performance and scalability of IDS solutions to new levels. This presentation gives best practices and benefits of IWA and the Intel Xeon E7 processors, and highlights the implications and performance benefits of running IDS and IWA on these processors compared to previous releases of IDS and prior Intel server platforms.
EMC's VNX unified storage system:
1) Is optimized for flash, with a powerful, flexible modular architecture designed for high performance.
2) Features efficient packaging with dense disk options and built-in energy efficiency.
3) Provides a mix of ultra-performance, performance, and capacity drives for optimal economics.
PRIMERGY BladeFrame: Features and Benefits, FSCitalia
The document discusses the complexity of managing today's datacenters and how it results in applications being tied to specific IT resources. It introduces PRIMERGY BladeFrame, which provides a centralized managed server pool and resource management platform to help reduce this complexity. The platform uses a "black box" approach to highly integrate hardware and resource management. It allows for central, dynamic allocation of processors, network, I/O, and storage on demand. This can help provide less complexity, increased agility, and reduced costs for the datacenter.
SAP Virtualization Week 2012 - The Lego Cloud, aidanshribman
This document discusses a research project called Hecatonchire that aims to provide distributed shared memory (DSM) capabilities to cloud computing. Hecatonchire breaks down physical servers into their core components of CPU, memory, and I/O, and extends existing cloud software like KVM, QEMU, and libvirt to allow virtual machines to transparently access remote resources. Some key capabilities discussed include live migration, flash cloning for rapid auto-scaling, memory pooling to access unused memory on remote hosts, and the long term goal of implementing true DSM across the cluster. The presentation was given by researchers from SAP Research in Belfast and Israel.
Algorithmic Memory Increases Memory Performance by an Order of Magnitude, chiportal
Algorithmic memory increases memory performance by an order of magnitude using algorithms and memory macros. It presents a standard memory interface while adding no clock cycle latency. This allows creating multiport functionality from single-port physical memory. Algorithmic memory lowers area and power while increasing available memory ports and clock performance compared to physical memory alone. It provides configurable high performance, density-efficient, and power-efficient memories to alleviate the growing processor-embedded memory performance gap.
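One classic way to get multiport behavior out of single-port physical memory, in the spirit of the summary above, is address interleaving across banks; the sketch below is an illustrative model (not the vendor's actual algorithm) in which two single-port banks serve two accesses per cycle unless both accesses hit the same bank.

```python
# Sketch: emulating a two-port memory from two single-port banks via
# address interleaving (even addresses -> bank 0, odd -> bank 1).
# Illustrative model only; real algorithmic memories use more
# sophisticated schemes to avoid bank-conflict stalls.

class InterleavedMemory:
    def __init__(self, size):
        self.banks = [[0] * (size // 2 + 1) for _ in range(2)]

    def access(self, requests):
        """Serve up to two (op, addr, value) requests per 'cycle'.
        Returns (results, stalled): two requests hitting the same
        single-port bank conflict, so the second one is stalled."""
        busy, results, stalled = set(), [], []
        for op, addr, value in requests:
            bank_id = addr % 2           # bank select bit
            if bank_id in busy:
                stalled.append((op, addr, value))  # bank conflict
                continue
            busy.add(bank_id)
            bank, row = self.banks[bank_id], addr // 2
            if op == "write":
                bank[row] = value
                results.append(None)
            else:
                results.append(bank[row])
        return results, stalled

mem = InterleavedMemory(16)
mem.access([("write", 2, 42), ("write", 5, 7)])   # different banks: both proceed
res, stalls = mem.access([("read", 2, None), ("read", 5, None)])
# res == [42, 7]; stalls == [] (addresses 2 and 5 hit different banks)
```

Two even (or two odd) addresses in the same cycle would conflict, which is exactly the case a real algorithmic memory must hide to present a true multiport interface with no added latency.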
IBM software is optimized to take full advantage of Power Systems hardware. Key IBM software products like DB2, WebSphere Application Server, Cognos Business Intelligence, and Rational Developer are tuned at every level from applications to middleware to operating systems to fully leverage Power Systems' massive parallelism, threading, memory, and reliability. This optimization allows IBM software on Power Systems to deliver industry-leading performance, scalability, efficiency and value for customers.
The AMD Embedded G-Series platform is an integrated circuit that combines a low-power CPU and discrete-level GPU into a single chip called an Accelerated Processing Unit (APU). It provides high-performance graphics and video capabilities in a low power and compact package suitable for embedded applications like digital signage and set-top boxes. Key benefits include exceptional performance per watt, support for high-resolution displays, and lower system costs through integration and a long product lifecycle.
Designed for optimum flexibility and growth, the IBM System x3530 M4 offers a wide range of configuration options to help meet general business requirements today and tomorrow. Start with the combination of hard disk drive size, networking and computing power you need now, then scale up through an easy upgrade path...
CS 167 is an operating systems course that involves writing five programs of increasing difficulty in C and completing homeworks and exams. Students will build their own operating system by developing a kernel and adding features like virtual file systems and virtual memory. The course requires skills in C programming, debugging, and computer architecture. An operating system provides convenient abstractions of hardware like files instead of disks and threads instead of processors to make the underlying system easier for programmers to use while managing concerns like performance, sharing, security and reliability.
This document discusses several models for implementing threads:
1) The one-level model treats user and kernel contexts as a single thread scheduled by the kernel.
2) The variable-weight processes model shares resources between processes like threads.
3) The two-level model separates user and kernel contexts into user threads scheduled on kernel threads.
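The one-level (1:1) model in the list above can be observed directly from CPython, whose `threading` module backs each user thread with its own kernel thread:

```python
# Sketch: CPython threads follow the one-level (1:1) model -- each
# Python thread is backed by a distinct kernel thread that the OS
# scheduler manages directly.
import threading

native_ids = []
barrier = threading.Barrier(4)

def worker():
    # get_native_id() returns the kernel-assigned thread id (Python 3.8+)
    native_ids.append(threading.get_native_id())
    barrier.wait()  # keep all four threads alive at the same time

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(native_ids)))  # 4: one kernel thread per user thread
```

Under a two-level model, by contrast, more user threads than kernel threads could exist, and the ids would not be in one-to-one correspondence.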
The document discusses how AMD Opteron processors can provide optimal virtualization when used with VMware ESX. It highlights key AMD technologies like AMD-V with Rapid Virtualization Indexing (RVI) that can improve performance and efficiency of virtualized environments by enhancing memory management and switching between virtual machines. The document also outlines business values like scalability and investment protection that AMD Opteron processors can deliver for virtualization.
The IBM System x3620 M3 is a cost-optimized 2-socket server designed for growing businesses. It uses the latest Intel Xeon processors and supports up to 192GB of RAM and 16TB of storage. The server is customizable, flexible, energy efficient, and backed by IBM services and support to help ensure reliability and uptime. It offers affordable growth options as business requirements change.
Learn about memory sizing for WebSphere applications and good memory allocation for Linux on IBM System z. For more information, visit http://ibm.co/PNo9Cb.
Next Generation Business Service Management: Strategy and Roadmap, Novell
This session covers the strategic direction of the Business Service Management solution from Novell within the data center, as well as the strategy moving forward.
This document discusses various methods for compressing animation files, including reducing aspect ratio, frame rate, and using codecs. It describes lossless compression, which preserves quality, and lossy compression, which achieves higher compression ratios but can reduce quality. Common compression formats discussed include MPEG, QuickTime, SWF, AVI, and GIF.
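The lossless/lossy distinction drawn above can be illustrated with run-length encoding, one of the simplest lossless schemes (the example is illustrative, not from the document):

```python
# Sketch: a minimal run-length encoder. Lossless compression like this
# recovers the input exactly on decode; lossy codecs trade that exact
# fidelity for higher compression ratios.

def rle_encode(data):
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))  # (run length, symbol)
        i = j
    return runs

def rle_decode(runs):
    return "".join(symbol * count for count, symbol in runs)

frame_row = "AAAAABBBCCCCCCCCCA"  # e.g. one row of a flat-colored frame
encoded = rle_encode(frame_row)   # [(5,'A'), (3,'B'), (9,'C'), (1,'A')]
assert rle_decode(encoded) == frame_row  # lossless: exact round trip
```

Animation frames with large flat-colored regions compress very well this way, which is why reducing frame rate and resolution (fewer, smaller frames) compounds with codec compression.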
The document summarizes AMD's 2012 A-Series desktop platforms. Key points include:
- The 2012 AMD A-Series features "Piledriver" CPU cores and discrete-class AMD Radeon HD 7000 graphics on a single chip. It provides up to 4 CPU cores and improved performance over 2011's AMD A-Series.
- Benchmarks show the 2012 AMD A-Series provides significantly better performance than Intel processors in areas like compression, image processing, and gaming, due to its accelerated processing capabilities. Overclocking can further boost performance by up to 68%.
- For mainstream gaming, the document argues the integrated graphics of the 2012 AMD A-Series provide better performance and value than a competing Intel processor.
Io Express is a portable video and audio I/O interface that provides 10-bit HD/SD-SDI and HDMI input/output, hardware-based HD to SD downconversion, 2-channel analog audio output, and LTC timecode input/output. It is compatible with Mac and PC and supports a variety of video and audio applications. Io Express has a small, rugged design making it ideal for use with laptops and in the field.
DataCore's storage virtualization software provides high availability network attached storage (NAS) by enabling non-disruptive failover of clustered file shares across physical servers. It uses synchronous mirroring of file shares between redundant NAS servers for business continuity. Caching and thin provisioning enhance performance and storage efficiency. The solution provides high availability, faster performance, space savings and disaster recovery protection for NAS environments in a cost-effective way by leveraging existing server infrastructure.
This document summarizes AMD's Financial Analyst Day presentation from November 11, 2009. It discusses AMD's transition from CPUs to GPUs over time, increasing transistor counts and capabilities. It highlights AMD's current and future product strategies, including their Fusion era of computing that combines CPU and GPU capabilities on a single chip. It outlines AMD's priorities and roadmaps for server, client, and graphics platforms through 2010 and 2011, emphasizing improved performance, power efficiency, and competitive advantages through GPU technology.
This document proposes a software-defined approach called SDPM (Software-Defined Persistent Memory) to abstract the heterogeneity of emerging persistent memory technologies and enable their use across different hardware configurations. It describes SDPM's design goals of supporting various local and remote persistent memory attach points while providing a unified programming model. The proposed architecture introduces a persistent memory manager and a file system to manage data placement and provide memory-like and storage-like access. An evaluation shows the prototype delivering near-optimal performance for local and remote persistent memory configurations.
This document discusses hardware and software requirements for high-performance packet processing. It covers Direct Cache Access (DCA) which allows network cards to directly write received packets to the CPU cache to reduce memory traffic. It also discusses virtualization techniques like Virtual Machine Device Queues (VMDq) and Receive-Side Scaling (RSS) which improve performance by distributing network traffic across multiple CPU cores using multiple receive queues.
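The RSS mechanism described above boils down to hashing each packet's flow tuple into a queue index, so every packet of a flow lands on the same queue and core. The sketch below uses CRC32 as a stand-in for the Toeplitz hash real NICs use:

```python
# Sketch of the Receive-Side Scaling (RSS) idea: hash a packet's flow
# tuple to pick a receive queue, keeping each flow pinned to one
# queue/core (preserving per-flow ordering and cache locality).
# Real NICs use a Toeplitz hash plus an indirection table; CRC32 here
# is purely illustrative.
import zlib

NUM_QUEUES = 4  # e.g. one receive queue per CPU core

def rss_queue(src_ip, dst_ip, src_port, dst_port):
    flow_key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(flow_key) % NUM_QUEUES

# Every packet of the same TCP flow maps to the same queue:
q1 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 80)
q2 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 80)
assert q1 == q2 and 0 <= q1 < NUM_QUEUES
```

Different flows hash to different queues with high probability, which is what spreads aggregate traffic across cores without reordering any single flow.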
Lego Cloud - SAP Virtualization Week 2012, Benoit Hudzia
This session will demonstrate that by extending KVM we can deliver, non-disruptively, the next level of IaaS platform modularization. We will first show instantaneous live migration of VMs. Then we will introduce the memory aggregation concept, and finally show how to achieve full operational flexibility by disaggregating datacenter resources into their core elements.
DB2 pureScale provides unlimited scalability, application transparency, and continuous availability for transaction processing and ERP workloads. It uses a data-sharing architecture in which multiple database instances (members) connect to a single shared database and cooperate to present a single system image to clients. PowerHA pureScale technology handles global buffer pool and lock management to maintain data consistency as members scale out.
This document discusses tuning PowerCenter for performance. It outlines steps to measure performance, determine bottlenecks, and make targeted changes. Key aspects of the PowerCenter architecture like the engine, memory usage, and threading model are explained. Common bottlenecks like targets, sources, and mappings are described along with solutions like indexing, filtering, and transformation optimization.
CompTIA flashcards set 2 (25 cards), CPU-ERD, Sue Long Smith
This document contains flashcards with definitions of computer-related terms starting with letters C through E. Each term is defined in 1-2 sentences. Terms include CPU, CRT, DB-25, DDR, DHCP, DIMM, DVD-RAM, EISA, EMI, and ERD. The flashcards provide concise explanations of commonly used technical computing terms.
The document discusses Open Virtual Platforms (OVP) software for creating virtual platforms of processors and peripherals. It provides an overview of OVP's capabilities such as instruction accurate simulation, connecting to debuggers, and efficient system modeling. Examples are given demonstrating single and multicore platforms using PowerPC processors along with an example integrating OVP models into SystemC TLM 2.0.
This document provides an overview of embedded systems, including definitions, components, and applications. It discusses the main types of embedded processors like microprocessors, microcontrollers, and DSPs. It also covers embedded system memories, development process, real-time aspects, and commonly used programming languages like assembly and C. The document is intended as an introduction to embedded systems.
The document summarizes different types of semiconductor memory technologies, including dynamic RAM (DRAM), static RAM (SRAM), read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, synchronous DRAM (SDRAM), Rambus DRAM (RDRAM), and double data rate SDRAM (DDR SDRAM). It describes the basic operation and characteristics of each technology. Key aspects like refresh requirements, cell structures, write capabilities, and performance are compared between DRAM and SRAM.
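The refresh requirement that distinguishes DRAM from SRAM in the summary above has a quantifiable cost; the figures below are illustrative round numbers, not vendor datasheet values:

```python
# Sketch: estimating DRAM refresh overhead. Every row must be
# refreshed within a retention window, stealing a slice of the bank's
# time budget -- a cost SRAM does not pay. All numbers are
# illustrative assumptions, not from any specific part.

ROWS = 8192            # rows per bank that need refreshing
RETENTION_MS = 64      # all rows must be refreshed within 64 ms
T_REFRESH_NS = 350     # time one refresh command occupies the bank

busy_ns = ROWS * T_REFRESH_NS          # total time spent refreshing
window_ns = RETENTION_MS * 1_000_000   # the retention window, in ns
overhead = busy_ns / window_ns
print(f"{overhead:.2%}")               # ~4.48% of bank time lost to refresh
```

This overhead grows as densities increase (more rows, longer refresh commands), which is one reason refresh behavior matters when comparing DRAM generations.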
Greenplum Analytics Workbench - What Can a Private Hadoop Cloud Do For You? EMC
This session discusses the rationale behind the Greenplum Analytics Workbench initiative: its goals, its status today, and the roadmap for this first-of-a-kind initiative. Enterprises learn how a Hadoop cloud can help unlock revenue opportunities from the data within the cluster.
This document provides an overview of the Distributed Symmetric Multiprocessing (DSMP) software architecture. DSMP transforms an InfiniBand connected cluster of commodity servers into a shared memory supercomputer through two unique software components: 1) a host operating system that runs on the head node, and 2) a lightweight microkernel that runs on the other servers. Key aspects of DSMP include a shared memory system, optimized InfiniBand drivers, an application-driven memory page coherency scheme, enhanced multithreading support, and distributed disk storage. DSMP allows commodity clusters to provide shared memory capabilities at a lower cost than proprietary supercomputers.
The world’s information is doubling every two years. In 2011 the world created a staggering 1.8 zettabytes. By 2020 the world will generate 50 times the amount of information and 75 times the number of "information containers", while IT staff to manage it will grow less than 1.5 times. This session introduces students to various storage networking, & business continuity terminologies.
The document discusses scalability and provides an overview of key concepts for scaling systems. It defines scalability as the ability to handle growing amounts of work in a capable manner. There are two main types of scalability: vertical, which involves getting bigger resources; and horizontal, which involves adding more resources. The document outlines several rules for scalability and discusses strategies for scaling different parts of the system like databases, caching, load balancing, and asynchronous operations.
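The horizontal-scaling strategy described above is often realized by sharding: spreading data across added servers by hashing the key. A minimal sketch, with all class and key names invented for illustration:

```python
# Sketch: horizontal scaling via hash-based sharding -- adding shards
# (servers) instead of growing one server. Names are illustrative.
import hashlib

class ShardedStore:
    def __init__(self, num_shards):
        # each dict stands in for one server's local store
        self.shards = [{} for _ in range(num_shards)]

    def _shard_for(self, key):
        # stable hash so a key always routes to the same shard
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)

store = ShardedStore(num_shards=4)
for i in range(1000):
    store.put(f"user:{i}", i)
assert store.get("user:42") == 42
sizes = [len(s) for s in store.shards]  # load spreads roughly evenly
```

Note the trade-off hinted at in the document: modulo-based routing is simple, but changing the shard count remaps most keys, which is why production systems often use consistent hashing instead.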
We4IT LCTY 2013 - infra-man - Domino Run Faster, We4IT Group
The document discusses optimizing performance for IBM Lotus Domino. It recommends using 64-bit hardware and operating systems to allow Domino to utilize more memory. Transaction logging and separating disks for data, transaction logs, and indexes are also advised. The document provides tips for configuring hardware, operating systems, and Domino server settings to improve performance.
This document discusses Greenplum Database on HDFS (GOH). It provides an introduction and overview of GOH's architecture, features, and performance. Key points include that GOH allows Greenplum to use HDFS for storage, provides pluggable storage support, and full transaction support for tables on HDFS. It also notes challenges around supporting many concurrent queries due to limitations of the current Java-based HDFS client, and possibilities for addressing this.
The MPC5121e is a multi-core processor from Freescale that provides a computing platform for embedded applications. It uses an e300 core running at up to 400MHz with integrated graphics, display control, and networking capabilities. The processor is well-suited for automotive, industrial, and consumer applications requiring an LCD interface.
The Freescale MPC5121e is a 32-bit multi-core processor for automotive and consumer applications. It features an e300 core, PowerVR graphics engine, display controller, audio accelerator, Ethernet MAC, USB, PCI, and 12 programmable serial controllers. The MPC5121e offers competitive cost, quality, reliability and up to 400MHz performance, making it well-suited for applications like telematics, infotainment, security cameras, and mobile PCs.
5. Why choose DME?
DME supports both centralized shared memory (CM) and distributed shared memory (DSM). For medium and large systems we prefer DSM structures, since a centralized memory quickly becomes a bottleneck. Memories are preferably distributed, giving good scalability and lower memory-access latency.
7. Why choose DME?
[Diagram: two processors with caches connect through interfaces to DME nodes (P1 M4 and P1 M2, each with local memory); the nodes attach via NIs to an on-chip network, which also reaches a further DME P1 M2 with local memory, a custom IP, and a DME P1 M0 driving a DDR controller and DDR memory through a DDR interface.]
The NI modules in DME are designed to support standard bus interfaces, e.g. AHB, APB, AXI and OCP, so DME can perform as a bridge connecting different IPs over a Network-on-Chip (NoC). We support:
1. common bus protocols, easy to integrate into an existing system;
2. configurable support for different data formats.
8. DME Product
DME comes in three flavours:
P1 M0 – DME Light – small footprint, low power, for memory access and similar standard tasks
P1 M2 – DME Flex – programmable, fully featured, for maximum flexibility and customization
P1 M4 – DME Flex Plus – programmable, fully featured, with maximum performance
9. DME Product
DME P1 M0 – DME Light
[Diagram: an IP interface and an interconnect interface feed a transaction scheduler, which connects through a memory-interface crossbar to four local memories.]
Feature list:
1. DSM (distributed shared memory)
2. Private/shared division
3. Synchronization
4. Privilege level setting (3/2013)
5. IP interface support (AHB, APB) (4/2013)
6. Interconnect interface support (AHB, APB) (4/2013)
10. DME Product
DME P1 M2
[Diagram: a CPU interface and an interconnect interface feed the Data Management Engine's transaction scheduler; two mini-processors and a bypass path connect through a memory-interface crossbar to two local memories.]
Feature list:
1. DSM (distributed shared memory)
2. Private/shared division
3. V2P
4. Synchronization
5. Privilege level setting (3/2013)
6. IP interface support (AHB, APB) (4/2013)
7. Interconnect interface support (AHB, APB) (4/2013)
8. DMA-1, DMA-2 (3/2013)
9. Message passing (4/2013)
10. Micro-programming
11. DME Product
DME P1 M4
[Diagram: a CPU interface and an interconnect interface feed the Data Management Engine's transaction scheduler; four mini-processors and a bypass path connect through a memory-interface crossbar to four local memories.]
Feature list:
1. DSM (distributed shared memory)
2. Private/shared division
3. V2P
4. Synchronization
5. Privilege level setting (3/2013)
6. IP interface support (AHB, APB) (4/2013)
7. Interconnect interface support (AHB, APB) (4/2013)
8. DMA-1, DMA-2 (3/2013)
9. Message passing (4/2013)
10. Micro-programming
12. DME Planned Features
AXI – Q2 2013
DMA-3 – Q2 2013
Striding access – Q2 2013
Data shuffling – Q2 2013
SystemC model, SIMICS model – Q3 2013
Transaction ordering support (memory consistency) – Q3 2013
Dynamic memory allocation – Q4 2013
OCP – Q4 2013
Directory-based cache coherence – on demand
14. Application Example: H.264 decoder
[Diagram: an H.264 decoder pipeline (entropy decoding, ITRANS/dequant, intra prediction) mapped onto 3-node and 12-node systems. With DME, a task distributor feeds processors P1–P12, each loading and storing against its own private and shared memory over distributed shared memory; without DME, all processors load and store through a centralized memory.]
15. Demonstrator Performance
Decoder throughput in frames per second vs. number of nodes, for QCIF (176x144) and CIF (352x288) video, with and without DME:
Nodes               3    6    9
QCIF with DME      25   51   77
QCIF without DME   25   30   31
CIF with DME       13   20   24
CIF without DME     6    7    7
With DME the frame rate scales close to linearly with node count; without DME it flattens out almost immediately.
16. Applications
The DME is useful for many-core SoCs in:
Video, signal and network processing
Cloud computing
Industrial automation
Set-top boxes
Scientific computing
Solid state disks
Other high-end embedded applications
18. Evaluating the DME
For evaluation of the DME, Elsip offers:
Introduction Booklet
DME Application Development Package, with API libraries
C++ Model
SIMICS Model
Compiled IP Model
User manual
Demonstrator
On-site and off-site support
19. Roadmap
Looking into the future, other IP we’re working on includes:
Packet- and circuit-switched NoCs (circuit-switched can be faster than packet-switched for telecom/datacom applications)
DRRA – Dynamically Reconfigurable Resource Array (reconfigurable at bus level, better silicon usage than FPGA)
20. Thank you!
Please go to
www.elsip.se
for more information
21. The founders
Axel Jantsch, CTO. Professor, KTH Electronic Systems since 2002. 20+ years of research, primarily within NoC and SoC. 200+ scientific papers published. Visiting professor at Fudan University in the PRC and the University of Cantabria in Spain.
Ahmed Hemani. Professor, KTH, focusing on high-level system integration, design automation, NoC, asynchronous circuits and configurable systems. Industrial experience from NSC, NXP/Philips, ABB, Ericsson, Newlogic, Synthesia and Spirea (co-founder).
Zhonghai Lu. Professor, KTH, expert in SoC and NoC. Reviewer for 14 international periodicals. Principal investigator on an Intel project on future many-core processor chip architectures.
22. Contact
Sales Director Bengt Edlund
Mail: bengt@elsip.se
Phone: +46 708 722 800
CEO Adam Edström
Mail: adam@elsip.se
Phone: +46 702 579 734
Address: c/o SICS, PO Box 1263, SE-16429 Kista, Sweden
23. Some ELSIP Milestones
• Founded by professors Axel Jantsch, Ahmed Hemani and Zhonghai Lu at the Royal Institute of Technology (KTH) in Stockholm in 2011
• Received initial funding from Vinnova
• Commercial launch when Adam Edström (CEO) and Bengt Edlund (Sales Director) joined the company in September 2012
• Established subsidiary Memcom in Shanghai, PRC, in March 2012, with Zhonghai Lu as CTO and Zhuo Zou as CEO. Received initial funding from the Wuxi government.
• Cooperation with the Fudan-Wuxi Institute, Shanghai, PRC
• Selected by SICS, the Swedish Institute of Computer Science, as a member of the SICS Startup Accelerator