This document discusses physical infrastructure designs that support logical network architectures in data centers. It examines the Top of Rack (ToR) and End of Row (EoR) access models. ToR places an access switch in each cabinet, so each server connects within its own rack and only switch uplinks leave it. EoR places chassis switches in the middle of the row, with each cabinet's servers cabled to them within cable-length limits. Designs must map logical networks onto physical cable routing and manage connectivity growth.
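The connectivity trade-off between the two models can be illustrated with a small back-of-the-envelope calculation; the rack counts, port densities, and function names below are invented for illustration and are not taken from the document.

```python
# Hypothetical sketch comparing inter-rack cable runs for the two access models.
# ToR: servers terminate on an in-rack switch, so only uplinks leave the cabinet.
# EoR: every server cable runs horizontally to chassis switches in the row.

def tor_uplinks(racks: int, uplinks_per_switch: int) -> int:
    """ToR: one access switch per rack; only its uplinks leave the cabinet."""
    return racks * uplinks_per_switch

def eor_horizontal_runs(racks: int, servers_per_rack: int) -> int:
    """EoR: each server cable runs to the chassis at the end/middle of the row."""
    return racks * servers_per_rack

# Made-up example row: 12 cabinets, 40 servers each, 4 uplinks per ToR switch.
row = {"racks": 12, "servers_per_rack": 40, "uplinks_per_switch": 4}

tor = tor_uplinks(row["racks"], row["uplinks_per_switch"])        # 48 runs
eor = eor_horizontal_runs(row["racks"], row["servers_per_rack"])  # 480 runs
print(f"ToR inter-rack runs: {tor}, EoR inter-rack runs: {eor}")
```

The sketch shows why ToR keeps horizontal cabling sparse while EoR concentrates many longer runs toward the row's chassis switches, which is where the cable-length limits mentioned above start to matter.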
Customer Name: Provident Financial
Industry: Financial services
Location: Bradford, United Kingdom
Number of Employees: 3700
Challenge
• Reduce time and effort needed for major data centre migration
• Smoothly allocate data to specific priority tiers within new data centre structure
• Quickly migrate data to new devices
Solution
• Cisco MDS Data Mobility Manager
Results
• Reduced migration effort by 75 per cent by automating data backup, restore, and qualification processes
• Cut downtime by up to 90 per cent by carrying out migrations while services are still running
This white paper discusses Sun Microsystems' new virtualized network express module and blade server solution. It addresses ongoing customer needs to reduce datacenter costs related to power, cooling, management complexity and staffing. The solution aims to improve efficiency and lower costs by streamlining management, reducing cabling, improving energy efficiency, and providing a single-pane-of-glass management view.
Terremark Virtualized DR Custom Storage Pre-Press – Frank Johnson
Virtualized disaster recovery provides reliable protection without the cost and complexity of maintaining secondary infrastructure. Terremark offers array-based replication of customer data and systems into its cloud environment, providing production-ready failover capabilities. Host-based replication is also available for smaller environments. Customers benefit from minimized costs while maximizing resources through Terremark's expertise and on-demand cloud infrastructure.
There is a need for comprehensive monitoring and control of mechanical and electrical infrastructure in data centers to ensure that the dynamic power and cooling demands of cloud computing are met. Proper planning of the mechanical and electrical design, focused on cloud operations, will define the best monitoring and control systems coupled with equipment that can support dynamic demands. What is at risk for both tenants and the data center company itself is the sensitive customer information housed in the facility, so proper preparedness and design are important to identify and correct potential issues before operation.
This document summarizes Intechnology's multi-tier data management services. It offers managed replication for mission-critical tier 1 data, managed backup for critical tier 2 data, and managed archiving for legacy tier 3 data. By automatically matching different types of data to the appropriate storage tier and service, Intechnology helps customers reduce costs, improve data access and retention, and scale storage capacity as needed. The multi-tier approach provides business benefits like lower expenses, improved performance and security, and reduced infrastructure management burdens and carbon footprint.
A Cost-Effective Integrated Solution for Backup and Disaster Recovery – xmeteorite
This document discusses a cost-effective integrated solution for backup and disaster recovery provided by InMage Systems. The solution combines application and data recovery into a single platform that can be used for disaster recovery, local backup and restore, and automated application failover and recovery. It leverages continuous data protection and replication technologies to minimize data loss and recovery times. The solution supports applications like Microsoft Exchange, SQL, SharePoint, Oracle, and SAP in a heterogeneous environment.
Our data services are based on the importance of the data (its category) and not just on its volume. This way we can build cost-effective data management solutions with you.
Savings of up to 35% typically achievable against in-house options.
Network Operations Managed Services (NOMS) – TMNG Global
Network Operations Managed Services (NOMS) provide the methods, tools and expertise to validate network inventory and topology as the basis for reducing capex/opex through managed a) grooming and decommissioning of legacy assets, b) optimisation of interconnect costs, and c) subscriber migration to advanced services.
This document discusses best practices for data migration and how IBM's Softek Transparent Data Migration Facility (TDMF) software can help. It outlines five key factors to consider for data migration: performance, source data protection, tiered storage, multivendor environments, and application downtime. The TDMF software allows for nondisruptive data migration that maintains application availability and balances data movement with system demands. It also provides capabilities like backout commands, fallback, and support for migrating across different storage media and vendor environments. Any change to storage infrastructure requires data migration, but traditional methods cause downtime - the TDMF software aims to minimize these issues.
The document discusses several computerized tree management systems that have been developed over the last two decades to help organizations effectively manage their resources. It provides details on a few specific systems currently available, including Arb Pro software designed for tree contractors, ezytreev software which offers online and desktop versions and additional modules, and Eye-TREE software which maintains tree inspection and work details for estates and organizations. The systems aim to help users monitor, control and manage all aspects of their tree-related businesses.
This document provides an introduction to NoSQL and data scalability techniques. It discusses key concepts like data scalability and horizontal scaling. The two main architectures for scalable data systems are data grids and NoSQL systems. Example NoSQL implementations discussed include MongoDB, which uses a document-oriented data model, and GigaSpaces XAP, a data grid platform. The document provides examples of using MongoDB with Python and discusses some advantages and disadvantages of MongoDB compared to SQL databases.
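As a hedged illustration of the horizontal-scaling idea behind document stores such as MongoDB, the sketch below routes documents to shards by hashing a shard key; the shard names and routing rule are invented for illustration and do not reflect MongoDB's actual chunk-balancing behavior.

```python
# Minimal sketch of horizontal scaling via shard-key hashing.
# SHARDS and the routing rule are illustrative assumptions, not MongoDB's API.
import hashlib

SHARDS = ["shard-a", "shard-b", "shard-c"]

def route(doc: dict, shard_key: str = "user_id") -> str:
    """Pick a shard deterministically from the document's shard-key value."""
    digest = hashlib.md5(str(doc[shard_key]).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# A document-oriented record: nested structure travels with the document,
# so a read needs no cross-shard join.
doc = {"user_id": 42, "name": "Ada", "orders": [{"sku": "X1", "qty": 2}]}
print(route(doc))  # the same shard key always lands on the same shard
```

Because routing depends only on the shard key, capacity grows by adding shards and redistributing keys, which is the horizontal-scaling property the document contrasts with scaling a single SQL server.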
It's affordable replication, backup and archiving of your data.
Exponential data growth, and finding affordable storage to keep pace with it, is one of the biggest challenges businesses face.
We keep it straightforward and cost-effective: Our model is called Multi-tier Data Management (MDM) and we charge according to the priority of the data, not the volume.
It's simple; it works and will cut your data management costs.
To view this webinar:
http://ecast.opensystemsmedia.com/320
Suppliers of C4I, C2, Cyber, ISR, and sensor and weapons platforms face commercial pressure from defense procurement for more capability at lower cost, and pressure from acquisition officials for greater interoperability across combat systems to enable new system capability through Information Dominance (ID).
RTI will present an architecture and its Connext solution, designed to meet these twin imperatives. Built upon proven open technology, Connext is a foundational system architecture that delivers significant productivity gains in integration, while also enabling discovery and rapid assimilation of existing system entities, potentially from 3rd party suppliers or already deployed in the field of operation.
Given the unique requirements of tactical system-of-systems, the architecture must support both real-time combat systems as well as brigade and command HQ enterprise style systems, bringing them together in a scalable, dynamic, and flexible framework. Connext addresses the performance and scale impedance mismatch between these disparate systems types, and delivers the ability to develop a common infrastructure that runs over DIL (Disconnected Intermittent Loss) communications as well as it does over Ethernet, putting minimal strain on the communications interfaces and maximizing information exchange.
The Connext foundation is in use in over 400 defense programs globally, with over 350,000 licensed deployments. It has been approved by the US DoD at Technology Readiness Level 9 (TRL 9).
This document discusses support for mobility in mobile communications. It covers file systems, databases, the World Wide Web, and the Wireless Application Protocol (WAP). For file systems, it describes challenges like limited resources, bandwidth issues, and inconsistency problems in mobile environments. It also summarizes several experimental file systems that use techniques like caching, pre-fetching, and weak consistency models. For databases, it notes issues like location-dependent queries and transaction processing challenges. For the WWW, it outlines problems of HTTP and HTML on mobile devices and approaches to address them. It provides an overview of the WAP standard and its goals of delivering Internet content and services to mobile devices.
The client, a major player in power generation and distribution for nine decades, faced challenges with inefficient bill generation and payroll processes. Newgen provided a variable data publishing solution to generate 25,000 bills and 3,000 pay slips from SAP. The solution automated output distribution through multiple channels such as print and email. This reduced costs, improved productivity, and maintained design integrity for effective customer communication.
This document provides a summary of new storage tiering technologies that can improve performance, reduce costs, and manage risk. Key points discussed include using automated movement of data between storage tiers based on activity level and age, reducing costs by placing data on the most cost-effective tier, and managing risk through technologies like data replication and encryption. The document highlights IBM's Easy Tier software that automatically moves data within an array to optimize use of faster solid-state drives.
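The automated tier placement described above can be sketched as a simple policy on access activity and age; the thresholds, tier names, and function below are invented assumptions, and real products such as IBM Easy Tier use internal heat maps rather than a rule this simple.

```python
# Hypothetical age/activity-based tier placement policy.
# Thresholds and tier labels are illustrative assumptions only.
from datetime import datetime, timedelta

def place(last_access: datetime, accesses_per_day: float,
          now: datetime) -> str:
    """Assign data to a storage tier from its recency and activity level."""
    age = now - last_access
    if accesses_per_day >= 100 and age < timedelta(days=1):
        return "tier-1 (SSD)"       # hot: frequently and recently accessed
    if age < timedelta(days=30):
        return "tier-2 (SAS)"       # warm: touched within the last month
    return "tier-3 (archive)"       # cold: untouched for 30+ days

now = datetime(2024, 1, 31)
print(place(datetime(2024, 1, 31), 500.0, now))  # hot data -> tier-1 (SSD)
print(place(datetime(2023, 6, 1), 0.1, now))     # cold data -> tier-3 (archive)
```

Running such a policy periodically and migrating data whose assigned tier has changed is the "automated movement between tiers" the summary refers to: cost falls because cold data leaves expensive media, while hot data keeps the fast solid-state tier.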
This document provides an overview of objectives for a chapter on databases. It defines key database terms like data, database, and information. It describes the hierarchy of data from characters to fields to records to files. It explains the differences between file processing and database approaches. It also discusses relational, object-oriented, and multidimensional databases and how database management systems provide tools for querying, entering, and reporting data.
WP107 How Data Center Management Software Improves Planning and Cuts OPEX – SE_NAM_Training
Modern data center infrastructure management software tools can help simplify operations, cut costs, and speed up information delivery in three key ways:
1. Planning tools simulate the impact of infrastructure changes to help with capacity planning and ensure redundancy.
2. Operations tools provide rapid impact analysis when issues arise and can proactively prevent downtime.
3. Analytics tools leverage historical data to identify strengths and weaknesses to improve future performance.
Data Center Energy Efficiency Best Practices – Insights Into The ROI On Best Practices
Electricity expense has become an increasingly important factor of the total cost of ownership (TCO) for data centers. Energy consumption of typical data centers can be substantially reduced through design of the physical infrastructure and IT architecture.
To view the recorded webinar presentation, please visit http://www.42u.com/energy-efficiency-webinar.htm
Mo ict 2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0... – khalid noman husainy
The document is a proposal from PTS Data Center Solutions, Inc. to the Ministry of Information & Communication of Bangladesh for designing a new 5,000 sq ft data center. The proposal includes:
1) An overview of the project background and objectives
2) A description of PTS' proposed approach and scope of services which includes conceptual design, modeling, cost estimating and reporting
3) Clarification that additional services like detailed design or construction management are not included in the proposed fees
The Atlantic ACO Data Center collects data from various sources including claims, doctors' offices, patients, pharmacies, and other providers. It has departments for data collection and analytics, education/presentations, and process improvement. The small staff includes administrators, programmers, presentation/usability experts, and management. The office is centrally located and equipped with computers, training areas, and a classroom. Data is collected on 33 quality indicators and displayed simply on screens in exam rooms. Office managers review the data monthly and it is updated from various sources. The process improvement department analyzes the data to provide meaningful feedback. Presentations are crafted to be effective, efficient, simple and physician-satisfying.
This document proposes a "Consolidate, Virtualize and Energize" project to reduce energy consumption and costs across the 32 colleges and universities in the Minnesota State Colleges and Universities system through server virtualization, reducing underutilized computer terminals, incorporating alternative energy sources, expanding online classes, and educating staff on green technologies. The project aims to reduce energy usage by 5% in the first phase and 10% upon completion in 2016. Key steps include consolidating data centers, implementing virtualization, purchasing energy efficient equipment, and raising awareness of energy consumption among students, faculty and staff.
Enclosure Strategies for Efficiency – Data Center Efficiency Best-Practice Starts with Your Racks
Historically, Data Center managers didn't invest much thought in their deployment of server racks beyond basic functionality, air flow, and the initial cost of the rack itself. Today, the widespread deployment of high-density configurations is causing major hot spot concerns and capacity issues. These factors, along with the high cost of power, require a sound understanding of how your server rack deployment plan relates to your overall efficiency strategy.
To view the recorded webinar presentation, please visit http://www.42u.com/enclosure-strategies-webinar.htm
Executive Presentation on adhering to Healthcare Industry compliance – Thomas Bronack
Thomas Bronack of Data Center Assistance Group proposes assisting healthcare providers in adhering to regulatory requirements regarding workplace security, violence prevention, and workflow management. The proposal outlines new compliance regulations around patient privacy, security, and freedoms as well as penalties for non-compliance. Bronack would perform risk assessments, implement physical and data security controls, and provide training and awareness to help organizations achieve Joint Commission accreditation and compliance.
This document provides an overview of Cisco's data center solutions and the evolution of data centers. It discusses how conventional data center models lead to challenges around siloed infrastructure and inefficient provisioning. Cisco's data center 3.0 approach aims to address these issues through virtualization, unified computing, and automation to improve utilization, flexibility, and management while reducing costs. Specific solutions discussed include the Nexus 1000V virtual switch, unified fabric using FCoE, and the unified computing system.
The document provides troubleshooting tips and techniques for Cisco data center switches, including the Cisco Nexus 7000, the Catalyst 6500 VSS, and high CPU utilization issues. It discusses using commands like show processes cpu sorted, debug netdr capture, and show ip cef to troubleshoot traffic flow and switching paths. It also covers troubleshooting software upgrades on the Nexus 7000 and gathering core dumps and logs to debug process crashes.
This document provides an overview of services from the Small Business and Technology Development Center (SBTDC) and Procurement Technical Assistance Center (PTAC), which assist small businesses with government contracting. It discusses writing proposals for government contracts, including analyzing solicitation documents, researching regulations and past awards, and preparing competitive proposals that address evaluation criteria. The presentation covers submitting proposals on time and in the required format, and the evaluation process for determining if proposals are responsive, responsible, and technically acceptable.
This document proposes establishing a rural health center in Hubei Province, China to provide basic healthcare, educational workshops, and emergency services. It outlines goals of annual exams, vaccinations, pharmaceuticals, and workshops on topics like HIV/AIDS and family planning. The target population includes at-risk groups like the homeless, migrants, children, elderly, and uninsured. It presents a timeline, budget, organizational chart, and plans for evaluation and monitoring the health center's impact on public health over time.
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G... (Juniper Networks)
This document provides an overview and design guide for implementing a MetaFabric architecture virtualized data center using Juniper Networks technologies. It describes the key components of the solution including compute, network, storage and applications. The design uses Juniper QFX switches and EX switches for data center switching and routing, SRX firewalls for security, and IBM Flex System servers and Juniper Network Director/Security Director for management. The guide includes configuration details for validating a proof of concept MetaFabric deployment.
eBook: Guide to Data Center Cabling Infrastructure
In This Free 36-page eBook:
*10 Gb/s Data Center Solutions
*Best Practices for Data Center Infrastructure Design
*Comparing Copper and Fiber Options in the Data Center
*The Hidden Costs of 10 Gb/s UTP Systems
*Light it Up: Fiber Transmissions and Applications
*Cabling Infrastructure and Green Building Initiatives
About the Author:
Carrie Higbie has been involved in computing and networking for more than 25 years in executive and consultant roles. She is Siemon's Global Network Applications Manager, supporting end users and active electronics manufacturers. She publishes columns and speaks at industry events globally. Carrie is an expert on TechTarget's SearchNetworking, SearchVoIP, and SearchDataCenters, authors columns for these and the SearchCIO and SearchMobile forums, and is on the board of advisors. She is on the board of directors and a former president of the BladeSystems Alliance. She participates in IEEE, the Ethernet Alliance, and IDC Enterprise Expert Panels. She has one telecommunications patent and one pending.
A data center network is a system in which multiple servers are connected to one another to share information and resources. Multiple remote offices and users connect to the data center network and its servers for resource and information sharing.
Each remote office connects to the data center servers via VPN. Multiple ISPs serve each branch to provide failover, with OSPF used as the routing protocol.
Addressing the Challenges of Tactical Information Management in Net-Centric S... (Angelo Corsaro)
This paper provides an overview of the advantages provided by the OMG Data Distribution Service for Real-Time Systems (DDS) for addressing the challenges associated with tactical information distribution.
This volume of the Open Datacenter Interoperable Network (ODIN) describes software defined networking (SDN) and OpenFlow. SDN is used to simplify network control and management, automate network virtualization services, and provide a platform from which to build agile ....
Oracle Database 10g provides functionality to enable grid computing by virtualizing resources, automatically provisioning resources based on policies, and pooling resources. It allows databases to leverage hardware innovations like blades and Infiniband networks. It also simplifies installation, provides high availability features like RAC, and allows compute resources to be dynamically provisioned to meet changing business needs through features like Resonance and the Scheduler.
This document is a term paper on Software Defined Networking (SDN). It discusses how SDN proposes separating the control plane from the data plane in network architecture, making networks programmable. The key points made are:
1) SDN introduces three planes - data, control, and management. The control plane centralizes network intelligence through a controller.
2) Benefits of SDN include simpler network management through centralized control and programming. It also enables network virtualization.
3) The document outlines the layers in the SDN architecture, including the data plane (forwarding devices), southbound interface, network operating system controller, and northbound interface for programming.
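The control/data plane split described in the summary above can be sketched as a toy model (illustrative only: the class names, ports, and addresses are invented, and a real controller would program switches over a southbound protocol such as OpenFlow):

```python
# Toy model of SDN control/data plane separation (illustrative only).

class Switch:
    """Data plane: forwards purely by flow-table lookup, no local smarts."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                  # match (dst) -> action (port)

    def forward(self, dst):
        # Unknown traffic is punted to the controller in real deployments.
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Control plane: holds the global view and programs every switch."""
    def __init__(self, switches):
        self.switches = {sw.name: sw for sw in switches}

    def install_flow(self, switch_name, dst, out_port):
        # Southbound interface: push a forwarding rule into one device.
        self.switches[switch_name].flow_table[dst] = out_port

s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])

# Northbound "application" requests a path for 10.0.0.2 across both switches.
ctrl.install_flow("s1", "10.0.0.2", "port3")
ctrl.install_flow("s2", "10.0.0.2", "port1")

print(s1.forward("10.0.0.2"))     # port3
print(s1.forward("10.0.0.9"))     # send-to-controller
```

The point of the sketch is that the switches contain no routing logic at all; every forwarding decision is a table entry installed from one central place.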
Deerns Data Center Chameleon 20110913 V1.1 (euroamerican)
The Chameleon Data Center was designed to dynamically adapt to meet changing business needs in terms of IT space, cooling, power and reliability tiers, while maintaining energy efficiency. It utilizes a unique combination of centralized and decentralized systems to deliver flexible IT power and cooling across a range of power densities, reliability tiers and capacities from the same infrastructure. This reduces upfront investment costs and allows customers to decide how to configure the data center space until equipment installation. The design achieves flexibility without additional costs through a modular infrastructure that can be easily adapted.
This document discusses guidelines for specifying data center criticality or tier levels. It analyzes three common classification methods - The Uptime Institute's Tiers, TIA-942, and Syska Hennessy's Criticality Levels. All three methods define four levels of criticality/tier but lack detail. TIA-942 provides the most specificity. The document also provides guidance on choosing a criticality level based on a data center's costs and a business' downtime costs. It associates typical criticality levels with different business applications.
This document discusses requirements and technology issues for data centers of the future. It outlines a vision for modular, proximity-based data centers with mixed compute environments, redundancy, workload migration capabilities, and automation/orchestration. Current issues include a lack of standardization in orchestration/integration, limitations on linear scaling, and "flatness" challenges from multi-tiered network designs. The data center of the future aims to address these through software-defined networking, computing, and storage orchestrated in a secure, flat design. Companies that implement these technologies gain competitive advantages around utilization and rapid expansion.
Softwarization has been transforming industries like data center and communications businesses. The established hardware-based architectures are being replaced by fundamentally new approaches - software-based systems which are essentially more flexible, dynamic and powerful. In this paper we analyse the evolution in data centers and communications networks towards virtualized platforms and study how a similar type of evolution could impact and benefit power distribution. Following the softwarization process in other industry sectors, we consider that next a Software Defined Grid (SDG) will emerge.
Router virtualization can provide benefits like reduced costs and environmental impact. There are two main approaches: hardware-isolated virtual routers (HVRs) with dedicated resources, and software-isolated virtual routers (SVRs) that share resources. HVRs are better suited for high-scale environments like POPs due to their ability to independently scale resources, while SVRs are preferable for low-scale environments like data centers. Cisco IOS XR Software supports HVRs through Secure Domain Routers that provide full isolation of routing instances.
Datacenter and cloud architectures continue to evolve to address the needs of large-scale multi-tenant data centers and clouds. These needs are centered around dimensions such as scalability in computing, storage, and bandwidth, scalability in network services, efficiency in resource utilization, agility in service creation, cost efficiency, service reliability, and security. Data centers are interconnected across the wide area network via routing and transport technologies to provide a pool of resources, known as the cloud. High-speed optical interfaces and dense wavelength-division multiplexing optical transport are used to provide for high-capacity transport intra- and inter-datacenter. This presentation will provide some brief descriptions on the working principles of Cloud & Data Center Networks.
The document discusses data center infrastructure and operations. It explains that data centers must transform from traditional environments to ones that are efficient, automated, and service-oriented to reduce costs and complexity while enabling growth. A typical data center securely houses an organization's IT systems and provides power, cooling, and redundancy to ensure maximum availability. It also discusses business benefits of data centers like availability, continuity, lower total cost of ownership, and agility. The document provides considerations for data center design like power usage efficiency and virtualization strategy. It includes a glossary of terms.
The document discusses data center infrastructure and operations. It explains that data centers must transform from traditional environments to ones that are efficient, automated, and service-oriented to reduce costs and complexity while enabling growth. A typical data center securely houses an organization's IT systems and provides power, cooling, and redundancy to ensure maximum availability and resilience. It also discusses considerations for data center design like power usage efficiency and virtualization strategy.
Software defined networking (SDN) separates the network control plane from the forwarding plane, allowing a single, centralized control plane to control multiple forwarding devices. SDN gives network administrators the ability to abstract the underlying network infrastructure and program how network traffic is handled. This allows SDN to simplify network management and make the network more flexible, programmable, and adaptable to changing needs. However, implementing SDN also presents challenges related to changing traditional network architectures, security, and specialized technical knowledge requirements.
This document provides an overview of key concepts for designing network topologies, including:
- Hierarchical network design with access, distribution, and core layers is recommended to divide the problem and improve performance, availability, and scalability. Spanning tree protocol and VLANs are also discussed.
- Network design documents should include requirements, logical and physical designs, implementation plans, budgets, and testing results. Response to an RFP must follow the specified format.
- Common topologies like hierarchical, collapsed core, and flat are compared. Hierarchical models are generally best to reduce workload, constrain broadcasts, and facilitate scaling and changes.
This document discusses cache and consistency in NoSQL databases. It introduces distributed caching using Memcached to improve performance and reduce load on database servers. It discusses using consistent hashing to partition and replicate data across servers while maintaining consistency. Paxos is presented as an efficient algorithm for maintaining consistency during updates in a distributed system in a more flexible way than traditional 2PC and 3PC approaches.
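The consistent hashing mentioned in the summary above can be sketched in a few lines (an illustrative toy, not Memcached's actual client logic; the server names are invented). Each server gets many virtual points on a ring, and a key maps to the first point clockwise, so adding or removing a server only remaps its neighboring arc of keys:

```python
# Minimal consistent-hashing ring (illustrative sketch).

import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, replicas=100):
        self.replicas = replicas          # virtual nodes per server
        self._ring = []                   # sorted list of (point, server)
        for server in servers:
            self.add(server)

    def add(self, server):
        for i in range(self.replicas):
            bisect.insort(self._ring, (_hash(f"{server}#{i}"), server))

    def lookup(self, key):
        # First ring point at or after the key's hash, wrapping at the end.
        index = bisect.bisect(self._ring, (_hash(key), ""))
        if index == len(self._ring):
            index = 0
        return self._ring[index][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.lookup("user:42") in {"cache-a", "cache-b", "cache-c"})   # True
```

Replication and consensus (the Paxos part of the summary) are deliberately out of scope here; this only shows the partitioning step.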
Research Paper Find a peer reviewed article in the following d.docx (eleanorg1)
Research Paper:
Find a peer-reviewed article in the following databases provided by the UC Library and write a 500-word paper reviewing the literature concerning Data Center Technology. Choose one of the technologies discussed in Chapter 5, Section 5.2 (Erl, 2014).
Abstract <>
Introduction <>
1- Virtualization -- provide some flow chart also.
(Note: you can take any one from 1 to 7)
2- Standardization and Modularity
3- Automation
4- Remote Operation and Management
5- High Availability
6- Security-Aware Design, Operation, and Management
7- Facilities
Etc…
====== This is a must
Use the following databases for your research:
· ACM Digital Library
· IEEE/IET Electronic Library
· SAGE Premier
=======
Conclusion<>
You may choose any scholarly peer reviewed articles and papers.
FYI -- PDF BOOK
Section 5.2
5.2. DATA CENTER TECHNOLOGY
Chapter 5. Cloud-Enabling Technology - Cloud Computing: Concepts, Technology & Architecture
https://www.safaribooksonline.com/library/view/cloud-computing-concepts/9780133387568/ch05.html [11/15/2017 5:49:24 PM]
Grouping IT resources in close proximity with one another, rather than having them geographically dispersed, allows for power sharing, higher efficiency in shared IT resource usage, and improved accessibility for IT personnel. These are the advantages that naturally popularized the data center concept. Modern data centers exist as specialized IT infrastructure used to house centralized IT resources, such as servers, databases, networking and telecommunication devices, and software systems.
Data centers are typically comprised of the following technologies and components:
Virtualization
Data centers consist of both physical and virtualized IT resources. The physical IT resource layer refers to the facility
infrastructure that houses computing/networking systems and equipment, together with hardware systems and their
operating systems (Figure 5.7). The resource abstraction and control of the virtualization layer is comprised of operational
and management tools that are often based on virtualization platforms that abstract the physical computing and
networking IT resources as virtualized components that are easier to allocate, operate, release, monitor, and control.
Figure 5.7. The common components of a data center working together to provide virtualized IT resources supported by physical IT resources.
Virtualization components are discussed separately in the upcoming Virtualization Technology section.
Standardization and Modularity
Data centers are built upon standardized commodity hardware and designed with modular architectures, aggregating
multiple identical building blocks of facility infrastructure and equipment to support scalability, gro.
Software defined data centers (SDDC) enable organizations to radically shift how they consume IT services by offering unprecedented flexibility, efficiency, and automation. SDDCs bestow capabilities to propel organizations towards business growth. They are more agile, secure, and flexible than conventional hardware-centric data centers. SDDCs virtualize infrastructure components like storage, networking, and computing and implement them through software-defined policies rather than hardware dependencies. This allows for more flexible consumption and management of data center services.
1. Data Centers
Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures
into PANDUIT Physical Layer Infrastructure Solutions
2. Introduction
The growing speed and footprint of data centers is challenging IT Managers to effectively budget and develop reliable,
high-performance, secure, and scalable network infrastructure environments. This growth is having a direct impact on the
amount of power and cooling required to support overall data center demands. Delivering reliable power and directly cooling
the sources that are consuming the majority of the power can be extremely difficult if the data center is not planned correctly.
This design guide examines how physical infrastructure designs can support a variety of network layer topologies in
order to achieve a truly integrated physical layer infrastructure. By understanding the network architecture governing the
arrangement of switches and servers throughout the data center, network stakeholders can map out a secure and scalable
infrastructure design to support current applications and meet anticipated bandwidth requirements and transmission speeds.
The core of this guide presents a virtual walk through the data center network architecture, outlining the relationships of key
physical layer elements including switches, servers, power, thermal management, racks/cabinets, cabling media, and cable
management. Two different access models for deploying Cisco hardware are addressed: Top of Rack (ToR) and
End of Row (EoR).
Developing the Integrated Infrastructure Solution
Data center planning requires the close collaboration of business, IT, and facilities management teams to develop
an integrated solution. Understanding some general planning relationships helps you translate business requirements into
practical data center networking solutions.
Top of Mind Data Center Issues
The following issues are critical to the process of building and maintaining cost-effective network infrastructure
solutions for data centers:
• Uptime
Uptime is the key metric by which network reliability is measured, and can be defined as the amount of
time the network is available without interruption to application environments. The most common
service interruptions to the physical layer result from operational changes.
• Scalability
When designing a data center, network designers
must balance today’s known scalability requirements
with tomorrow’s anticipated user demands. Traffic
loads and bandwidth/distance requirements will
continue to vary throughout the data center,
which translates to a need to maximize your
network investment.
• Security
A key purpose of the data center is to house
mission critical applications in a reliable, secure
environment. Environmental security comes in many
forms, from blocking unauthorized access to
monitoring system connectivity at the physical layer.
Overall, the more secure your network is, the more
reliable it is.
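Since uptime is the key metric, it helps to see what an availability percentage means in practice. A back-of-the-envelope sketch (the guide itself gives no formula, and the percentages below are common SLA figures, not values from this document):

```python
# Availability percentage -> allowed downtime per year (hedged sketch;
# the example percentages are typical SLA figures, not from this guide).

MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes_per_year(pct):.1f} min/yr")
```

For example, 99.99% availability works out to roughly 52.6 minutes of downtime per year, which makes concrete why the physical layer changes mentioned above matter so much.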
3. Business Requirements Drive Data Center Design
A sound data center planning process is inclusive of the needs of various business units. Indeed, the process requires the
close collaboration of business, IT, and facilities management teams to develop a broad yet integrated data center solution
set. Understanding some general planning relationships helps you translate business requirements into practical data center
networking solutions.
Business requirements ultimately drive all data center planning decisions (see Figure 1). On
a practical level, these requirements directly impact the type of applications deployed and
Service Level Agreements (SLAs) adopted by the organization. Once critical uptime requirements
are in place and resources (servers, storage, compute, and switches) have been specified to
support mission-critical applications, the required bandwidth, power, and cooling loads can
be estimated.
Some data center managers try to limit the number of standard compute resources to fewer
hardware platforms and operating systems, which makes planning decisions related to cabinet, row,
and room more predictable over time. Other managers base their design decisions solely on
the business application, which presents more of a challenge in terms of planning for future
growth. The data center network architecture discussed in this guide uses a modular, End of Row
(EoR) model and includes all compute resources and their network, power, and storage needs. The
resulting requirements translate to multiple LAN, SAN, and power connections at the physical layer infrastructure. These
requirements can be scaled up from cabinet to row, from row to zone, and from zones to room.
Figure 1. Business Requirements (diagram: business stakeholders define business requirements and SLAs, which drive applications/software, active equipment (servers, storage), network requirements (switching), power requirements, cooling requirements, and structured cabling; together with the facilities and IT stakeholders these form the data center solution)
Designing Scalability into the Physical Layer
When deploying large volumes of servers inside the data center it is extremely important that the design footprint is scalable. However, access models vary between each network, and can often be extremely complex to design. The integrated network topologies discussed in this guide take a modular, platform-based approach in order to scale up or down as required within a cabinet or room. It is assumed that all compute resources incorporate resilient network, power, and storage resources. This assumption translates to multiple LAN, SAN, and power connections within the physical layer infrastructure.

One way to simplify the design and simultaneously incorporate a scalable layout is to divide the raised floor space into modular, easily duplicated sub-areas. Figure 2 illustrates the modular building blocks used in order to design scalability into the network architecture at both OSI Layers 1 and 2. The logical architecture is divided into three discrete layers, and the physical infrastructure is designed and divided into manageable sub-areas called “Pods.” This example shows a typical data center with two zones and 20 Pods distributed throughout the room; core and aggregation layer switches are located in each zone for redundancy, and access layer switches are located in each Pod to support the compute resources within the Pod.
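The building-block arithmetic above can be sketched in a few lines. This is an illustrative model only: the two-zone, 20-Pod example comes from the text, but the assumption of two redundant switches per zone and per Pod is ours, not a figure from the guide.

```python
def room_switch_counts(zones: int, pods: int,
                       agg_per_zone: int = 2,
                       access_per_pod: int = 2) -> dict:
    """Tally redundant switching for a room divided into Pod sub-areas:
    core/aggregation switches live in each zone, access switches in each Pod."""
    return {
        "aggregation_switches": zones * agg_per_zone,
        "access_switches": pods * access_per_pod,
    }

# The example room: two zones, 20 Pods
print(room_switch_counts(zones=2, pods=20))
# {'aggregation_switches': 4, 'access_switches': 40}
```

Because each Pod is a duplicated sub-area, growing the room is a matter of adding Pods, and the switch tally scales linearly with them.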
Figure 2. Mapping the Logical Architecture to the Cabling Infrastructure (data center room layout showing hot and cold aisles, with network, server, and storage racks grouped into Pods, Pods grouped into Modules 1 through N, and Pods organized within zones)
Network Access Layer Environments
This guide describes two models for access layer switching environments – Top of Rack (ToR) and End of Row (EoR) – and
reviews the design techniques needed for the successful deployment of these configurations within an integrated physical
layer solution. When determining whether to deploy a ToR or EoR model it is important to understand the benefits and
challenges associated with each:
• A ToR design reduces cabling congestion, which enhances flexibility of network deployment and installation. Trade-offs include reduced manageability and network scalability for high-density deployments, due to the need to manage more access switches than in an EoR configuration.
• An EoR model (also sometimes known as Middle of Row [MoR]) leverages chassis-based technology for one or more rows of servers to enable higher densities and greater scalability throughout the data center. Large modular chassis such as the Cisco Nexus 7000 Series and Cisco Catalyst 6500 Series allow for greater densities and performance with higher reliability and redundancy. Figure 2 represents an EoR deployment with multiple Pods distributed throughout the room.
Note: Integrated switching configurations, in which applications reside on blade servers that have integrated switches built into each chassis, are not covered in
this guide. These designs are used only in conjunction with blade server technologies and would be deployed in a similar fashion as EoR configurations.
Top of Rack (ToR) Model
The design characteristic of a ToR model is the inclusion of an access layer switch in each server cabinet, so the physical
layer solution must be designed to support the switching hardware and access-layer connections. One cabling
benefit of deploying access layer switches in each server cabinet is the ability to link to the aggregation layer using
long-reach small form factor fiber connectivity. The use of fiber eliminates any reach or pathway challenges presented
by copper connectivity to allow greater flexibility in selecting the physical location of network equipment.
Figure 3 shows a typical logical ToR network topology, illustrating the various redundant links and distribution of connectivity between access and aggregation switches. This example utilizes the Cisco Nexus 7010 for the aggregation layer and the Cisco Catalyst 4948 for the access layer. The Cisco Catalyst 4948 provides 10GbE links routed out of the cabinet back to the aggregation layer and 1GbE links for server access connections within the cabinet.

Once the logical topology has been defined, the next step is to map a physical layer solution directly to that topology. With a ToR model it is important to understand the number of network connections needed for each server resource. The basic rule governing the number of ToR connections is that any server deployment requiring more than 48 links requires an additional access layer switch in each cabinet to support the higher link volume. For example, if thirty (30) 1 RU servers that each require three copper and two fiber connections are deployed within a 45 RU cabinet, an additional access layer switch is needed for each cabinet. Figure 4 shows the typical rear view ToR design, including cabinet connectivity requirements at aggregation and access layers.

Figure 3. Logical ToR Network Topology (Cisco Nexus 7010 aggregation switches with redundant links to Cisco Catalyst 4948 access switches)
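The 48-link rule above reduces to simple ceiling division. This sketch treats every link as landing on a generic 48-port access switch, a simplification: in practice the copper and fiber connections may terminate on different hardware.

```python
import math

def tor_access_switches(servers: int, links_per_server: int,
                        ports_per_switch: int = 48) -> int:
    """Access layer switches needed in one ToR cabinet: one switch covers
    up to 48 links; each additional block of 48 links adds another switch."""
    return math.ceil(servers * links_per_server / ports_per_switch)

# The worked example: 30 1-RU servers x (3 copper + 2 fiber) = 150 links
print(tor_access_switches(30, 5))  # 4 switches per cabinet
```

The result confirms the text's point: once a cabinet's link count passes 48, a single top-of-rack switch no longer suffices.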
Figure 4. Rear View of ToR Configuration (top-of-rack patch panels in each server cabinet, with A and B network aggregation points cross-connected between neighboring cabinets)
Density considerations are tied to the level of redundancy deployed to support mission-critical hardware throughout the data center. It is critical to choose a deployment strategy that accommodates every connection and facilitates good cable management.

High-density ToR deployments like the one shown in Figure 5 require more than 48 connections per cabinet. Two access switches are deployed in each NET-ACCESS™ Cabinet to support complete redundancy throughout the network. All access connections are routed within the cabinet, and all aggregation links are routed up and out of the cabinet through the FIBERRUNNER® Cable Routing System back to the horizontal distribution area.

Lower-density ToR deployments require fewer than 48 connections per cabinet (see Figure 6). This design shares network connections between neighboring cabinets to provide complete redundancy to each compute resource.

Figure 5. Dual Switch – Server Cabinet Rear View (SAN, LAN, and power connections routed within the cabinet)
Figure 6. Single Switch – Server Cabinet Rear View
The cross-over of network connections between cabinets presents a cabling challenge that needs to be addressed in the physical design to avoid problems when performing any type of operational change after initial installation. To properly route these connections between cabinets there must be dedicated pathways defined between each cabinet to accommodate the cross-over of connections. The most common approach is to use PANDUIT overhead cable routing systems that attach directly to the top of the NET-ACCESS™ Cabinet to provide dedicated pathways for all connectivity routing between cabinets, as shown in Figure 6.

Table 1 itemizes the hardware needed to support a typical ToR deployment in a data center with 16,800 square feet of raised floor space. Typically in a ToR layout, the Cisco Catalyst 4948 is located towards the top of the cabinet. This allows heavier equipment such as servers to occupy the lower portion of the cabinet, closer to the cooling source, for proper thermal management of each compute resource. In this layout, 1 RU servers are specified at 24 servers per cabinet, with two LAN and two SAN connections per server to leverage 100% of the LAN switch ports allocated to each cabinet.

Connectivity is routed overhead between cabinets to minimize congestion and allow for greater redundancy within the LAN and SAN environment. All fiber links from the LAN and SAN equipment are routed via overhead pathways back to the Cisco Nexus, Catalyst, and MDS series switches at the aggregation layer. The PANDUIT® FIBERRUNNER® Cable Routing System supports overhead fiber cabling, and the PANDUIT® NET-ACCESS™ Overhead Cable Routing System can be leveraged with horizontal cable managers to support copper routing and patching between cabinets.
Data Center Assumptions                    Quantity
Raised Floor Square Footage                16,800
Servers                                    9,792
Cisco Nexus 7010                           10
Cisco Catalyst 4948                        408
Server Cabinets                            408
Network Cabinets (LAN & SAN Equipment)     33
Midrange & SAN Equipment Cabinets          124

Server Specifications:
2 x 2.66 GHz Intel Quad Core Xeon X5355
2 x 670 W Hot-Swap power supplies
4 GB RAM (2 x 2048 MB DIMMs)
2 x 73 GB 15K-rpm Hot-Swap SAS 3.5"

                                       Watts Per Device  Per Cabinet  Cabs Per Pod  Pods Per Room  Quantity  Total Watts
Servers                                      350              24          102             4          9,792    3,427,200
Cisco Catalyst 4948 Switches (Access)        350               1          102             4            408      142,800
Cisco MDS (Access)                           100               1          102             4            408       40,800
Cisco Appliance Allocation                 5,400               —            —             —              8       43,200
Cisco Nexus 7010 (Aggregation)             5,400               —            2             4              8       43,200
Cisco MDS (Aggregation)                    5,400               —            2             4              8       43,200
Cisco Nexus 7010 (Core)                    5,400               —            —             —              2       10,800
Cisco MDS (Core)                           5,400               —            —             —              2       10,800
Midrange/SAN Equipment Cabinets            4,050               —            —             —            124      502,200
Midrange/SAN Switching MDS                 5,400               —            —             —              4       21,600
Midrange/SAN Switching Nexus 7010          5,400               —            —             —              4       21,600
                                                                                     Total Watts:            4,307,400
                                                                                     Total Kilowatts:         4,307.40
                                                                                     Total Megawatts:             4.31

Table 1. Room Requirements for Typical ToR Deployment
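The totals row in Table 1 is a straight sum of watts-per-device times quantity. A minimal sketch reproducing it (the tuples below copy the Table 1 values):

```python
# (watts per device, quantity) for each Table 1 line item
table_1 = [
    (350, 9792),   # Servers
    (350, 408),    # Catalyst 4948 (Access)
    (100, 408),    # MDS (Access)
    (5400, 8),     # Appliance allocation
    (5400, 8),     # Nexus 7010 (Aggregation)
    (5400, 8),     # MDS (Aggregation)
    (5400, 2),     # Nexus 7010 (Core)
    (5400, 2),     # MDS (Core)
    (4050, 124),   # Midrange/SAN equipment cabinets
    (5400, 4),     # Midrange/SAN switching MDS
    (5400, 4),     # Midrange/SAN switching Nexus 7010
]

total_w = sum(watts * qty for watts, qty in table_1)
print(f"{total_w:,} W = {total_w / 1_000:,.2f} kW = {total_w / 1_000_000:.2f} MW")
# 4,307,400 W = 4,307.40 kW = 4.31 MW
```

This roll-up is the input to the cooling-load planning discussed later in the guide: knowing per-cabinet and per-Pod watts gives the room-level figure directly.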
End of Row (EoR) Model
In an EoR model, server cabinets contain patch fields but not access switches. In this model, the total number of servers per
cabinet and I/Os per server determines the number of switches used in each Pod, which then drives the physical layer
design decisions. The typical EoR Pod contains two Cisco Nexus or Cisco Catalyst switches for redundancy. The length of
each row within the Pod is determined by the density of the network switching equipment as well as the distance from the
server to the switch. For example, if each server cabinet in the row utilizes 48 connections and the switch has a capacity
for 336 connections, the row would have the capacity to support up to seven server cabinets with complete network
redundancy, as long as the seven cabinets are within the maximum cable length to the switching equipment.
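The row-sizing arithmetic above is integer division of switch port capacity by per-cabinet connection count; the cable-reach constraint then caps the result. A minimal sketch using the figures from the example (the guide does not give a specific length limit, so distance is only noted, not computed):

```python
def max_cabinets_per_row(switch_ports: int, links_per_cabinet: int) -> int:
    """Upper bound on server cabinets one access switch can serve with
    full redundancy; cable reach to the switch must still be verified."""
    return switch_ports // links_per_cabinet

print(max_cabinets_per_row(336, 48))  # 7, matching the example row
```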
Figure 7. Top View of EoR Configuration (switch cabinets located in the middle of the row, serving server cabinets on both sides)
Figure 7 depicts a top view of a typical Pod design for an EoR configuration, and shows proper routing of connectivity from
a server cabinet to both access switches. Network equipment is located in the middle of the row to distribute redundant
connections across two rows of cabinets to support a total of 14 server cabinets. The red line represents copper LAN “A”
connections and the blue line represents copper LAN “B” connections. For true redundancy the connectivity takes two diverse
pathways using PANDUIT ® GRIDRUNNER ™ Underfloor Cable Routing Systems to route cables underfloor to each cabinet. For
fiber connections there is a similar pathway overhead to distribute all SAN “A” and “B” connections to each cabinet.
As applications continue to put even greater demands on the network infrastructure it is critical to have the appropriate cabling infrastructure in place to support these increased bandwidth and performance requirements. Each EoR-arranged switch cabinet is optimized to handle the high density requirements of the Cisco Nexus 7010 switch. Figure 8 depicts the front view of a Cisco Nexus 7010 switch in a NET-ACCESS™ Cabinet populated with Category 6A 10 Gigabit cabling leveraged for deployment in an EoR configuration. It is critical to properly manage all connectivity exiting the front of each switch. The EoR-arranged server cabinet is similar to a typical ToR-arranged cabinet (see Figure 5), but the characteristic access layer switches for both LAN and SAN connections are replaced with structured cabling.

Table 2 itemizes the hardware needed to support a typical EoR deployment in a data center with 16,800 square feet of raised floor space. The room topology for the EoR deployment is not drastically different from the ToR model. The row size is determined by the typical connectivity requirements for any given row of server cabinets. Most server cabinets contain a minimum of 24 connections and sometimes exceed 48 connections per cabinet. All EoR reference architectures are based around 48 copper cables and 24 fiber strands for each server cabinet.

Figure 8. PANDUIT® NET-ACCESS™ Cabinet with Cisco Nexus Switch
Data Center Assumptions                    Quantity
Raised Floor Square Footage                16,800
Servers                                    9,792
Cisco Nexus 7010                           50
Server Cabinets                            336
Network Cabinets (LAN & SAN Equipment)     108
Midrange & SAN Equipment Cabinets          132

Server Specifications:
2 x 2.66 GHz Intel Quad Core Xeon X5355
2 x 835 W Hot-Swap power supplies
16 GB RAM (4 x 4096 MB DIMMs)
2 x 146 GB 15K-rpm Hot-Swap SAS 3.5"

                                       Watts Per Device  Per Cabinet  Cabs Per Pod  Pods Per Room  Quantity  Total Watts
Servers                                      450              12           14            24          9,792    1,814,400
Cisco Appliance Allocation                 5,400               —            —             —              8       43,200
Cisco Nexus 7010 (Access)                  5,400               —            2            24             48      259,200
Cisco MDS (Access)                         5,400               —            2            24             48      259,200
Cisco Nexus 7010 (Core)                    5,400               —            —             —              2       10,800
Cisco MDS (Core)                           5,400               —            —             —              2       10,800
Midrange/SAN Equipment Cabinets            4,050               —            —             —            132      502,200
Midrange/SAN Switching MDS                 5,400               —            —             —              4       21,600
Midrange/SAN Switching Nexus 7010          5,400               —            —             —              4       21,600
                                                                                     Total Watts:            4,307,400
                                                                                     Total Kilowatts:         4,307.40
                                                                                     Total Megawatts:             4.31

Table 2. Room Requirements for Typical EoR Deployment
Considerations Common to ToR and EoR Configurations
PANDUIT is focused on providing high-density, flexible physical layer solutions that maximize data center space utilization
and optimize energy use. The following sections describe cabinet, cooling, and pathway considerations that are common to
all logical architectures.
Cabinets
Cabinets must be specified that allow for maximum scalability and flexibility within the data center. The vertically mounted
patch panel within the cabinet provides additional rack units that can be used to install more servers within the 45 rack units
available. These vertically mounted panels also provide superior cable management versus traditional ToR horizontal patch
panels by moving each network connection closer to the server network interface card, ultimately allowing a shorter patching
distance with consistent lengths throughout the cabinet.
Figure 9 represents a typical vertical patch panel deployment
in the NET-ACCESS ™ Server Cabinet. Complete power and data
separation is achieved through the use of vertical cable
management on both sides of the cabinet frame. The cabinet
also allows for both overhead and underfloor cable routing
for different data center applications. Shorter power cords
are an option to reduce the added cable slack
from the longer cords shipped with server hardware.
Using shorter patch cords and leveraging PANDUIT vertical
cable management integrated into the NET-ACCESS ™ Cabinet
alleviates potential airflow issues at the back of each server
that could result from poorly managed cabling.
Figure 9. Cabinet Vertical Mount Patch Panel
Thermal Management
Data center power requirements continue to increase at high rates, making it difficult to plan appropriately for the cooling systems needed to support your room. Cabinets play a critical role in managing the high heat loads generated by active equipment. Each cabinet will require different power loads based upon the type of servers being installed as well as the workload being requested of each compute resource. Understanding cabinet-level power requirements gives greater visibility into overall room conditions.

PANDUIT Laboratories’ research into thermal management includes advanced computational fluid dynamics (CFD) analysis to model optimal airflow patterns and above-floor temperature distributions throughout the data center. This data is then used to develop rack, cabinet, and cable management systems that efficiently route and organize critical IT infrastructure elements. Figure 10 represents an analysis done based upon the assumptions for the 16,800 square foot EoR deployment.

PANDUIT® NET-ACCESS™ Cabinets feature large pathways for efficient cable routing and improved airflow while providing open-rack accessibility to manage, protect, and showcase cabling and equipment. Elements such as exhaust ducting, filler panels, and the PANDUIT® COOL BOOT™ Raised Floor Assembly support hot and cold aisle separation in accordance with the TIA-942 standard. These passive solutions (no additional fans or compressors) minimize bypass air in order to manage higher heat loads in the data center and ensure proper equipment operation and uptime.
Figure 10. Data Center EoR Thermal Analysis (designed in a hot/cold aisle architecture with 12 x 20-ton CRAC units outside and 12 x 30-ton CRAC units inside, utilizing the ceiling plenum for return air; all perforated tiles at 25% open; peak temperature was 114°)
Pathways
The variety and density of data center cables means that there is no “one size fits all” solution when planning cable pathways. Designers usually specify a combination of pathway options. Many types and sizes are available for designers to choose from, including wire basket, ladder rack, J-hooks, conduit, solid metal tray, and fiber-optic cable routing systems. Factors such as room height, equipment cable entry holes, rack and cabinet density, and cable types, counts, and diameters also influence pathway decisions. The pathway strategies developed for ToR and EoR models all leverage the FIBERRUNNER® Cable Routing System to route horizontal fiber cables, and use the NET-ACCESS™ Overhead Cable Routing System in conjunction with a wire basket or ladder rack for horizontal copper and backbone fiber cables.

This strategy offers several benefits:
• The combination of overhead fiber routing system and cabinet routing system ensures physical separation between the copper and fiber cables, as recommended in the TIA-942 data center standard
• Overhead pathways such as the PANDUIT® FIBERRUNNER® Cable Routing System protect fiber optic jumpers, ribbon interconnect cords, and multi-fiber cables in a solid, enclosed channel that provides bend radius control, and the location of the pathway is not disruptive to raised floor cooling
• The overall visual effect is organized, sturdy, and impressive