The document provides a summary of the hardware, licenses, and features of a Data Domain system. It includes:
- Hardware information such as memory, disks, network cards, and enclosure details.
- License keys for shelf capacities in the active and archive tiers, as well as feature licenses for encryption, expanded storage, and secure multi-tenancy.
- Descriptions of the different licenses and what features they enable, such as encryption of the filesystem or sharing the system among multiple tenants.
2. 2
Autosupport Overview
========== GENERAL INFO ==========
GENERATED_ON=Sun Jun 14 06:55:32 PDT 2015 GENERATED_EPOCH_TIME=1434290132
TIME_ZONE=US/Pacific
VERSION=Data Domain OS 5.5.2.1-486029
SYSTEM_SERIALNO=1F41505202
CHASSIS_SERIALNO=1F41505202 Serial No. of Chassis and System
MODEL_NO=DD890 Model Number
HOSTNAME=dd890-1.lss.emc.com
LOCATION=Santa Clara, California
ADMIN_EMAIL=iplabstaff@emc.com
Filesystem has been up 43 days, 17:16.
General Info
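The GENERAL INFO section is plain KEY=VALUE text, so it is easy to pull into a script when reviewing many autosupports. A minimal sketch (the helper name is my own, not a Data Domain tool), using sample lines from the output above:

```python
# Minimal sketch: parse KEY=VALUE lines from an autosupport GENERAL INFO
# section into a dict. Lines without '=' are skipped.
def parse_general_info(lines):
    """Return a dict of KEY=VALUE pairs, skipping lines without '='."""
    info = {}
    for line in lines:
        if "=" in line:
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip()
    return info

# Sample lines taken from the output above.
sample = [
    "TIME_ZONE=US/Pacific",
    "VERSION=Data Domain OS 5.5.2.1-486029",
    "SYSTEM_SERIALNO=1F41505202",
    "HOSTNAME=dd890-1.lss.emc.com",
]
info = parse_general_info(sample)
print(info["VERSION"])   # Data Domain OS 5.5.2.1-486029
```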
3. 3
Autosupport Overview
Memory Usage Summary
Total memory: 96666 MiB Memory Information
Free memory: 14755 MiB
Total swap: 49151 MiB
Free swap: 49151 MiB
System Memory
The memory information is important if you want to upgrade the system.
Most systems are shipped with 2 possible memory configurations.
A Capacity Upgrade kit contains memory and an Expanded Storage license.
4. 4
Networking Hardware
Net Show Hardware
This example shows 2x 10 Gig Optical Ports available with no link
Net Show Hardware
-----------------
Port Speed Duplex Supp Speeds Hardware Address Physical Link Status
----- -------- ------- ----------- ----------------- -------- -----------
eth0a 1000Mb/s full 10/100/1000 00:a0:d1:cf:15:0d Copper yes
eth0b 1000Mb/s full 10/100/1000 00:a0:d1:cf:15:0c Copper yes
eth4a unknown unknown 1000/10000 00:1b:21:8f:d8:28 Fiber no
eth4b unknown unknown 1000/10000 00:1b:21:8f:d8:29 Fiber no
----- -------- ------- ----------- ----------------- -------- -----------
This section shows:
• Network Cards that are physically present
• Supported Link Speed
• Connection Speed – if Link is present
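When checking many ports, the link column can be extracted programmatically. A sketch using the rows from the example above, assuming the fields are whitespace-separated with no embedded spaces (as in this output):

```python
# Sketch: find ports without a physical link in "net show hardware" output.
# Rows copied from the example above; each row has seven whitespace-separated
# fields: port, speed, duplex, supported speeds, MAC, physical, link status.
rows = """\
eth0a 1000Mb/s full 10/100/1000 00:a0:d1:cf:15:0d Copper yes
eth0b 1000Mb/s full 10/100/1000 00:a0:d1:cf:15:0c Copper yes
eth4a unknown unknown 1000/10000 00:1b:21:8f:d8:28 Fiber no
eth4b unknown unknown 1000/10000 00:1b:21:8f:d8:29 Fiber no
""".splitlines()

no_link = []
for row in rows:
    port, speed, duplex, supp, mac, physical, link = row.split()
    if link == "no":
        no_link.append(port)

print("ports without link:", no_link)  # ['eth4a', 'eth4b']
```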
5. 5
Disk Status
Disk Status provides an overview of where installed disks are assigned.
This example shows a system with Extended Retention so there are disks in:
• Head Unit
• Active Tier – Where data is first ingested
• Archive Tier – Target for Long-term data based on migration policies
Disk Status
-----------
Normal - Storage operational
Disk States Active tier Archive tier Head unit
----------- ----------- ------------ ---------
In Use 238 84 3
Spare 17 6 1
TOTAL DISKS 255 90 4
----------- ----------- ------------ ---------
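The counts in this table should be internally consistent: per tier, In Use plus Spare accounts for every disk in TOTAL DISKS. A quick cross-check using the figures from the example:

```python
# Sketch: cross-check the Disk Status counts from the example above.
# For each tier, "In Use" + "Spare" should equal "TOTAL DISKS".
in_use = {"active": 238, "archive": 84, "head": 3}
spare  = {"active": 17,  "archive": 6,  "head": 1}
total  = {"active": 255, "archive": 90, "head": 4}

for tier in total:
    assert in_use[tier] + spare[tier] == total[tier], tier
print("disk counts are consistent for all tiers")
```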
6. 6
Enclosure Information
Enclosure show command
Provides a complete output of all hardware information.
• CLI – shows full details
• GUI – presents a selected subset of the information
The “enclosure show” command can be used with the following options:
• Chassis
• Controllers
• CPUs
• Fans
• I/O-cards
• Memory
• NVRAM
• Power supply
• Temperature-sensors
7. 7
Enclosure Information Continued
Enclosure show summary
In this example, Enclosure 1 (the DD890 head) has 4 internal disks
Each ES30 shelf has 15 Disks
Enclosure Show Summary
----------------------
Enclosure Model No. Serial No. State OEM Name OEM Value Capacity
--------- --------- -------------- ------ -------- --------- --------
1 DD890 1F41505202 Online 4 Slots
2 ES30 APM00111100155 Online 15 Slots
3 ES30 APM00111100151 Online 15 Slots
4 ES30 APM00111100154 Online 15 Slots
5 ES30 APM00111100574 Online 15 Slots
6 ES30 APM00111100157 Online 15 Slots
7 ES30 APM00111100156 Online 15 Slots
--------- --------- -------------- ------ -------- --------- --------
7 enclosures present.
Provides an overview of installed Disk Shelves.
8. 8
Enclosure Information continued
IO Card Information
This section provides I/O card information.
Slot indicates the card’s physical location.
Device indicates the card type.
Interface describes the cards:
• Slot 0 – 4 Port 1 Gb Ethernet
• Slot 2 – 2 Port 8 Gb Fibre Channel
• Slot 4 – 4 Port SAS2 Card
• Slot 11 – 2 Port 10 Gb Ethernet
Port gives the name of each specific port – more on this later.
9. 9
Enclosure Information continued
IO Card Information cont.
The last column describes the MAC or WWPN address.
• WWPN is used for FC zoning
• WWPN is configurable since DDOS 5.3
• Use the SCSI target command to configure it; more information is in the DDOS Command Reference Guide
Exercise extreme caution when configuring WWPNs.
The example shows a DD990 with Extended Retention enabled.
• Observe 4 SAS adapters with 4x 6 Gb ports each
This information can be seen in the Topology View later.
10. 10
Enclosure Information continued
Controller Information
• Model & Serial Number
• Disks in Controller Head
− Store Configuration
− Log files
• CPU details
− Model
− Speed
− Cores
See the Intel ARK site for CPU details.
Controller:
Model DD990
Capacity 4
Serial No 3FZ1723104
Number of Controllers 1
Controller 1
Interface LSISAS1068E B3
Firmware 01.27.06.00
Part No. L3-00159-02E
Serial No. SP23413163
Status OK
Controller HDDs:
Disk Slot Size(GB) Part No. Serial No. Rev
---- ---- -------- ------------------------ ---------- ----
1 1 600 HITACHI HUC10606_CLAR600 PZJRG1JD C330
2 2 600 HITACHI HUC10606_CLAR600 PZJRUS0D C330
3 3 600 HITACHI HUC10606_CLAR600 PZJRGBXD C330
4 4 600 HITACHI HUC10606_CLAR600 PZJRTSLD C330
---- ---- -------- ------------------------ ---------- ----
CPUs:
Model Stepping Speed(MHz) Cores Hyperthreading
----------------------------- -------- ---------- ----- --------------
Intel(R) Xeon(R) CPU E7- 4870 2 2394 10 disabled
Intel(R) Xeon(R) CPU E7- 4870 2 2394 10 disabled
Intel(R) Xeon(R) CPU E7- 4870 2 2394 10 disabled
Intel(R) Xeon(R) CPU E7- 4870 2 2394 10 disabled
----------------------------- -------- ---------- ----- --------------
11. 11
Enclosure Information continued
NVRAM Information
• Slot where the NVRAM card is installed
• Battery charging details
• If the batteries are not charged, the filesystem will not start.
NVRAM Cards:
Card Component Value
---- ------------------- -----------------------------------------------
1 Slot 6
Firmware version 1.2.6
Memory size 1.93 GiB
Errors 0 memory (0 uncorrectable), 0 PCI, 0 controller
Board temperature 37 C
CPU temperature 67 C
Number of batteries 3
Card model no 521-0012-0001-6
Card serial no F61105B30301
Battery model no 521-0021-0001
Battery serial no F61028M20013
---- ------------------- -----------------------------------------------
NVRAM Batteries:
Card Battery Status Charge Charging Time To Temperature Voltage
Status Full Charge
---- ------- ------ ------ -------- ----------- ----------- -------
1 1 ok 97 % enabled 0 mins 25 C 8.132 V
2 ok 97 % enabled 0 mins 25 C 8.126 V
3 ok 97 % enabled 0 mins 25 C 8.140 V
---- ------- ------ ------ -------- ----------- ----------- -------
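Since the filesystem will not start on uncharged batteries, the battery rows are worth checking first when triaging a system that will not come up. A sketch of such a health check; the 80% threshold is illustrative only, DDOS applies its own internal criteria:

```python
# Sketch: decide whether the NVRAM batteries look healthy. The 80% minimum
# charge is an illustrative threshold, not the actual DDOS rule.
def batteries_ready(batteries, min_charge=80):
    """batteries: list of (status, charge_percent) tuples."""
    return all(status == "ok" and charge >= min_charge
               for status, charge in batteries)

# Values from the example output: three batteries, all "ok" at 97%.
sample = [("ok", 97), ("ok", 97), ("ok", 97)]
print(batteries_ready(sample))                    # True
print(batteries_ready([("ok", 97), ("ok", 40)]))  # False
```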
12. 12
Enclosure Information continued
Enclosure show topology
• Observe dual-path connectivity from the controller to all 6 shelves (2-7)
− Confirm that both paths display correctly (Path A and Path B)
− If the results are not the same for each path, there might be an issue with a connector or a cable
• This order can change if the system is re-cabled
• The same information can be seen via “disk port show summary”
Enclosure Show Topology
-----------------------
Port enc.ctrl.port enc.ctrl.port enc.ctrl.port
---- - ------------- - ------------- - -------------
2a > 2.A.H: 2.A.E > 6.A.H: 6.A.E > 5.A.H: 5.A.E
2b > 4.B.H: 4.B.E > 7.B.H: 7.B.E > 3.B.H: 3.B.E
2c
2d
3a > 3.A.H: 3.A.E > 7.A.H: 7.A.E > 4.A.H: 4.A.E
3b > 5.B.H: 5.B.E > 6.B.H: 6.B.E > 2.B.H: 2.B.E
3c
3d
---- - ------------- - ------------- - -------------
13. 13
PCI Slot Information
System show hardware
This command shows the location of all PCI Cards, device type and the Ports
Detailed System PCI Info
------------------------
Slot Vendor Device Ports
---- ------------ ------------------------ --------------
0 Intel 82576 Gigabit 0a, 0b
1 (empty) (empty)
2 LSI Logic SAS31601E 2a, 2b, 2c, 2d
3 LSI Logic SAS31601E 3a, 3b, 3c, 3d
4 Intel Dual Port 10GbE(82599EB) 4a, 4b
5 Qlogic Corp. QLE2562 8Gb FC 5a, 5b
6 EMC DD00 NVRAM Card
---- ------------ ------------------------ --------------
14. 14
PCI Slot Information
System show hardware
This example shows a DD9500
• Slot M has unique Port naming
• System contains
− 4 Port 10Gb Ethernet Cards
− 2 Port 16Gb Fiber Channel Cards
15. 15
PCI Slot Information
disk show hardware
This command shows the hardware information of the disks: manufacturer, capacity and type.
Disk Show Hardware
------------------
Disk Slot Manufacturer/Model Firmware Serial No. Capacity Type
(enc/disk)
---------- ---- --------------------- -------- --------------- ---------- ----
1.1 1 WDC_WD1003FBYX-01Y7B0 01.01V01 WD-WCAW30677581 931.51 GiB SATA
1.2 2 WDC_WD1003FBYX-01Y7B0 01.01V01 WD-WCAW30321273 931.51 GiB SATA
1.3 3 WDC_WD1003FBYX-01Y7B0 01.01V01 WD-WCAW30673531 931.51 GiB SATA
1.4 4 WDC_WD1003FBYX-01Y7B0 01.01V01 WD-WCAW30700036 931.51 GiB SATA
2.1 0 HUA722020ALA330 JKAOA3FB B9K8W32F 1.81 TiB SATA
2.2 1 HUA722020ALA330 JKAOA3FB B9K68DGF 1.81 TiB SATA
2.3 2 HUA722020ALA330 JKAOA3FB B9K90XAF 1.81 TiB SATA
2.4 3 HUA722020ALA330 JKAOA3FB B9K8ZEBF 1.81 TiB SATA
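The capacity column mixes GiB and TiB, so converting to a common unit is useful when totalling per-enclosure raw capacity. A sketch using the values from the example (binary units, matching the GiB/TiB shown above):

```python
# Sketch: convert "disk show hardware" capacity strings to bytes and total
# them. Binary units (GiB/TiB) match the output shown above.
UNITS = {"GiB": 1024**3, "TiB": 1024**4}

def capacity_bytes(text):
    value, unit = text.split()
    return float(value) * UNITS[unit]

head_disks = ["931.51 GiB"] * 4   # enclosure 1 in the example
head_total = sum(capacity_bytes(c) for c in head_disks)
print(round(head_total / 1024**4, 2), "TiB raw in the head unit")  # 3.64
```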
16. 16
Licenses
Shelf Licenses
• Beginning with DDOS 5.1, a “Shelf License” is required for either:
− Active Tier
− Archive Tier (Extended Retention)
• A shelf license prevents Archive Tier shelves from being used in the Active Tier
• Contact the Sales Operations Team for shelf licensing issues
• Extended Retention shelves cannot be “converted” to Active Tier licensing
Licenses
--------
Feature : CAPACITY-ACTIVE
-------------------------
## License Key Model Capacity*
-- -------------------------------- ----- ---------
1 V6LK-K22T-EPRP-Z5RM-U6B4-3E3J-AC ES30 21.8 TiB
2 6L47-L9BC-DXFH-25NG-2T94-ZYRY-U2 ES30 21.8 TiB
3 R7RA-3A4U-83H8-FDKE-YTM1-A3DK-1D ES30 21.8 TiB
4 FZSS-MFS6-LZ7R-9WUL-D9DS-VTNW-TJ ES30 21.8 TiB
5 T8YV-JL7N-JP9E-H86F-9ZF5-UX5V-B5 ES30 21.8 TiB
6 9FA1-5PJ4-74AG-KFGB-ZZUK-KTZC-WW ES30 21.8 TiB
Feature : CAPACITY-ARCHIVE
--------------------------
## License Key Model Capacity*
-- -------------------------------- ----- ---------
1 X4RM-L5U8-J7YA-GG2B-YJYB-3LBK-XY ES30 21.8 TiB
2 YDTP-EMDP-N4P7-9G9L-RF1W-JU52-J5 ES30 21.8 TiB
3 ZC3H-6WCR-CP8A-MGDA-9RCT-UR77-95 ES30 21.8 TiB
4 4TCN-SY5L-BRFH-TLJ4-P7NM-ZN29-RW ES30 21.8 TiB
5 NHWW-V2R3-DLUS-12TP-2LSA-UMBR-7D ES30 21.8 TiB
6 SL87-N1M1-SR9Z-KXKC-SB3L-P8FN-VJ ES30 21.8 TiB
-- -------------------------------- ----- ---------
Licensed Archive Tier Capacity: 130.9 TiB*
17. 17
Licenses
Feature Licenses
• Feature Licenses enable additional DDOS features, e.g. Encryption.
• Depending on the DDOS version, you may see features listed that were not
available in prior versions. In this example, Secure-Multi-Tenancy.
• Licenses are purchased on a per-head basis.
• Feature Licenses are not transferable
− If you perform a head upgrade, the customer must purchase new licenses.
Feature licenses:
## License Key Feature
-- ------------------- --------------------
1 FXAT-GRGW-YRSE-TDFE ENCRYPTION
2 TZSX-SCWT-CSZA-FFED SECURE-MULTI-TENANCY
3 HDDH-ECZT-EYYR-SSAD DDBOOST
4 RBVZ-ATTC-BBHC-GDDF REPLICATION
5 TYHE-HWBR-GZHD-HADT VTL
-- ------------------- --------------------
18. 18
Licenses
Feature Licenses Expanded Storage
As already mentioned in the hardware part, if you want to expand a
system beyond a certain capacity, you will require a memory upgrade in
addition to an Expanded Storage license.
Feature licenses:
## License Key Feature
-- ------------------- ----------------
1 XZEG-HTXC-TGHW-TZFD ARCHIVER
2 WEWC-FVDB-DABA-AHVR EXPANDED-STORAGE
3 AAYV-CSCW-AYAY-GWBR DDBOOST
4 FXCC-WGSZ-BFDT-YVRB VTL
-- ------------------- ----------------
19. 19
Licenses
Feature License Encryption
The Encryption feature enables encryption of the complete filesystem of
a DD system. There is no further granularity for this feature.
Encryption uses the AES-128 or AES-256 standard – more details can be found
at: https://inside.emc.com/docs/DOC-26072
Feature licenses:
## License Key Feature
-- ------------------- --------------------
1 FXAT-GRGW-YRSE-TDFE ENCRYPTION
2 TZSX-SCWT-CSZA-FFED SECURE-MULTI-TENANCY
3 HDDH-ECZT-EYYR-SSAD DDBOOST
4 RBVZ-ATTC-BBHC-GDDF REPLICATION
5 TYHE-HWBR-GZHD-HADT VTL
-- ------------------- --------------------
20. 20
Licenses
Feature Licenses Secure Multi Tenancy
The Secure-Multi-Tenancy License was introduced in DDOS 5.5
It is used for customers sharing a single Data Domain System among
multiple “tenants.”
More detail is available at
https://inside.emc.com/docs/DOC-23573
Feature licenses:
## License Key Feature
-- ------------------- --------------------
1 FXAT-GRGW-YRSE-TDFE ENCRYPTION
2 TZSX-SCWT-CSZA-FFED SECURE-MULTI-TENANCY
3 HDDH-ECZT-EYYR-SSAD DDBOOST
4 RBVZ-ATTC-BBHC-GDDF REPLICATION
5 TYHE-HWBR-GZHD-HADT VTL
-- ------------------- --------------------
21. 21
Licenses
Feature Licenses DDBoost and Replication
DDBoost and Replication licenses are needed per Data Domain system,
source and target, to enable Managed File Replication.
Feature licenses:
## License Key Feature
-- ------------------- --------------------
1 FXAT-GRGW-YRSE-TDFE ENCRYPTION
2 TZSX-SCWT-CSZA-FFED SECURE-MULTI-TENANCY
3 HDDH-ECZT-EYYR-SSAD DDBOOST
4 RBVZ-ATTC-BBHC-GDDF REPLICATION
5 TYHE-HWBR-GZHD-HADT VTL
-- ------------------- --------------------
22. 22
Licenses
Feature License VTL / IBMi
There are two types of VTL feature Licenses
• Windows/Open Systems clients require a VTL License
• IBMi or AS/400 clients backing up via VTL require an I/OS License
Feature licenses:
## License Key Feature
-- ------------------- -----------
1 BESX-HSER-FYAC-TGBS I/OS
2 XTSR-ZYWE-ZGFW-FHAE REPLICATION
3 DCHF-SBCC-ASWG-TVGB VTL
-- ------------------- -----------
23. 23
Licenses
Feature License Gateway Systems
Gateway systems are no longer sold but, should you encounter a customer with a
legacy Data Domain Gateway system, you will find two types of licenses:
• Half Capacity
• Full Capacity
Licenses
--------
## License Key Feature
-- ------------------- -----------------
1 DZSG-BFSZ-GAWC-VHHF CAPACITY-FULLSIZE
2 YEVW-ZBYT-RSXD-VEDH REPLICATION
-- ------------------- -----------------
24. 24
Licenses
Feature License Archive
Another deprecated license you might encounter is Archivestore.
This was a $0 license intended to identify systems whose primary use case
was archiving rather than backup.
This identification strategy did not work because many Sales AMs
automatically added the license to systems since it was no cost.
25. 25
Autosupport Space Reporting
Filesys show space
Active Tier:
Resource Size GiB Used GiB Avail GiB Use% Cleanable GiB*
---------------- -------- --------- --------- ---- --------------
/data: pre-comp - 1750964.6 - - -
/data: post-comp 173462.2 130406.5 43055.8 75% 3464.6
/ddvar 132.9 18.8 107.3 15% -
---------------- -------- --------- --------- ---- --------------
* Estimated based on last cleaning of 2015/06/18 17:23:51.
GiB = gibibyte – https://en.wikipedia.org/wiki/Gibibyte
/data: pre-comp The amount of pre-compressed data on the system
/data: post-comp The amount of space actually used after dedup & compression
/ddvar: The space reserved for DDOS & logs
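The columns above are plain division. The following Python sketch (helper names are mine, and the column layout is assumed from the sample output rather than any official DDOS reference) parses one `/data:` row and recomputes the Use% column:

```python
# Hypothetical helpers for one "/data: ..." row of `filesys show space`;
# the column layout is assumed from the sample above. (The "/ddvar" row
# has a one-token resource name and would need different handling.)

def parse_space_row(row: str):
    """Split a "/data: ..." row into (resource, size, used, avail) in GiB."""
    parts = row.split()
    resource = " ".join(parts[:2])          # e.g. "/data: post-comp"
    size, used, avail = (float(p) for p in parts[2:5])
    return resource, size, used, avail

def use_percent(used: float, size: float) -> int:
    """Use% as reported: used space over usable (post-comp) size."""
    return round(100.0 * used / size)

row = "/data: post-comp 173462.2 130406.5 43055.8 75% 3464.6"
name, size, used, avail = parse_space_row(row)
print(name, use_percent(used, size))  # -> /data: post-comp 75
```

Dividing 130406.5 used GiB by 173462.2 usable GiB reproduces the 75% shown in the report.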
26. 26
Autosupport Space Reporting
Filesys show space
Active Tier:
Resource Size GiB Used GiB Avail GiB Use% Cleanable GiB*
---------------- -------- --------- --------- ---- --------------
/data: pre-comp - 1750964.6 - - -
/data: post-comp 173462.2 130406.5 43055.8 75% 3464.6
/ddvar 132.9 18.8 107.3 15% -
---------------- -------- --------- --------- ---- --------------
* Estimated based on last cleaning of 2015/06/18 17:23:51.
• Size GiB
/data: post-comp: usable space
This value may change – refer to https://support.emc.com/kb/181295
• Use%
Shows the amount of space used on your system. Refer to
https://support.emc.com/kb/181217 for more information on system utilization
27. 27
Autosupport Space Reporting
Filesys show space
Active Tier:
Resource Size GiB Used GiB Avail GiB Use% Cleanable GiB*
---------------- -------- --------- --------- ---- --------------
/data: pre-comp - 1750964.6 - - -
/data: post-comp 173462.2 130406.5 43055.8 75% 3464.6
/ddvar 132.9 18.8 107.3 15% -
---------------- -------- --------- --------- ---- --------------
* Estimated based on last cleaning of 2015/06/18 17:23:51.
Cleanable GiB* is an estimated value and NOT what you will gain as free space
after the next cleaning run. https://support.emc.com/kb/181047
Cleaning finished at 17:23 and was scheduled to start at 1 AM, so the run took 16 h 23 min
Filesystem Cleaning Configuration
---------------------------------
52 Percent Throttle
Filesystem cleaning is scheduled to run "Thu" at "0100".
DO NOT CLEAN MORE THAN ONCE A WEEK !!!
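As a quick sanity check on the 16 h 23 min figure, the duration is plain timestamp subtraction. This sketch assumes cleaning started exactly at the scheduled "0100" and finished the same day; a run crossing midnight would also need the weekday from the schedule line.

```python
from datetime import datetime, timedelta

# Sketch only: same-day start and finish assumed; the timestamps come
# from the schedule line ("0100") and the space report (17:23:51).

def cleaning_duration(start_hhmm: str, finish_hms: str) -> timedelta:
    """Elapsed time between a HHMM start and an HH:MM:SS finish."""
    start = datetime.strptime(start_hhmm, "%H%M")
    finish = datetime.strptime(finish_hms, "%H:%M:%S")
    return finish - start

print(cleaning_duration("0100", "17:23:51"))  # -> 16:23:51
```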
29. 29
Autosupport Space Reporting
Filesys show compression
From: 2015-06-16 16:00 To: 2015-06-23 16:00
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
(Reduction %)
--------------- --------- --------- ----------- ---------- -------------
Currently Used: 1750964.7 130406.5 - - 13.4x (92.6)
Written:*
Last 7 days 155311.1 7712.0 8.1x 2.5x 20.1x (95.0)
Last 24 hrs 39555.9 1345.3 12.3x 2.4x 29.4x (96.6)
--------------- --------- --------- ----------- ---------- -------------
Pre-Comp = Data written before compression
Post-Comp = Storage used after compression
Global-Comp Factor = Pre-Comp / (Size after de-dupe)
Local-Comp Factor = (Size after de-dupe) / Post-Comp
Total-Comp Factor = Pre-Comp / Post-Comp
Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100
Currently used shows the same information as filesys show space
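The formulas above can be checked numerically against the "Currently Used" row. A small sketch (function names are mine; the numbers are taken from the table):

```python
# Recomputing the factors from the formulas above, using the
# "Currently Used" row: 1750964.7 GiB pre-comp, 130406.5 GiB post-comp.

def total_comp_factor(pre_gib: float, post_gib: float) -> float:
    """Total-Comp Factor = Pre-Comp / Post-Comp."""
    return pre_gib / post_gib

def reduction_pct(pre_gib: float, post_gib: float) -> float:
    """Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100."""
    return (pre_gib - post_gib) / pre_gib * 100.0

pre, post = 1750964.7, 130406.5
print(f"{total_comp_factor(pre, post):.1f}x")  # 13.4x, as reported
print(f"{reduction_pct(pre, post):.1f}%")      # 92.6%, as reported

# Note: Global-Comp x Local-Comp = Total-Comp by construction, since
# (pre / deduped) * (deduped / post) == pre / post for any deduped size.
```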
30. 30
Autosupport Space Reporting
Filesys show compression
From: 2015-06-16 16:00 To: 2015-06-23 16:00
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
(Reduction %)
--------------- --------- --------- ----------- ---------- -------------
Currently Used: 1750964.7 130406.5 - - 13.4x (92.6)
Written:*
Last 7 days 155311.1 7712.0 8.1x 2.5x 20.1x (95.0)
Last 24 hrs 39555.9 1345.3 12.3x 2.4x 29.4x (96.6)
--------------- --------- --------- ----------- ---------- -------------
Pre-Comp: pre-compressed data sent to the system
Post-Comp: data stored on the system – approximates the amount of data that would need to be replicated
Global-Comp: effect of deduplication
Local-Comp: effect of local compression (LZ or GZ)
31. 31
Autosupport Space Reporting
Filesys show compression
Please read these documents for further clarification of filesys show
compression:
https://support.emc.com/docu48770_Case-Study-of-Filesys-Show-Compression-and-Filesys-Show-Space.pdf?language=en_US
https://emc--c.na5.visual.force.com/articles/How_To/86266-Understanding-DataDomain-Compression
32. 32
Autosupport Space Reporting
Filesys show compression
• Overview for understanding the effects of
dedup, local compression, and the metadata
related to a backup.
• Not usable for space reporting since it
does not reflect deletions, as mentioned
in the articles above
File System Compression (total, type 9)
---------------------------------------
Files: 139,191; bytes/storage_used: 17.3
Original Bytes: 1,877,774,018,815,796
Globally Compressed: 262,632,674,536,121
Locally Compressed: 107,938,983,799,644
Meta-data: 835,251,098,900
File System Compression (last 7 days, type 9)
---------------------------------------------
Files: 3,585; bytes/storage_used: 18.8
Original Bytes: 186,041,235,236,773
Globally Compressed: 23,539,408,407,960
Locally Compressed: 9,797,695,970,104
Meta-data: 74,271,551,688
File System Compression (last 24 hours, type 9)
-----------------------------------------------
Files: 685; bytes/storage_used: 20.2
Original Bytes: 56,627,165,012,843
Globally Compressed: 6,313,408,878,904
Locally Compressed: 2,789,103,230,415
Meta-data: 19,907,475,352
33. 33
Autosupport Space Reporting
File Distribution
• Number and age of backup files on the system
• Can help in understanding data retention, depending on the software and
protocol used.
File Distribution
-----------------
139,191 files in 49,066 directories
Count Space
----------------------------- --------------------------
Age Files % cumul% GiB % cumul%
--------- ----------- ----- ------- -------- ----- -------
1 day 685 0.5 0.5 52738.2 3.0 3.0
1 week 2,900 2.1 2.6 120526.2 6.9 9.9
2 weeks 3,946 2.8 5.4 138273.0 7.9 17.8
1 month 8,631 6.2 11.6 316196.9 18.1 35.9
2 months 6,863 4.9 16.5 407256.6 23.3 59.2
3 months 108,596 78.0 94.6 572292.0 32.7 91.9
6 months 6,572 4.7 99.3 139919.8 8.0 99.9
1 year 409 0.3 99.6 1610.7 0.1 100.0
> 1 year 589 0.4 100.0 0.0 0.0 100.0
--------- ----------- ----- ------- -------- ----- -------
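The % and cumul% columns follow directly from the per-bucket counts. This sketch recomputes the cumulative file percentages from the raw counts in the table above (values copied from it):

```python
# Recomputes the cumul% column of the file-distribution table from the
# raw per-bucket file counts shown above.

def cumulative_pct(counts):
    """Running percentage of the total, rounded to one decimal place."""
    total = sum(counts)
    running, result = 0, []
    for c in counts:
        running += c
        result.append(round(100.0 * running / total, 1))
    return result

# File counts for the buckets "1 day" ... "> 1 year" from the table.
files = [685, 2900, 3946, 8631, 6863, 108596, 6572, 409, 589]
print(sum(files))             # 139191, matching the header line
print(cumulative_pct(files))  # [0.5, 2.6, 5.4, 11.6, 16.5, 94.6, 99.3, 99.6, 100.0]
```

The output matches the cumul% column, and the bucket sum matches the "139,191 files" header.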
37. 37
Consumption
• Reports details of
• daily capacity consumption
• 13 most recent days
• weekly consumption
• 13 most recent weeks
• Reconciles available space resulting from
• Consumption
• Data ingest
• Data received via replication
• DDFS overhead (metadata+fragmentation)
• Space Reclaimed from Cleaning
• Data deleted after filesystem cleaning
• This report shows:
• available space trends
• capacity usage trends
Space Consumption
Formula
38. 38
Mtree Statistics
Mtree list shows the amount of data
written to a specific Mtree
Mtree List
------------
Name Pre-Comp (GiB) Status
-------------------- -------------- ------
/data/col1/S-Barcode 686169.4 RW
/data/col1/backup 896735.1 RW
/data/col1/coruscant 168041.1 RW
-------------------- -------------- ------
D : Deleted
Q : Quota Defined
RO : Read Only
RW : Read Write
RD : Replication Destination
RLGE : Retention-Lock Governance Enabled
RLGD : Retention-Lock Governance Disabled
RLCE : Retention-Lock Compliance Enabled
39. 39
Mtree Statistics
Mtree show compression is similar to filesys show compression but on a per-Mtree basis.
Mtree Show Compression /data/col1/S-Barcode
---------------------------------------------
From: 2015-06-16 16:00 To: 2015-06-23 16:00
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
(Reduction %)
------------- -------- --------- ----------- ---------- -------------
Written:*
Last 7 days 46679.0 2178.0 7.5x 2.8x 21.4x (95.3)
Last 24 hrs 23368.8 668.4 13.3x 2.6x 35.0x (97.1)
------------- -------- --------- ----------- ---------- -------------
* Does not include the effects of pre-comp file deletes/truncates
40. 40
Griffin Integration with Avamar
Avamar uses a different reporting concept than Data Domain.
Because every backup is a full, Avamar reports the volume of data scanned rather than the amount of
data transmitted.
This leads to numbers that are astronomically large compared to what you would expect.
Unfortunately, these numbers are unusable for DD statistics and quotas.
From a Data Domain perspective, the parts in red are incorrect and the blue ones are correct
Filesys Compression
--------------
From: 2015-06-18 06:00 To: 2015-06-25 06:00
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
(Reduction %)
--------------- ---------- --------- ----------- ---------- -------------
Currently Used: 52764703.0 305557.3 - - 172.7x (99.4)
Written:*
Last 7 days 7804221.2 21939.9 169.9x 2.1x 355.7x (99.7)
Last 24 hrs 689837.1 1831.5 173.9x 2.2x 376.7x (99.7)
--------------- ---------- --------- ----------- ---------- -------------
41. 41
Griffin Integration with Avamar
Mtree Statistics
The parts in red are not correct, the blue ones are
Mtree Show Compression /data/col1/avamar-1409074175
---------------------------------------------
From: 2015-06-18 06:00 To: 2015-06-25 06:00
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
(Reduction %)
------------- --------- --------- ----------- ---------- -------------
Written:*
Last 7 days 6956516.2 17971.9 180.5x 2.1x 387.1x (99.7)
Last 24 hrs 689173.2 1810.9 175.2x 2.2x 380.6x (99.7)
------------- --------- --------- ----------- ---------- -------------
* Does not include the effects of pre-comp file deletes/truncates
Mtree Show Compression /data/col1/Default
---------------------------------------------
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
(Reduction %)
------------- -------- --------- ----------- ---------- -------------
Written:*
Last 7 days 65525.9 1880.0 24.3x 1.4x 34.9x (97.1)
Last 24 hrs 421.0 17.9 15.0x 1.6x 23.5x (95.7)
------------- -------- --------- ----------- ---------- -------------
Editor's Notes
Hello and Welcome to EMC Presales Accreditation for Q3 2015.
My name is Russell Brown and I work for EMC in Sydney, Australia.
After many years in Data Protection Presales, I now manage the team who is responsible for your quarterly accreditation along with many other programs related to Sales and Presales "Enablement" for EMC's "Core Technologies".
As we close out 2015 and start into 2016, you will see many new approaches to "Enablement".
During this transition, please feel free to reach out to us and keep us informed about what is working and which areas could use some attention. We will do our best to respond to your needs so that you can better support our customers and our partners.
As you are probably aware, there is a mountain of information contained in each Data Domain Autosupport.
Rather than cover everything in a single 8 hour long training, what we have decided to do is to create an ongoing series of modules to focus on each individual component of the autosupport.
This Data Domain Autosupport module is the first module in that series and it will cover two portions of the Autosupport.
Those two parts are:
Hardware configuration
and
Licenses installed on a Data Domain system.
All other parts of the Autosupport will be covered during future installments in this series.
In the first part of the autosupport you will find the General Info section which contains all information about timezone, DDOS Version, Serial Number and Model Number of this specific system.
You can also see the current hostname, location and the Admin Email that has been entered for this system.
System Memory is shown further down in the autosupport.
The memory information is important in case you want to upgrade systems to the maximum storage capacity.
The larger systems can be purchased with 2 different Memory Configurations.
To upgrade the capacity, a kit will be required which contains both Physical Memory modules and an Expanded Capacity License
Networking Hardware
This part shows the Network Cards that are physically present and the link speed.
In this example you can see that the system has two 10 Gig Optical Ports with no link
Disk Status gives a short overview of the installed disks and where they are assigned from a logical point of view.
This example shows a system that is using extended retention software.
The Active Tier is the Area where data is initially written.
The Archive Tier is the area where data is moved for long term retention
An example of this would be - backup data that is kept for a long period of time that meets one of the configured migration policies would be moved from the Active Tier to the Archive Tier
The Enclosure Show command provides a complete output of all hardware information.
It is part of the autosupport output but it can also be issued from the Command Line Interface.
Some parts of the information is also visible in the GUI but some information can only be seen via CLI
The "Enclosure Show" command can be used with several options:
One example is Chassis
This would be typed: enclosure show chassis enclosure 1
Other options are:
Controllers
Cpus
Fans
Io-cards
Memory
Nvram
Powersupply
Temperature-sensors
This command can show a lot of details like Fan Speed, Temperature and more.
Each of these options will not be covered in depth in this module since these options are primarily used by tech support.
If you are interested in the details of these options, please run the command on your local Data Domain system and check the output.
Enclosure show summary provides an overview of the installed disk shelves.
In this example you can see enclosure 1
which is the DD890 Head with
4 internal disks in the head unit and
6 ES30 shelves with 15 disks each
The IO Card information gives you an overview of all cards in your Data Domain System.
Slot gives the physical location in the chassis,
Device gives you the type of interface.
Ports shows the connectivity for the specific cards and
the Address gives you an address depending on the type of the card.
More information about this is on the next slide
The last column describes the Mac or WWPN address.
WWPN will be used for FC Zoning.
With DD OS 5.3 or later, this information can be modified using the SCSI Target command.
For more information please refer to the DDOS Command Reference Guide and handle with care.
This example shows a DD990 with Extended Retention enabled so you can see 4 SAS Adapters with 4 ports each.
This information can be seen again later when we look at the Topology View.
In this slide you can see the specific information about the Controller – or Enclosure 1
It shows the Model Number and the Serial Number of the System
This Head contains 4 600 GB Disks which are used for storing configuration information and internal files.
You can also see the CPUs that are used in this DD990; this is a question that is often raised by customers.
Although it is not an EMC site, here is a link to the Intel web page which provides more performance and configuration details for the CPUs used in this DD990.
Although it doesn't really matter, customers who like science projects and those who enjoy theorizing about how Data Domain could be improved, often want to know which chipset or processor family the Data Domain CPUs belong to – Nehalem, Sandy Bridge, etc.
In this case the CPUs are from the Westmere-EX family
http://ark.intel.com/products/53579/Intel-Xeon-Processor-E7-4870-30M-Cache-2_40-GHz-6_40-GTs-Intel-QPI
This Slide shows the NVRAM Card configuration.
The Slot shows the physical location in the chassis.
In this example only one NVRAM Card is present but some models can use 2 NVRAM cards. For the DD890 there are two configurations. Older models use 2 NVRAM Cards and newer models use 1 NVRAM Card.
In case the Batteries go bad over time the cards will need to be replaced.
When installing a new DD System the Batteries need to be charged otherwise the filesystem will not start.
The Enclosure show topology command shows the dual path connectivity from the controller to the Expansion shelves.
The details shown include the Enclosure Number
The Controller for the Shelf - either A or B
The Cable Port location either - Host or Expansion
The way I think of the cabling sequence is that the Host port is always the one in the chain closest to the host.
Whereas the Expansion port might not have another shelf connected to it.
If you're familiar with the old days of SCSI termination, you could think of this as being similar to the end of the SCSI chain.
Verify that each shelf is connected down two paths.
In this example,
"Path A" shows Port "a" on SAS Controller "2" being connected to Shelves 2, 6 and 5 in sequence on the "A" controllers.
"Path B" shows Port "b" on SAS Controller "3" being connected to Shelves 5, 6 and 2 in sequence on the "B" controllers.
If this order becomes interrupted, you might have a cable or connector problem.
Unlike most other storage systems, Data Domain Systems are not dependent on the order of the shelves.
So, even if the shelves are re-cabled in a different order – the system will continue to work.
This information can also be viewed using the command
disk port show summary
which is also part of the autosupport output
System show hardware shows the location of all PCI Cards and the Ports.
In this example you can see the internal Gigabit Ethernet card in Slot 0 and 2 LSI SAS Adapters in Slots 2 and 3
This example shows a DD9500 output of system show hardware.
You can see Slot M with specific Port Naming
The system also contains 4 Port 10 Gb Ethernet Cards and 2 Port 16 Gb Fiber Cards
Disk show hardware shows the hardware information from the disks, This includes the Manufacturer and most importantly the capacity of the disks.
Beginning with DDOS 5.1, Expansion shelves require a license when being added to a System.
There are Licenses per Tier.
The Licenses separate the shelves for use in either the Active Tier or the Extended Retention Tier.
Shelves cannot be added to a different type of tier.
This means that you cannot add an Extended retention Tier Shelf to an Active Tier.
Please contact the Sales Operations Team in case you need to switch licenses for use in a different Tier.
Licenses enable special features.
For example, Encryption.
Depending on the Version of the DDOS you may see different features being added.
In this example you see Secure Multi Tenancy.
Licenses are sold on a per Head basis regardless of how you use the specific function.
If you want to perform DD Boost backups you will need a DD Boost license. The single license is good regardless of whether you're backing up 1 TB or 1 PB of data.
Licenses are attached to the head which means, if you perform a head upgrade you will also need to purchase new Licenses.
As already mentioned in the discussion about Memory,
if you want to expand a system beyond a certain capacity you will require a Memory Upgrade as well as an Expanded Storage License
The Encryption Feature allows the encryption of the complete Filesystem of a Data Domain System.
There is no lower level granularity.
For example, I can not encrypt just one selected MTree.
I must encrypt the entire filesystem.
Encryption can use AES 128 or AES 256 standards.
For further information please refer to the inside emc page DOC-26072
The Secure Multi Tenancy License was introduced with DDOS 5.5 and DDBoost Version 3.0.
It allows the usage of Secure Multi Tenancy features.
More Details on this feature can be found on the Inside EMC Doc 23573
DDBoost and Replication Licenses are needed on all hosts that will use the DDBoost protocol for Clone Controlled or Managed File Replication.
There are two types of VTL use cases.
If you perform “normal” open-systems VTL backups, that's your Windows, UNIX, Linux, you will need a VTL License.
If you need to perform VTL backups of IBM/i (formerly known as iSeries or the AS/400), you will also need an I/OS License
This license is pretty much just FYI - in case you happen to see one and wonder what it is.
The Data Domain Gateway Systems, which are no longer sold, had a licensing model which had two options. Half and Full capacity.
Another deprecated license you might encounter is Archivestore.
This was a $0 license intended to identify systems whose primary use case was archiving rather than backup.
This identification strategy did not work because a lot of Sales AMs automatically added the license to the system because it was no cost.
The command being used to display this information is filesys show space.
All numbers displayed here are in Gibibytes
If you don’t know what a Gibibyte is, please ask Michael Colby.
Oh no… wait a minute … It looks like either Michael or Ditmar were nice enough to include a Wikipedia link.
At this level, “Filesys Show Space” has no granularity.
The reporting does not differentiate between storage Tier or MTree or any other allocation within the DD Filesystem.
Filesys Show Space only shows the view of the entire filesystem.
A later slide will address reporting on Extended Retention capacity.
/data/pre-comp shows the amount of pre-compressed data stored on the complete system regardless of application and protocol that was used to write the data to the Data Domain system.
/data/post-comp shows the amount of space actually used after the effects of deduplication and compression.
/ddvar shows the space that is used for the DDOS and its logfiles. This space cannot be modified.
The Size GiB for /data: post-comp is also referred to as the usable space.
This is the quantity of physical storage available after accounting for RAID-6.
In this example, the usable space is 173 TiB.
130 TiB have already been used and there are still 43 TiB available.
This means that 75% of the physical storage space has been consumed which is shown in the “Use%” column.
The value of the usable space is not a fixed number.
Metadata and structural overhead in the filesystem will cause 2 identical systems to report different amounts of usable space even though their hardware is exactly the same.
This is a common question customers will want to have explained to them.
Please refer to the Knowledge Base Article 181295 for further information on these calculations.
Please monitor the Use% number and make sure that you do not exceed 85% of system capacity utilization.
For further information on this topic please refer to the knowledge base article - 181217
The value for the number of Cleanable Gibibytes is only an estimated value.
It is not possible to determine in advance the amount of data storage space that will be freed with a cleaning run.
For further information on this estimate please see the Knowledge Base Article 181047.
The cleaning in this example finished at 5:23 PM so, checking the autosupport further, you can see that cleaning is scheduled to run on Thursday at 1AM.
That means that this system took 16 hours and 23 minutes to run cleaning.
A healthy system should be able to finish cleaning within 24 hours.
Do not clean more than once a week because this will result in Segment locality issues.
If you think that you need to run cleaning more frequently, you probably need more capacity.
This example shows the filesys show space output of a data domain system using extended retention software.
The Archive Tier, what we now call Extended Retention tier, is an independent deduplication area and has its own deduplication statistics.
Filesys show compression is another command that shows filesystem usage.
Currently, this command will show the same information that you have already seen in filesys show space.
The Last 7 days and last 24 hours show the amount of data sent to the System and how much was stored in these timeframes.
The Post-Compression Number is a valid measure of the amount of traffic you would need to replicate in a specific timeframe such as daily or weekly.
Global Compression factor shows the effect of deduplication
Local Compression factor shows the effect of the compression that happens after the deduplication of the data
This is based on the compression method enabled for DDFS – either LZ or GZ
Total Compression Factor is the result of the effects of both Global and Local compression.
This number is what most people quote as the deduplication rate.
This course does not have a quiz but,
for your own personal knowledge about filesys show compression we recommend you review the two documents listed on this page.
As stated in the link on the previous slide,
filesys show compression helps you understand the overall effect of deduplication and local compression.
It also shows the Meta-data related to the specific backups.
This information is not usable for space reporting because it does not reflect file deletions correctly
Reviewing the file distribution can help you understand how a system is being used.
For example, if a customer is using NetWorker, you will see a large number of files because every Advanced File Type Device or DD Boost Device configured will dynamically create 100 directories each with 100 sub-directories.
For a NetWorker system with 10 Data Domain devices configured, you would expect to see at least 100 x 100 x 10 = 100,000 files
The file Age distribution can help explain further information about the usage of the system.
In mixed data type environments, or if certain software like TSM is used, interpreting the data aging is extremely difficult and can lead to misunderstandings about the nature of the data.
In this example you see a TSM Filedevice on a Data Domain.
This file distribution provides no insight into data retention since TSM only backs up new and changed files over time.
The “file distribution by size” helps you understand the size of your files – a lot of 1Kibibyte files typically points to NetWorker.
The size of files depends on the behavior of the application.
NetWorker for example creates files that are the size of the saveset.
In this example you can see a mixed use case with both Filedevices and VTL.
The majority of data is stored in 400 GB large files which represent Virtual Tape Cartridges
If the analysis of this information does not jump off the page at you, please take a look at the fact that 3% of the files on the system contain 91% of the capacity used.
I’ve used red and green boxes to highlight this.
This is a TSM example where TSM is using a Filesize between 10 and 50 GB
Space consumption reporting provides the details of daily capacity consumption for the last 13 days and for the past 13 weeks.
This report also provides details about the components of that space usage.
Those components are:
space consumed by local data ingest
space consumed by data received via replication
space consumed as a result of DDFS overhead
space freed up after file system cleaning completes and space from deleted data is reclaimed
This report has been referred to by some as a “space burn” report.
It can provide the basis for a trend analysis of capacity usage.
This report indicates whether available space is increasing or decreasing over time and what are the components of that space consumption.
For example:
Available space could be decreasing because more data is being ingested locally as a result of data growth or due to the addition of new data sets not previously backed up.
Space could be decreasing because the amount of data ingested via replication is increasing.
Space could be decreasing because less space is being reclaimed by file system cleaning, which could be an indication that retention times are increasing.
Mtree list gives you an overview on the amount of data that was written to each Mtree as well as the state of each Mtree.
Mtree show compression has the same behavior as filesys show compression.
It shows the amount of data ingested during the last 7 days and in the past 24 hours.
This report provides de-duplication statistics for each Mtree.
Now let’s have a look at the Avamar / Griffin Integration.
Avamar has a completely different data architecture and was not originally designed to write to a Data Domain System.
Because Avamar does Full backups only, it reports on the total size of the data scanned on each client.
These numbers are reported to Data Domain although due to the nature of Avamar’s client side de-duplication, that full volume of data is never sent to the Data Domain system.
This results in a huge amount of data “virtually” transmitted to the Data Domain.
These numbers cause confusion with the Data Domain statistics.
In this example you see a report that Avamar has sent 7.8 Petabyte of data to DD during the past 7 days.
This is an average of 1.1 PB per day.
In reality, only about 3 TB per day has been transmitted to the Data Domain system.
These reporting discrepancies lead to situations like the case where quotas can not be managed for an Mtree with an Avamar workload.
This will be explained in the next slide.
This example shows that in a mixed use case situation, you cannot rely on the Avamar reported numbers.
You will need to manually sum the Non-Avamar Mtrees to create a valid pre-comp statistic.
The Avamar Mtree displays some outlandish numbers but the numbers shown for the Default Mtree are realistic.