ttec infortrend ds
Presentation on the New Infortrend DS series RAID systems.

Usage Rights

© All Rights Reserved

ttec infortrend ds: Presentation Transcript

  • Infortrend DS series RAID
    Marc van Schijndel, Managing Director, ttec
  • What is EonStor DS?
    • Entry-level DAS and SAN storage solution
    • Target customers: SMBs; enterprise remote sites or departments
    • Key applications: database and email; virtualization and cloud datacenter; media editing and broadcasting; high-performance computing; data backup and archiving
  • What is EonStor DS?
    • Interface: FC (8G), iSCSI (1G/10G), SAS (6G) host ports; single or dual controllers
    • Form factors: 2U/12-bay, 3U/16-bay, 4U/24-bay
    • Drives: SAS HDD, NL-SAS HDD, SATA HDD, SSD
    • Scalability: up to 240 drives
  • What is EonStor DS?SoftwareSANWatch management suiteIncluded snapshot and volume copy/mirror functionsOptional thin provisioning and remote replication (via FC or iSCSI) functions 4
  • Local replication – snapshots
    • Read-only virtual point-in-time (PIT) data copies, recording the data state at a given moment
    • Uses: map a snapshot to a host so that accidentally deleted files can be restored; roll back a source volume attacked by viruses or hackers to its previous healthy state
    • Benefits: space efficiency; backup window as short as 10 minutes; granular recovery points; flush agent available for MS SQL and Exchange
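The space-efficient PIT snapshot described above is typically implemented with copy-on-write: a snapshot costs nothing until a block is overwritten, at which point the old block is preserved. The following is an illustrative sketch of that general technique, not Infortrend's firmware; all class and method names are invented.

```python
# Minimal copy-on-write snapshot model. A snapshot records nothing at
# creation time; old block contents are saved only when overwritten.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block index -> current data
        self.snapshots = []             # one dict of preserved blocks per snapshot

    def take_snapshot(self):
        """Record a new point-in-time view; uses no space until writes occur."""
        self.snapshots.append({})
        return len(self.snapshots) - 1  # snapshot id

    def write(self, index, data):
        # Copy-on-write: preserve the old block in every snapshot that
        # has not yet saved its own copy of this block.
        for snap in self.snapshots:
            if index not in snap:
                snap[index] = self.blocks.get(index)
        self.blocks[index] = data

    def read_snapshot(self, snap_id, index):
        """Read a block as it was at snapshot time."""
        snap = self.snapshots[snap_id]
        if index in snap:
            return snap[index]          # block changed since the snapshot
        return self.blocks.get(index)   # block unchanged since the snapshot

vol = Volume({0: "A", 1: "B", 2: "C"})
s0 = vol.take_snapshot()
vol.write(1, "D")                       # overwrite block 1 after the snapshot
print(vol.read_snapshot(s0, 1))         # the snapshot still sees "B"
```

Restoring an accidentally deleted file then amounts to reading it back through the snapshot view, exactly the "map to host" use case on the slide.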
  • Local replication – volume copy/mirror
    • Volume Copy: immediately usable (read/write) point-in-time (PIT) full data copies within a single system
    • Uses: support secondary applications such as analysis, testing and backup
    • Benefits: minimize performance degradation and data corruption risk of the production volume
  • Local replication – volume copy/mirror
    • Volume Mirror: sync or async full data copies within a single system
    • Uses: restart services within minutes after a fatal source-volume failure
    • Benefits: the highest data availability users can achieve in a single-system configuration
  • Remote replication
    • Sync or async full data copies across systems, via FC or iSCSI host ports
    • Supports remote snapshots: used with async mode, a snapshot is taken whenever a synchronization session ends
    • Uses: restart services in minutes after a disaster; centralize backup data; achieve continuous data protection while offloading the source system
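The async flow above can be sketched as a loop of synchronization sessions: changed blocks are shipped to the target, and a snapshot of the target is taken when each session ends, giving a consistent recovery point. This is a conceptual model under assumed names, not the actual firmware protocol.

```python
# Async remote-replication sketch: ship dirty blocks, then snapshot the
# target so each completed session leaves a consistent recovery point.
def sync_session(source, target, dirty, history):
    """Replicate the changed blocks, then record a remote snapshot."""
    for index in sorted(dirty):
        target[index] = source[index]   # block shipped over FC/iSCSI
    dirty.clear()
    history.append(dict(target))        # remote snapshot = recovery point

source = {0: "A", 1: "B"}
target = {}
history = []                            # snapshots taken on the target side

dirty = set(source)                     # first session: full initial copy
sync_session(source, target, dirty, history)

source[1] = "B2"                        # application keeps writing
dirty.add(1)                            # only block 1 is dirty now
sync_session(source, target, dirty, history)

print(history[0][1], history[1][1])     # -> B B2
```

After a disaster, services restart from the newest target snapshot; older snapshots provide the earlier recovery points the slide mentions.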
  • Thin provisioning
    • Capacity is dynamically allocated when data blocks are written
    • Cost benefits: delay equipment acquisition; optimize capacity utilization (from 30-40% to 80-90%); reduce power, space and cooling expenses
    • Management benefits: expand volumes without downtime; simplify capacity planning; eliminate per-volume utilization monitoring efforts
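Allocate-on-write is the core of thin provisioning: the volume advertises a large logical size, but physical blocks are consumed only when data is actually written. A minimal sketch, with invented names rather than any real Infortrend API:

```python
# Thin-provisioned volume sketch: logical size is promised up front,
# physical allocation happens lazily on each first write to a block.
class ThinVolume:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks
        self.allocated = {}             # logical block -> data (physical use)

    def write(self, index, data):
        if not 0 <= index < self.logical_blocks:
            raise IndexError("write beyond advertised logical size")
        self.allocated[index] = data    # physical space is consumed here

    def utilization(self):
        """Fraction of the advertised capacity actually consumed."""
        return len(self.allocated) / self.logical_blocks

vol = ThinVolume(logical_blocks=1000)   # advertise 1000 blocks immediately
for i in range(350):
    vol.write(i, b"x")                  # only 350 blocks physically allocated
print(f"{vol.utilization():.0%}")       # -> 35%
```

Because unused logical capacity consumes no physical drives, utilization of the purchased drives can be pushed far higher than with fully pre-allocated volumes, which is the 30-40% to 80-90% improvement claimed on the slide.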
  • High availability – hardware
    • Redundant, hot-swappable controllers, power supplies and fans
    • CacheSafe technology (BBU + flash): during an accidental power outage, cache data is written to the flash module for permanent protection
    • RAID 6 protection, with less than 10% performance degradation compared to RAID 5
    • LEDs and alarms on components
  • High availability – firmware/software
    • Detect drive problems and automatically clone data, performed at a scheduled time
    • Extensive system diagnostics combined with a real-time notification mechanism
    • Local replication (snapshot + volume copy/mirror) and remote replication
    • Integration: multipathing support, clustering support
  • Management of RAID – how?
    • LCD panel
    • Embedded RAIDWatch
    • SANWatch
    • Telnet
    • RS-232 serial port
  • Management of RAID – how? LCD panel
    Example display: S16F-R1840 P No Host LUN
  • Management of RAID – how? Embedded RAIDWatch
    1. Connect the RAID to the network
    2. Power on and check the IP
    3. Open a browser: http://<RAID_IP>/index.html
  • Management of RAID – how? Telnet
  • Management of RAID – how? RS-232
  • Management of RAID – how? SANWatch
    • RAID configuration: task-oriented graphical interface; configurable firmware options in detail
    • Component view: real-time display of components
    • Logical view and powerful utilities; interactive display
    • Central management: central view of multiple arrays
    • Configuration client: event notification via email, fax, LAN broadcast, SNMP traps, etc.
    • Local or remote access: in-band, out-of-band
  • SANWatch
    A single management interface shared by multiple product families (ESVA, EonStor DS, EonStor)
    • Reduced learning curve
    • Centralized management
  • Proactively troubleshoot performance degradation caused by slow, defective drives; each mini-screen indicates the status of each disk drive
  • Data Service Details
    • Local replication (snapshot + volume copy/mirror), free (Q1 2012)
    • Thin provisioning (Q2 2012)
    • Remote replication (Q2 2012)

    Function                                    Standard   Advanced
    Snapshot
      Max. snapshots per source volume          64         256
      Max. snapshots per system                 1024       4096
    Volume Copy/Mirror
      Max. replication source volumes           16         32
      Max. replication pairs per source volume  4          8
      Max. replication pairs per system         64         256
  • Data Service Details (continued)

    Function                                    Remote Replication
      Max. replication source volumes           16
      Max. replication pairs per source volume  4
      Max. replication pairs per system         64
  • Non-disruptive LUN expansion
    • EonStor (legacy): to expand partition P1 (data1) when P2 (data2) follows it, copy data2 to a new partition P3, delete P2, add drives, then grow P1. Downtime!
    • EonStor DS: add drives and expand P1 in place; P2 (data2) stays untouched, with no downtime
  • Questions?