24. Export a clone of a running VM
• Point-in-time image of a running VM
exported to an alternate location
• Useful for troubleshooting VM
without downtime for primary VM
Export from an existing checkpoint
• Export a full cloned virtual machine
from a point-in-time, existing checkpoint
of a virtual machine
• Checkpoints automatically merged into
single virtual disk
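The checkpoint merge described above can be sketched in miniature. This is a hypothetical illustration, not Hyper-V's actual implementation: a real export collapses a chain of .avhdx differencing disks into a single .vhdx, whereas here each disk is modeled as a simple mapping of block number to data, with newer checkpoints overriding older blocks.

```python
# Minimal sketch of collapsing a checkpoint chain into one disk image.
# Each "disk" is a dict of block number -> data; this models how later
# differencing disks override earlier blocks during an export merge.

def merge_checkpoints(base, diffs):
    """Collapse a base disk plus ordered differencing disks into one disk."""
    merged = dict(base)
    for diff in diffs:       # apply checkpoints oldest -> newest
        merged.update(diff)  # blocks changed in a newer checkpoint win
    return merged
```

The key property is that the exported clone contains a single flat disk: the consumer of the merged result no longer needs the checkpoint chain.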
Duplication of a Virtual
Machine whilst Running
[Slide diagram: VM1, VM2]
[CLICK] Hot data is data that changes frequently and is stored on the faster, but more expensive, solid-state drives. All data starts as hot data. [CLICK] Cold data is data that changes infrequently and is stored on the slower, but cheaper, hard disk drives.
[CLICK] If cold data becomes hot, it is automatically moved to the solid-state drives, and if [CLICK] hot data becomes cold, it is moved to the hard disk drives.
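The hot/cold movement just described can be sketched as a small simulation. This is an illustrative model only: the class name, the per-period access counter, and the threshold are all invented here, and the real Storage Spaces tiering engine uses its own heat map and a scheduled optimization job rather than this logic.

```python
# Hypothetical sketch of storage tiering: track access heat per slab of
# data and periodically move slabs between the SSD and HDD tiers.
# The threshold and data structures are illustrative, not the real algorithm.

HOT_THRESHOLD = 3  # accesses per period before a slab counts as "hot"

class TieredStore:
    def __init__(self):
        self.ssd = set()   # fast tier (slab ids)
        self.hdd = set()   # capacity tier (slab ids)
        self.heat = {}     # access count per slab for the current period

    def write(self, slab_id):
        # All data starts as hot data, i.e. on the SSD tier.
        self.ssd.add(slab_id)
        self.heat[slab_id] = 0

    def read(self, slab_id):
        self.heat[slab_id] = self.heat.get(slab_id, 0) + 1

    def optimize(self):
        # Periodic job: promote hot slabs to SSD, demote cold slabs to HDD.
        for slab_id, count in self.heat.items():
            if count >= HOT_THRESHOLD and slab_id in self.hdd:
                self.hdd.remove(slab_id)
                self.ssd.add(slab_id)
            elif count < HOT_THRESHOLD and slab_id in self.ssd:
                self.ssd.remove(slab_id)
                self.hdd.add(slab_id)
        self.heat = {s: 0 for s in self.heat}  # reset for the next period
```

The point of the sketch is the direction of movement: placement follows observed access frequency, not where the data was first written.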
Data deduplication is a new storage efficiency feature available with Windows Server 2012 that helps address the ever-growing demand for file storage. Instead of expanding the storage used to host the data, the amount of space used by that data is reduced through the use of variable-size chunking and compression. What this means is that Windows will automatically scan through your disks, identify duplicate chunks in the data you have stored, and store these chunks only once. Since only one copy of duplicate data is stored, this not only lets you optimize your existing storage infrastructure, it also translates into greater savings by postponing the need to purchase storage upgrades and extending the lifespan of current storage investments.
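The chunk-and-store-once idea can be sketched as follows. This is a simplified model, not the Windows implementation: real deduplication uses variable-size chunks (driven by content, typically tens of kilobytes) plus compression, while this sketch uses small fixed-size chunks and no compression so the mechanism stays visible.

```python
import hashlib

CHUNK_SIZE = 8  # bytes; real dedup uses much larger, variable-size chunks

class DedupStore:
    def __init__(self):
        self.chunks = {}  # chunk hash -> chunk bytes, each stored only once
        self.files = {}   # file name -> ordered list of chunk hashes

    def put(self, name, data):
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # duplicate chunks stored once
            hashes.append(h)
        self.files[name] = hashes

    def get(self, name):
        # Rebuild the file by stitching its chunks back together.
        return b"".join(self.chunks[h] for h in self.files[name])

    def stored_bytes(self):
        return sum(len(c) for c in self.chunks.values())
```

A file full of repeated content collapses to a handful of unique chunks plus a list of references, which is where the space savings come from.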
The disk space savings we have seen with Data Dedup during testing, both internally and by ESG Lab, have been phenomenal. Data deduplication can deliver storage savings of 25-60% for general file shares and up to 98% for OS VHDs. This is far above what was possible with Single Instance Storage (SIS) or NTFS compression.
Data deduplication also throttles CPU and memory usage to allow for implementation on large volumes without impacting server performance. Furthermore, compression routine run times can be scheduled for off-peak times to reduce any impact those operations might have on data access.
Reliability and data integrity aren’t problems for data deduplication, thanks to redundancy for metadata and the most frequently referenced data chunks, which helps prevent data loss due to unexpected power outages. Checksums, along with data integrity and consistency checks, also help prevent corruption on volumes configured to use data deduplication.
Not for:
Live VMs
SQL DBs
ReFS file shares
Client machines
Boot data
Cluster shared volumes
Offloaded data transfer (ODX) in Windows Server 2012 R2 Preview enables you to accomplish more with your existing external storage arrays by letting you quickly move large files and virtual machines directly between storage arrays, which reduces host CPU and network resource consumption.
Offloaded Data Transfer (ODX) support is a feature of the storage stack of Hyper‑V in Windows Server 2012 R2 Preview. ODX, when used with offload-capable SAN storage hardware, lets a storage device perform a file copy operation without the main processor of the Hyper‑V host actually reading the content from one storage place and writing it to another.
ODX uses a token-based mechanism for reading and writing data within or between intelligent storage arrays. Instead of routing the data through the host, a small token is copied between the source and destination. The token simply serves as a point-in-time representation of the data. As an example, when you copy a file or migrate a virtual machine between storage locations (either within or between storage arrays), a token that represents the virtual machine file is copied, which removes the need to copy the underlying data through the servers. In a token-based copy operation, the steps are as follows (see the following figure):
<Click>
A user initiates a file copy or move in Windows Explorer, a command-line interface, or a virtual machine migration.
<Click>
Windows Server automatically translates this transfer request into an ODX (if supported by the storage array) and receives a token representation of the data.
<Click>
The token is copied between the source and destination systems.
<Click>
The token is delivered to the storage array.
<Click>
The storage array performs the copy internally and returns progress status.
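The five steps above can be sketched as a small simulation. This is a conceptual model only: the class and method names are invented for illustration, and a real ODX-capable array moves the blocks internally via SCSI offload commands, while the host only exchanges the token.

```python
import uuid

# Hypothetical sketch of the token-based ODX copy flow described above.
# The host never touches the data; it only shuttles a small token that is
# a point-in-time representation of the source data.

class StorageArray:
    def __init__(self):
        self.blocks = {}  # path -> data stored on the array
        self.tokens = {}  # token -> point-in-time snapshot of source data

    def offload_read(self, src_path):
        # The array returns a token representing the data, not the data itself.
        token = uuid.uuid4().hex
        self.tokens[token] = self.blocks[src_path]
        return token

    def offload_write(self, token, dst_path):
        # The array performs the copy internally, driven by the token.
        self.blocks[dst_path] = self.tokens.pop(token)

def odx_copy(array, src, dst):
    # From the host's point of view the whole transfer is two tiny
    # token exchanges, regardless of how large the underlying file is.
    token = array.offload_read(src)
    array.offload_write(token, dst)
```

Note that the cost on the host side is independent of the file size, which is exactly why ODX matters for multi-gigabyte VHDX copies.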
ODX is especially significant in the cloud space when you must provision new virtual machines from virtual machine template libraries or when virtual hard disk operations are triggered that require large blocks of data to be copied, as in virtual hard disk merges, storage migration, and live migration. These copy operations are handled by the storage device, which must be able to perform offloads (such as an offload-capable iSCSI or Fibre Channel SAN, or a file server based on Windows Server 2012 R2 Preview), freeing up the Hyper‑V host processors to carry more virtual machine workloads.
As you can imagine, having an ODX-compliant array provides a wide range of benefits:
ODX frees up the main processor to handle virtual machine workloads and lets you achieve native-like performance when your virtual machines read from and write to storage.
ODX greatly reduces time to copy large amounts of data.
With ODX, copy operations don’t use processor time.
Virtualized workloads now operate as efficiently as they would in a non-virtualized environment.