Thursday, April 1, 2010

VMware Consolidated Backup - key information from the IBM Redbook

Key information about VCB from the IBM Redbook:

Figure: Overview of a VCB proxied environment in the sample configuration used to demonstrate VCB features with Tivoli Storage Manager.



File-level backup of VMware guests
File-level backup allows the file systems of Windows guests (the only guest type supported at the time of writing) in a VMware ESX server to be presented across the storage network to a separate physical (non-virtualized) Windows 2003 system used specifically for backup. This system is referred to as the proxy node.


Figure: Root of a guest’s C: drive, mounted on the proxy node during a snapshot operation.

Although from this view of Windows Explorer these folders look like local files, they are actually mounted as a virtual mount point on the proxy. This means that they are not copied and do not occupy any disk space on the proxy.
After backup to Tivoli Storage Manager, the filespaces created are associated with the actual guest’s nodename (not the proxy nodename), and the Tivoli Storage Manager database therefore records and expires these files individually. Files and other objects appear as belonging to the Tivoli Storage Manager node registered for that guest (not to the proxy), so from the Tivoli Storage Manager server perspective, the guests each look like they have been backed up from a locally installed client on the guest. A corollary is that individual files from a particular guest can, if desired, be restored by a backup-archive client on that guest.

Full backup of VMware guests

Full backup of VMware guests means that the guest’s disk files are backed up as a single entity; the entire image can then be restored to VMware in one operation.

Even though it is an image-type backup, full backup creates a small number of large objects rather than one enormous object. It also includes the various log files and settings files that accompany the guest. The images are sliced into manageably sized chunks of approximately 2 GB (by default).
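The slicing behaviour can be illustrated with a short sketch. This is a conceptual illustration only, not the actual VCB code; the ~2 GB figure is the documented default slice size.

```python
# Conceptual illustration of how a large guest image is sliced into
# fixed-size chunks (VCB's default slice size is roughly 2 GB).
CHUNK_SIZE = 2 * 1024 ** 3  # ~2 GB

def split_image(path, chunk_size=CHUNK_SIZE):
    """Yield (index, data) slices of the image file at `path`."""
    with open(path, "rb") as img:
        index = 0
        while True:
            chunk = img.read(chunk_size)
            if not chunk:
                break
            yield index, chunk
            index += 1
```

A 9 GB image is thus stored as five objects of at most 2 GB each, which are far easier to move, store, and expire than a single 9 GB object.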

Full backup works well with Tivoli Storage Manager adaptive differencing (subfile backup) technology, which eliminates much of the overhead of taking full images at the client side, before they ever reach the Tivoli Storage Manager server. This makes the backup very efficient, both in terms of the client processing required and of overall storage utilization on the Tivoli Storage Manager server.
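The idea behind subfile backup can be sketched as a block-level comparison against the previous base copy. This is a simplified illustration of the concept, not Tivoli Storage Manager's actual adaptive-differencing format; the block size is an arbitrary illustrative value.

```python
# Simplified sketch of delta (subfile) backup: only blocks that differ
# from the previously backed-up base copy need to be sent to the server.
BLOCK = 4096  # comparison granularity (illustrative value)

def changed_blocks(base, current, block=BLOCK):
    """Return {block_index: data} for blocks of `current` that differ from `base`."""
    delta = {}
    for offset in range(0, len(current), block):
        index = offset // block
        if current[offset:offset + block] != base[offset:offset + block]:
            delta[index] = current[offset:offset + block]
    return delta
```

If only one 4 KB block of a 2 GB slice has changed, only that block crosses the network, which is where the client-side saving comes from.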

Planning for VCB with Tivoli Storage Manager V5.5
As always, there are a number of important planning considerations for VCB. The principal items to consider are:
Is there a VirtualCenter (VC) server, or will the Tivoli Storage Manager client connect (via the VCB framework) to the ESX servers individually?
Typically, for a VMware farm of more than a few instances of ESX server, having a VC server makes the solution easier to manage. It is also a useful tool for problem diagnosis.
Our example uses a VC server called KCW09B.
Is LAN-free backup required and, if so, will it be effective?
LAN-free backup involves backing up objects straight to tape. When dealing with many thousands of small files (in a file-level backup), it may be more appropriate to back these up to a Tivoli Storage Manager diskpool, which is then migrated to tape.
The storage network infrastructure should be sufficient to provide the speeds required.
As we have said, the proxy node must have visibility to the external disks containing the VMware guest images. However, using a multipath driver such as SDD or RDAC to load-balance across multiple HBAs is not supported. It may therefore be useful to invest in a single, faster HBA rather than in multiple slower ones; this depends on the speed of the disk being backed up and the backup window available. The storage network design itself must also be up to the job (for example, non-blocking, and, where fanout is applied, with enough bandwidth to meet the backup window).
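Whether a single faster HBA is sufficient comes down to simple arithmetic on the data volume and the backup window. The figures below are our own illustration, not values from the Redbook:

```python
def required_mb_per_s(data_gb, window_hours):
    """Average throughput (MB/s) needed to move `data_gb` in `window_hours`."""
    return data_gb * 1024 / (window_hours * 3600)

# Example: 2 TB of guest images in a 6-hour window needs ~97 MB/s sustained,
# which comfortably fits a single 4 Gb/s FC HBA but would consume most of
# the practical bandwidth of a 1 Gb/s link.
```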
Security controls of the backup proxy machine are important.
Because VCB file-level backup presents the NTFS file systems of the guests from the ESX server to the proxy node, it effectively bypasses the security controls on each guest operating system. The proxy node should therefore be secured against unauthorized access according to enterprise policy and practice.

Hardware infrastructure guidance

For the proxy node
The proxy node will move all the backup data either out onto the network, or via the SAN straight to tape. The proxy node must be running Windows 2003 SP1 with an HBA that is supported for access to the SAN disk where the guest images are installed. The proxy node must have visibility to the SAN disk. In our case we created a mapping on the SVC between the proxy node and the virtual disk.



We strongly recommend separating SAN disk and tape traffic on the proxy node onto dedicated HBAs. The proxy should also be a powerful enough system to cope with the throughput required, often hundreds of MB per second. A presentation from VMware is available at:
http://communities.vmware.com/docs/DOC-1793.pdf

This includes excellent recommendations for designing a VCB solution. A typical minimum configuration would be a dual core CPU and 2 GB memory.
Performing full VM backups and restores requires actual disk space on the proxy node. The actual amount required varies according to the number of simultaneous full VM backups to be performed and the size of the images generated. You should plan on having storage space sufficient to keep the largest guest, plus some extra, and to increase this if you will make multiple simultaneous full snapshots. If you will only perform file-level backup, disk space is not required on the proxy node, since the guest file systems are attached as virtual mount points. We strongly recommend pre-production prototyping of VCB solutions in order to more accurately predict resource requirements for your particular environment.
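As a rough planning rule, staging space scales with the size of the largest guest and the number of simultaneous full backups. This is our own illustration of the sizing guidance above; the 20% headroom figure is an assumption, not a Redbook value:

```python
def proxy_staging_gb(largest_guest_gb, simultaneous=1, headroom=0.2):
    """Rough staging-disk estimate for full VM backups on the proxy node:
    the largest guest image plus headroom (assumed 20%), per simultaneous backup."""
    return largest_guest_gb * (1 + headroom) * simultaneous
```

For example, a 100 GB largest guest with two simultaneous full backups suggests roughly 240 GB of staging disk; file-level-only configurations need none, as noted above.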

Note that at the time of writing, multipathing software such as RDAC or SDD is not supported on the proxy node. The proxy node must also not be allowed to write a disk label on the SAN disk, as this could corrupt the VM images.

Source: IBM Redbook, http://www.redbooks.ibm.com/redbooks/pdfs/sg247447.pdf
