VCAP5-DCA Objective 1.1 – Implement and Manage Complex Storage Solutions – Knowledge

Identify RAID Levels

I'm not sure how VMware will test RAID levels in the VCAP5-DCA exam, but it is important to understand the different RAID levels, their performance implications, and how many disk failures each can tolerate. If you are attempting the DCD exam, this will certainly be one of the key factors in selecting the right LUNs for a requirement.

Identify supported HBA types

ESXi 5 supports the following HBA types:

  • Hardware iSCSI
  • FC
  • FCoE

And the following hardware can be used for storage purposes:

  • Standard NIC for software iSCSI

As long as the NIC is supported by ESXi, you can enable a software iSCSI initiator on a VMkernel port, but you can only enable one software iSCSI initiator per host.
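As a rough sketch from the ESXi Shell (the vmhba name and target address below are placeholders for illustration, not values from any particular host), enabling the software iSCSI initiator with esxcli looks like this:

```shell
# Enable the single software iSCSI initiator on this host
esxcli iscsi software set --enabled=true

# Confirm it is enabled and find the vmhba name assigned to it
esxcli iscsi software get
esxcli iscsi adapter list

# Point dynamic discovery at an iSCSI target (adapter and address are placeholders)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50:3260
```

After a rescan, any LUNs presented to the initiator's IQN should appear under the new vmhba.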
  • NIC with TCP/IP offload engine (dependent iSCSI HBA)

A NIC with a TCP/IP offload engine allows the VMkernel to offload TCP/IP tasks to the NIC, so the VMkernel doesn't need to use the main CPU for storage processing. The configuration of a dependent iSCSI NIC is done on the VMware side (using the vSphere Client or CLI), just as for the software initiator.
There is a good blog post explaining the process of using a dependent iSCSI NIC at http://www.virtuallifestyle.nl/2010/08/dependent-hardware-iscsi/; I do recommend reading it.
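A dependent iSCSI HBA also needs a VMkernel port bound to it before it can carry traffic. A minimal sketch, assuming the HBA shows up as vmhba33 and the iSCSI VMkernel port is vmk1 (both placeholders):

```shell
# List iSCSI adapters to find the dependent HBA's vmhba name
esxcli iscsi adapter list

# Bind the VMkernel port to the dependent iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Verify the binding
esxcli iscsi networkportal list --adapter=vmhba33
```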

  • 10Gbps NIC with hardware FCoE support (CNA) for FC access via the NIC

FCoE has been supported since ESXi 4. FCoE utilises a 10Gb NIC to carry both network traffic and FC storage traffic via a single CNA (Converged Network Adapter).

  • 10Gbps NIC with software FCoE support

This is a new addition in ESXi 5. You can now enable software FCoE on supported 10Gb NICs. You get the same access as with a hardware-based CNA, but the configuration is done on the VMware side. There are additional configuration steps to activate software FCoE.
First, you need to create a VMkernel port for FCoE; this port will be used by the software FCoE adapter. One condition at the time of writing: you must have a dedicated standard vSwitch for FCoE to work. Once you have the standard vSwitch, you create the software FCoE adapter just as you do for a software iSCSI adapter. It automatically detects the VLAN to be used for FC and the CoS value (802.1p); this information is pushed down by the attached physical switch.
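From the CLI, activating software FCoE is short; a sketch assuming the capable NIC is vmnic2 (a placeholder):

```shell
# Show 10Gb NICs that are software FCoE capable
esxcli fcoe nic list

# Activate software FCoE on one of them
esxcli fcoe nic discover --nic-name=vmnic2

# The new software FCoE adapter should now appear
esxcli fcoe adapter list
```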
Unfortunately I haven't had a chance to use software FCoE yet, but apparently VMware adapted this from Open-FCoE, an open-source initiator developed by Intel, so if you look for information about Open-FCoE you may find more detail. Also, not many switches and NICs support this feature at the time of writing.

Identify virtual disk format types

There are three types of virtual disk formats:

  • Eager-zeroed thick provision

A thick-provisioned disk reserves the full disk allocation on the physical storage. For example, if you allocate a 10GB virtual hard disk, 10GB of physical storage is used for that virtual hard disk whether it contains any data or not. With eager zeroing, the VMkernel not only pre-allocates the space for the virtual hard disk but also zeroes out every block in advance, leaving clean, contiguous blocks. This improves first-write performance of the vmdk, as all blocks are already zeroed and ready to be written. The drawback is that provisioning an eager-zeroed thick disk takes significantly longer, since every block has to be processed; this can also put an excess workload on the VMkernel, and hence on the ESXi server's CPU. The zeroing of an eager-zeroed disk can be offloaded to the storage subsystem if it supports VAAI. An eager-zeroed thick disk is also required for some vSphere features such as FT. (I haven't come across any other feature that requires eager zeroing; if you know of one, please let me know!)
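The disk format is chosen at creation time; a sketch with vmkfstools, using placeholder datastore, VM, and device names:

```shell
# Create a 10GB eager-zeroed thick disk; every block is zeroed up front,
# so this can take a while unless the array offloads it via VAAI
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testvm.vmdk

# Check whether the backing device supports VAAI offload (device ID is a placeholder)
esxcli storage core device vaai status get -d naa.600508b1001c5a1d
```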

  • Lazy-zeroed thick provision

A lazy-zeroed thick disk is similar to an eager-zeroed disk, but it doesn't zero all blocks in advance; each block is zeroed on first write. A lazy-zeroed disk just pre-allocates the space in the physical storage system.
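A lazy-zeroed thick disk is created the same way; `zeroedthick` is the default format on VMFS. In ESXi 5 you can also eager-zero it afterwards, as sketched below with placeholder paths:

```shell
# Create a 10GB lazy-zeroed thick disk (space reserved, blocks zeroed on first write)
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/datastore1/testvm/testvm_1.vmdk

# Later, zero out the remaining blocks to make it eager-zeroed (e.g. for FT)
vmkfstools -k /vmfs/volumes/datastore1/testvm/testvm_1.vmdk
```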

  • Thin provision

A thin-provisioned disk does not allocate physical disk space when you create the virtual hard disk; it only uses physical disk space when you write data to the vmdk. I often classify this as a pay-as-you-go disk. So when you create a 1TB virtual disk, no physical storage space is used (apart from a very tiny amount for the descriptor file) until you start writing to the disk. Thin provisioning is the most cost-efficient disk allocation because it doesn't consume disk space you aren't using. Be very careful if you select thin provisioning: you may end up over-provisioning virtual disks, meaning you can provision more virtual disk space than you actually have in the physical storage.
VMware made a slight modification in vSphere 5 to the behaviour when a datastore is about to run out of space. In previous versions, all VMs using that datastore would be suspended; in version 5, only the VMs that try to write to the datastore are suspended.
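To see the pay-as-you-go behaviour yourself, create a thin disk and compare its provisioned size with the blocks actually consumed; the paths below are placeholders:

```shell
# Create a 1TB thin-provisioned disk; almost nothing is consumed up front
vmkfstools -c 1024G -d thin /vmfs/volumes/datastore1/testvm/testvm_2.vmdk

# ls shows the provisioned size, du shows the blocks actually used
ls -lh /vmfs/volumes/datastore1/testvm/testvm_2-flat.vmdk
du -h  /vmfs/volumes/datastore1/testvm/testvm_2-flat.vmdk

# If needed later, inflate the thin disk to eager-zeroed thick
vmkfstools --inflatedisk /vmfs/volumes/datastore1/testvm/testvm_2.vmdk
```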
