Press "Enter" to skip to content

VCAP5-DCA Objective 1.1 : Implement and Manage Complex Storage Solutions

Knowledge

  • Identify RAID levels
  • Identify supported HBA types
  • Identify virtual disk format types

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O

Use case

VMware DirectPath I/O is a feature that takes advantage of Intel VT-d and AMD IOMMU (AMD-Vi). It gives a virtual machine the ability to directly access hardware on the host.

Basically, DirectPath I/O frees up host CPU cycles no matter what you're connecting directly to (networking is generally favoured for DPIO), as it bypasses the vSphere network virtualisation layer.

The best use case in my mind, and the one that gets used a lot, is a virtual machine with a heavy network workload, such as a high-volume web server. By using DPIO the guest OS uses its own drivers to control the network card, giving it access to hardware features which are not currently available to ESXi, so things like TCP offloading and SSL offloading will be fully functional, which lowers CPU usage and boosts performance.

There are certain downsides to having DPIO enabled; the following features become unavailable:

  • vMotion
  • Hot adding and removing virtual devices
  • Suspend and resume
  • Record and replay
  • Fault Tolerance
  • HA
  • DRS (a VM can be part of a cluster but cannot be migrated between hosts)
  • Snapshots

Configure

From the Configuration tab on the ESXi host, choose Advanced Settings under Hardware and select "Configure Passthrough".

Note: if this is not available, either it is not enabled in the server's BIOS or your hardware does not support it.

Select the device you want a virtual machine to connect directly to.

 

To connect a virtual machine to the chosen device you need to:

    • Power off the virtual machine
    • Edit the virtual machine's hardware and add a PCI Device
    • Select the device you configured for DPIO

Done.
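
As a side note, if you want to identify the host's PCI devices (and their vendor/device IDs) from the shell rather than the client, the following command should list them on ESXi 5.x:

# esxcli hardware pci list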

  • Determine requirements for and configure NPIV

First off, what is NPIV? It stands for N-Port ID Virtualisation; in layman's terms it allows you to have multiple WWN addresses sharing a single HBA port.

Some of you out there may already be using this technology: if anyone is using HP Virtual Connect Flex-10 or FlexFabric, your upstream switches would need to be configured for NPIV, as there are multiple WWNs from the hosts going over single uplink ports.

But the NPIV we are talking about here is a configuration of the actual virtual machine. This allows the VM to have its own WWN; for this to work the host must have an HBA capable of supporting NPIV, as must the FC switches.

To be honest I don't see why you would activate this feature and I find it hard to justify a use case. The virtual machine requires the use of RDMs and loses all the benefits of VMFS, and you can only vMotion a virtual machine using NPIV as long as the RDM is in the same folder as its VMX file. I guess at best this means you could assign storage directly to the guest and have better visibility of the virtual server at the array level, making it easier to track its traffic.

Configure

1) Edit settings and assign an RDM in physical mode to the virtual machine.

2) Edit settings again, go to the Options tab, select "Fibre Channel NPIV" (third from the bottom) and select "Generate new WWNs" on the right-hand side.

This will generate the new WWNs for that machine.

3) You will need to configure zoning on the FC switches and configure the array to use and see the new WWNs for this virtual machine (I'm not a storage guy).
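
For reference (and as far as I can tell, so treat this as an assumption rather than gospel), the generated WWN assignments end up stored in the virtual machine's .vmx file as wwnn and wwpn entries, something along these lines, where the values shown are made-up examples:

wwnn = "28:3f:00:0c:29:00:00:aa"
wwpn = "28:3f:00:0c:29:00:00:ab"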

  • Determine appropriate RAID level for various Virtual Machine workloads

This really comes down to what array your storage is sitting on; depending on the array, the write cache can absorb the write penalty caused by, say, RAID 5.

It also depends on things like how many spindles the volume is spread over; all of this comes into play.

But basic rule of thumb is:

    • RAID 1 = fast reads and redundancy, but you lose half your space
    • RAID 0 = fast reads and writes, no redundancy, no loss of space
    • RAID 1+0 = fast reads and writes with redundancy, but you lose half your space
    • RAID 5 = the RAID of choice; a good balance of read/write performance and redundancy, but it suffers a write penalty which can be absorbed by the array. You lose one disk's worth of space

So for a database server or a disk-intensive workload you would generally look for a RAID 1+0 type of configuration, and for an average workload RAID 5 is generally used.
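
To put some rough numbers on that, here is a simple sizing sketch using the commonly quoted write penalties (RAID 1/1+0 = 2 back-end writes per front-end write, RAID 5 = 4); the disk and workload figures are just example assumptions:

Workload: 1,000 IOPS at 70% read / 30% write
RAID 1+0 back-end IOPS = 700 + (300 x 2) = 1,300
RAID 5 back-end IOPS   = 700 + (300 x 4) = 1,900

At roughly 150 IOPS per 15k spindle that is about 9 disks for RAID 1+0 versus about 13 for RAID 5, before any array write cache helps out.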

  • Apply VMware storage best practices

Applying VMware's storage best practices is straightforward; the official guidance is here:

http://www.vmware.com/technical-resources/virtual-storage/best-practices.html

4 things to remember:

    • Configure and size storage resources for optimal I/O performance first, then for storage capacity.
    • Aggregate application I/O requirements for the environment and size them accordingly.
    • Base your storage choices on your I/O workload.
    • Remember that pooling storage resources increases utilization and simplifies management, but can lead to contention.
  • Understand use cases for Raw Device Mapping

When would you use an RDM over a VMDK disk on a VMFS volume?

The first thing that comes to mind is Microsoft clustering: RDMs are a requirement for cluster-across-boxes configurations or clusters of physical and virtual machines.

Using RDMs gives the virtual machine direct access to that LUN and allows you to use array-based management tools from inside the virtual machine.

There are two types of RDM: physical and virtual compatibility mode.

Physical Mode:

    • Allows the virtual machine direct access to the SCSI device
    • vMotion is still supported, but not while the RDM is shared for an MSCS cluster across hosts
    • You lose the ability to take virtual machine snapshots
    • Allows volumes greater than 2 TB (in vSphere 5)

Virtual Mode:

    • Looks and acts like a normal VMDK disk but allows direct reads and writes to the device
    • Limited to 2 TB volumes

Never use an RDM if the only reason is performance; there is very little, if any, performance difference between RDM and VMFS.
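
Although RDMs are normally created through the Add Hardware wizard, you can also create the mapping file yourself from the ESXi shell with vmkfstools; here is a rough sketch, where the device ID, datastore and file names are placeholders:

# vmkfstools -z /vmfs/devices/disks/<naa.id> /vmfs/volumes/<datastore>/<vm>/<vm>-rdmp.vmdk    (physical compatibility mode)
# vmkfstools -r /vmfs/devices/disks/<naa.id> /vmfs/volumes/<datastore>/<vm>/<vm>-rdm.vmdk     (virtual compatibility mode)

You then attach the resulting .vmdk to the virtual machine as an existing disk.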

  • Configure vCenter Server storage filters

vCenter Server storage filters should only be changed if you are troubleshooting, or if you made a mistake with formatting and need to go back and re-create the VMFS or RDM. For example, setting config.vpxd.filter.vmfsFilter to false will let you present a LUN that is already in use as a VMFS volume. If you run the VMware Health Analyzer it will actually mark you down for disabling them.

The following storage filters are used:

    • VMFS Filter – key = config.vpxd.filter.vmfsFilter
    • RDM Filter – key = config.vpxd.filter.rdmFilter
    • Host Rescan Filter – key = config.vpxd.filter.hostRescanFilter
    • Same hosts and transports filter – key = config.vpxd.filter.SameHostAndTransportsFilter

These are all enabled by default; to disable one, follow the steps below:

    1. Click Administration -> vCenter Server Settings
    2. Click Advanced Settings
    3. If the key does not exist, add it with the value false
    4. Click OK
(Screenshot: vCenter Advanced Settings)
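
For example, to temporarily allow a LUN that already holds a VMFS datastore to be presented again, the key/value pair to add would be the following (remember to set it back to true once you are done):

config.vpxd.filter.vmfsFilter = false
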
  • Understand and apply VMFS re-signaturing

vSphere Client

  1. Log in to the vSphere Client and select the server from the inventory panel.
  2. Click the Configuration tab and click Storage in the Hardware panel.
  3. Click Add Storage.
  4. Select the Disk/LUN storage type.
  5. Click Next.
  6. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column.

    Note: The name in the VMFS Label column indicates that the LUN contains a copy of an existing VMFS datastore.
  7. Click Next.
  8. Under Mount Options, these options are displayed:
  • Keep Existing Signature – persistently mount the LUN (for example, mount the LUN across reboots)
  • Assign a New Signature – resignature the LUN
  • Format the disk – reformat the LUN. Note: this option will delete any existing data on the LUN.
  9. Select the desired option for your volume.
  10. In the Ready to Complete page, review the datastore configuration information.
  11. Click Finish.

Command line

You need to use the esxcli command. It can be used in this way:

  • To list the volumes detected as snapshots, run this command:

# esxcli storage vmfs snapshot list

The output appears similar to:

49d22e2e-996a0dea-b555-001f2960aed8
   Volume Name: VMFS_1
   VMFS UUID: 4e26f26a-9fe2664c-c9c7-000c2988e4dd
   Can mount: true
   Reason for un-mountability:
   Can resignature: true
   Reason for non-resignaturability:
   Unresolved Extent Count: 1

  • To mount a snapshot/replica LUN so that it is persistent across reboots:

# esxcli storage vmfs snapshot mount -l <label> | -u <uuid>

For example:

# esxcli storage vmfs snapshot mount -l "VMFS_1"
# esxcli storage vmfs snapshot mount -u "49d22e2e-996a0dea-b555-001f2960aed8"

  • To mount a snapshot/replica LUN so that it is NOT persistent across reboots:

# esxcli storage vmfs snapshot mount -n -l <label> | -u <uuid>

For example:

# esxcli storage vmfs snapshot mount -n -l "VMFS_1"
# esxcli storage vmfs snapshot mount -n -u "49d22e2e-996a0dea-b555-001f2960aed8"

  • To resignature a snapshot/replica LUN:

# esxcli storage vmfs snapshot resignature -l <label> | -u <uuid>

For example:

# esxcli storage vmfs snapshot resignature -l "VMFS_1"
# esxcli storage vmfs snapshot resignature -u "49d22e2e-996a0dea-b555-001f2960aed8"

After a resignature, the datastore is mounted with a new label of the form snap-<ID>-<original label>, and any virtual machines that lived on it need to be re-registered.
  • Understand and apply LUN masking using PSA-related commands
  • List all the claimrules currently on the ESXi host:

# esxcli storage core claimrule list

There are two MASK_PATH entries: one of class runtime and the other of class file. The runtime class shows the rules currently loaded into the PSA, while the file class is a reference to the rules defined in /etc/vmware/esx.conf. These are normally identical, but they can differ if you are in the process of modifying /etc/vmware/esx.conf.

  • Add a rule to hide the LUN with the command

# esxcli storage core claimrule add --rule <number> -t location -A <hba_adapter> -C <channel> -T <target> -L <lun> -P MASK_PATH

Note – use the esxcfg-mpath -b and esxcfg-scsidevs -l commands to identify disk and LUN information
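
As an illustration, masking LUN 4 behind adapter vmhba33 might look something like this (the rule number, adapter, channel, target and LUN values are made-up examples for a lab):

# esxcli storage core claimrule add --rule 500 -t location -A vmhba33 -C 0 -T 0 -L 4 -P MASK_PATH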

  • Verify that the rule has taken with the command:

# esxcli storage core claimrule list
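
  • Load the claim rules so that the new rule is pushed from the file class into the runtime class (until this is done it only shows up with the file class):

# esxcli storage core claimrule load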

  • Re-examine your claim rules and verify that you can now see both the file and runtime class for the new rule:

# esxcli storage core claimrule list

  • Unclaim all paths to a device and then run the loaded claimrules on each of the paths to reclaim them:

# esxcli storage core claiming reclaim -d <naa.id>

  • Verify that the masked device is no longer used by the ESXi host:

# esxcfg-scsidevs -m

The masked datastore does not appear in the list

  • To verify that a masked LUN is no longer an active device:

# esxcfg-mpath -L | grep <naa.id>

Empty output indicates that the LUN is not active

  • Analyze I/O workloads to determine storage performance requirements
  • Identify and tag SSD devices
vSphere 5 has a new feature called "Host Cache", which ties into the memory overcommit techniques: if memory compression doesn't lower memory usage enough, ESXi starts swapping out to disk, and if you have an SSD you can enable swap to host cache, which is essentially regular swapping but to SSD, with a far lower performance impact.
Since a lot of people won't have SSD drives in the hosts of a home lab, there is a way to mark a drive as an SSD device.
To identify an SSD drive there is a "Drive Type" column when looking at storage, like the image below.
(Screenshot: Drive Type column in the storage view)
Now, how to tag a drive as an SSD: ESXi should automatically detect whether a drive is an SSD, but in some cases, due to drivers, it may not recognise it as one.
Here is an iSCSI LUN that I am going to use; notice how it shows as non-SSD.
Below is how you can tag a drive as an SSD:
  • Open an SSH connection (or use the ESXi Shell) and type the following (<device name> in my case was eui.a2522cf953a481da):
esxcli storage core device list --device=<device name>
  • You can now see that the device is reported as a non-SSD, so it is time to add a SATP claim rule to tag it:
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device=<device name> --option=enable_ssd
  • Now reclaim the device so that the newly created rule is applied:
esxcli storage core claiming reclaim -d <device name>
  • If you now re-run the first command, the device should be reported with "Is SSD: true":
esxcli storage core device list --device=<device name>
Under Host Cache Configuration I can now see the SSD storage, and can open its properties and allocate space for host cache.
  • Administer hardware acceleration for VAAI
VAAI was introduced in vSphere 4.1. It allows ESXi to offload storage-related tasks to the storage array, such as block zeroing, hardware-assisted locking and full copy, which speeds up tasks like Storage vMotion, cloning, deploying from templates and space reclamation when using thin provisioned disks.
One of the main new features in vSphere 5 that people are excited about is hardware acceleration for NFS storage.
VAAI is enabled by default, and using the client you can configure the VAAI primitives under the host's Configuration -> Advanced Settings: 1 is enabled, 0 is disabled.
DataMover.HardwareAcceleratedMove
DataMover.HardwareAcceleratedInit
VMFS3.HardwareAcceleratedLocking
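
If you would rather check or toggle these from the shell, something along these lines works with esxcli on ESXi 5.x (shown for the first setting; the other two follow the same pattern, with -i 0 disabling and -i 1 enabling):

# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1
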
Now, to see if the attached storage supports VAAI, use the following command:
esxcli storage core device vaai status get

You will get something similar to the output below; as you can see, my home lab storage doesn't support much 🙂

eui.a2522cf953a481da
VAAI Plugin Name:
ATS Status: unsupported
Clone Status: unsupported
Zero Status: unsupported
Delete Status: unsupported

  • Configure and administer profile-based storage
Profile Driven Storage is something most people might not use, but it is a way to describe storage capabilities, either detected from the array or defined by you. For example, you can define gold/silver/bronze storage profiles.
1) Go to Home then VM Storage Profiles
2) Once on the VM Storage Profiles page, click Manage Storage Capabilities.
3) Select Add to add your user-defined storage capability. This is where you add something descriptive about the storage you are using, or want to use, in a profile. Below I have added storage which I want to be "Gold" level storage, running on an HP P400 RAID controller with StarWind iSCSI presenting the volume.
Select OK and you can now see your user-defined storage capability.
4) Now it's time to assign the storage capability to a datastore. Go to the Datastores view and right-click the datastore you want to assign a capability to.
Select the previously created capability and hit OK.
5) Now it's time to create the VM Storage Profile. Select Create VM Storage Profile from the main VM Storage Profiles page.
6) Name it and give it a description.
7) Hit Next and choose a storage capability; the one we created previously should be visible. Hit Next and finish off the creation of the profile.
8) Time to add the profile to a VM. Open the virtual machine settings, select the Profiles tab, select the profile we created earlier and remember to select "Propagate to disks" to actually apply it.
You can now check compliance of the VM to see if it complies with your profile. Do this from the main VM Storage Profiles page: select the profile and click on the Virtual Machines tab.
  • Prepare storage for maintenance
  • Upgrade VMware storage infrastructure

You should not have to upgrade the storage infrastructure too often, generally only when a new file system format comes out, such as VMFS-3 to VMFS-5. Select the datastore you want to upgrade from the Datastores view, select the Configuration tab, then select the Upgrade to VMFS-5 option on the right-hand side of the client.

 

This can happen live and there is no need to move virtual machines off the datastore.
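
For reference, I believe the same upgrade can also be kicked off from the shell with esxcli; a sketch, assuming a datastore label of "Datastore1" (and, as above, virtual machines can keep running on it):

# esxcli storage vmfs upgrade -l "Datastore1"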
