Friday, 31 January 2014

VMware: HP StoreVirtual VSA Part3 - Management Groups Clusters and Volumes

In this third post of the HP StoreVirtual VSA series we will discuss the creation of Management Groups and Clusters, and volume deployment.

These operations can be performed easily using the CMC. On the Getting Started page click Management Groups, Clusters and Volumes Wizard.



Let's create a new Management Group. A Management Group is a logical container that allows the management of one or more HP StoreVirtual VSAs, as well as their physical appliance counterparts, in both single-site and multi-site configurations.



Select one or more HP StoreVirtual VSAs to assign to this Management Group.



Enter the Management Group credentials. These credentials will be used for any management task on any HP StoreVirtual system belonging to this specific Management Group.



Set an NTP server for automatic time synchronization or set time manually.



A good way to refer to Management Groups is by DNS name which, as you know, is not affected by IP address changes.



Set up SMTP to enable notifications from the VSAs. This is an important step since notifications include information about the system's health status and, on the physical counterpart, will inform you about any physical failure that could occur in the HP StoreVirtual array.



Clusters are, similarly to VMware's, groups of VSAs that provide data fault tolerance through particular configurations like Network RAID. In this article we will set up a single-site cluster.



Choose a cluster name, then the storage systems to include in the cluster.



Enter the cluster's virtual IP address. This is the IP address that will be used by vSphere, or any other server/VM, to connect to the storage via iSCSI.
Thanks to the highly scalable HP StoreVirtual VSA architecture, this allows us to dynamically add new VSAs to the cluster without changing anything on the front end: our hosts keep pointing to the same virtual IP regardless of how many HP StoreVirtual VSAs we introduce into the cluster.



After cluster creation, new volumes can be provisioned.



The previous screen deserves a few words: Data Protection Level is a very important feature supported by HP StoreVirtual VSA. If there are two or more VSAs in the cluster, we can achieve better data protection by using Network RAID, which spreads data between different VSAs much as common RAID spreads data across different physical disks within the same array. I will return to this in an upcoming post about multi-VSA clusters.
Reported Size, as the name suggests, is the size of the provisioned volume. Provisioning, as for VMware virtual machine disks, can be thin or thick.
Adaptive Optimization, as mentioned in previous articles, provides the capability of moving frequently accessed data chunks to faster tiers to increase I/O performance. This feature does not come for free: it requires the VSA to be backed by volumes physically placed on both SSDs and hard disks, as well as a dedicated license.



This is the summary; check that everything is set as desired, then press Close.



Let's now assign the previously created volume to VMware ESXi hosts.

Right click Servers -> New Server, or New Server Cluster if the previously created volume needs to be assigned to more than one host.



Enter the host Name, Description and IP Address, then check Allow access via iSCSI and enter the host's IQN, which can be found in vCenter Server under the storage initiator configuration.
Enabling the load balancing feature allows iSCSI sessions to be balanced: all sessions pointing to the previously created virtual IP address will be distributed and managed by different VSAs to prevent bottlenecks and increases in storage response time.
CHAP can also be configured here. As a best practice CHAP should be enabled to prevent unauthorized access to HP StoreVirtual VSA.
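
If you prefer not to click through the vSphere Client, the host's IQN can also be retrieved with PowerCLI. A minimal sketch, assuming the software iSCSI adapter is already enabled on the host and 192.168.116.60 is the ESXi host IP used throughout this series:

 #Show the software iSCSI adapter and its IQN
 Get-VMHostHba -VMHost (Get-VMHost -Name 192.168.116.60) -Type iscsi | Select-Object Device, IScsiName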



Once the server/server cluster has been created, we need to specify which volumes can be accessed by which hosts. Since I've created just VOLUME_TEST, I will assign it to my ESXi host.

Right click VOLUME_TEST, Assign and Unassign Servers.



Select the server/server cluster and choose the proper permissions for the host on the volume.



The next step is to present the HP StoreVirtual VSA to the ESXi hosts as an iSCSI target. This is done by accessing the host's storage configuration and adding either a static or dynamic target (and configuring CHAP, if used) pointing to the HP StoreVirtual VSA virtual IP address created during cluster configuration.
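
For reference, the same ESXi-side steps can also be scripted with PowerCLI. A minimal sketch, assuming the software iSCSI adapter is used, 192.168.116.60 is the ESXi host and 10.10.10.3 is the cluster virtual IP address (both are examples taken from this series, adjust them to your environment):

 #Enable the software iSCSI adapter if it is not enabled yet
 Get-VMHostStorage -VMHost (Get-VMHost -Name 192.168.116.60) | Set-VMHostStorage -SoftwareIScsiEnabled:$true
 #Add the cluster virtual IP address as a dynamic (Send Targets) discovery address
 $vmhba = Get-VMHostHba -VMHost (Get-VMHost -Name 192.168.116.60) -Type iscsi
 New-IScsiHbaTarget -IScsiHba $vmhba -Address 10.10.10.3
 #Rescan HBAs and VMFS datastores to detect the newly presented volume
 Get-VMHost -Name 192.168.116.60 | Get-VMHostStorage -RescanAllHba -RescanVmfs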

Other blog posts in HP StoreVirtual VSA Series:

HP StoreVirtual VSA Part1 - Installation
HP StoreVirtual VSA Part2 - Initial Configuration 
HP StoreVirtual VSA Part3 - Management Groups Clusters and Volumes
HP StoreVirtual VSA Part4 - Multi VSA Cluster

Monday, 27 January 2014

VMware: HP StoreVirtual VSA Part2 - Initial Configuration

After installing HP StoreVirtual VSA, let's discuss management and initial configuration. VSA management is mostly performed using the HP StoreVirtual Centralized Management Console (CMC), even if some management tasks can only be performed by accessing the VSA directly, in our case by opening the VSA virtual machine console. In this post I will guide you through the CMC main screens and the initial configuration, while in the next post we will create Management Groups, Clusters and, finally, Volumes.

The CMC has a quite intuitive management interface; the Getting Started page will help you find VSAs and provision new volumes.



First, the VSA has to be added to the CMC: click Find Systems -> Add. Enter the VSA management IP address, which by default is the eth0 address you set up while installing the VSA.



Press OK and wait until the VSA is found.



Once the VSA has been added to the CMC, the main screen will appear. From here you can have a quick glance at system status, version and supported features, like Adaptive Optimization.



As mentioned in the previous article, the VSA does not support RAID configurations; RAID is assumed to be properly configured at the physical level in the underlying storage.



The VSA also supports storage tiering, which means that different storage profiles can be defined based on the performance of the underlying physical drives. Let me explain with an example: let's pretend the VSA stores its data on two volumes presented to the VSA VM, one as a raw LUN by the storage array and one as a datastore by the ESXi host. These two stores (LUN + datastore) are physically located one on NL-SAS disks and one on SSD storage. Without proper configuration it would be impossible for the VSA to recognize that part of its data is placed on fast SSDs while another part sits on slower NL-SAS disks, so the VSA would place data on these two volumes disregarding the different kinds of underlying physical storage.
The VSA can be configured to classify storage into tiers by marking the SSD storage as the highest tier (Tier 0) and the NL-SAS storage as a lower tier.
By setting tiers, volumes can benefit from Adaptive Optimization, which moves chunks of frequently accessed data to faster tiers, thus offering better performance when needed.



Network configuration can be changed via the Network screen; the IP addresses of both interfaces can be modified here.
Please note that only one interface can be used for management and by default this is the interface with a default gateway (the unused interface will report 0.0.0.0 as its default gateway).
Unlike the physical HP StoreVirtual appliance, the VSA does not support interface bonding to increase throughput via channel aggregation.



NIC flow control is also not supported on the VSA, while speed & duplex and frame size can be edited.



As depicted in the picture above, the frame size is greyed out in the CMC. It can be modified by accessing the VSA interface (i.e. the VSA virtual machine console): type "start" (without quotes) at the login prompt, go to Network TCP Status, select the interface, and the Frame Size can then be customized.



Once the change has been applied, the CMC will report Jumbo Frames as enabled.



Please bear in mind that, for Jumbo Frames to work properly, every path traversed by iSCSI data must be set with an MTU of 9000 bytes. This means that virtual switches, and physical switches if any, must have their interfaces properly configured to support frames with a payload size of 9000 bytes; otherwise bottlenecks will occur, since switches will fragment packets using a smaller MTU or, in the worst case, drop packets if their ports are configured with a "Do Not Fragment" policy.
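
On the vSphere side, a quick way to verify that the MTU is consistent is PowerCLI. A minimal sketch, assuming vSwitch1 is the iSCSI vSwitch and 192.168.116.60 is the ESXi host (names taken from the rest of this series):

 #Check the configured MTU on the iSCSI vSwitch and on all VMKernel ports
 Get-VirtualSwitch -VMHost (Get-VMHost -Name 192.168.116.60) -Name vSwitch1 | Select-Object Name, Mtu
 Get-VMHostNetworkAdapter -VMHost (Get-VMHost -Name 192.168.116.60) -VMKernel | Select-Object Name, PortGroupName, Mtu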

Other blog posts in HP StoreVirtual VSA Series:

HP StoreVirtual VSA Part1 - Installation
HP StoreVirtual VSA Part2 - Initial Configuration
HP StoreVirtual VSA Part3 - Management Groups Clusters and Volumes
HP StoreVirtual VSA Part4 - Multi VSA Cluster

Tuesday, 21 January 2014

VMware: HP StoreVirtual VSA Part1 - Installation

A few months ago I wrote a blog post regarding the EMC VNX virtual storage appliance.
Today I would like to start a blog post series on HP StoreVirtual VSA. While the VNX was intended as a simulator to practice Unisphere and file storage provisioning, HP StoreVirtual VSA is a proper virtual storage appliance (VSA), supported in production environments, providing block-based storage via iSCSI.

A VSA is a virtual appliance deployed in a VMware environment that aggregates and abstracts the underlying physical storage into a common storage pool, which is then presented to the hypervisor and can be used to store virtual machine disks and related files.
StoreVirtual VSA can use either existing VMFS datastores or RDMs (raw LUNs) to store data, and it can be configured to support sub-volume tiering to move data chunks across tiers. StoreVirtual VSA, like its "physical" HP StoreVirtual counterpart, is a scale-out solution: if you need to increase storage capacity, resilience or performance, additional StoreVirtual VSA nodes (i.e. virtual appliances) can be deployed.

I will discuss scale-out capabilities in another article, since adding StoreVirtual VSA nodes requires proper configuration (cluster creation, FOM deployment, etc.).

The guest OS of a VM residing on StoreVirtual VSA storage issues I/O requests to its VM disks, which reside on a datastore presented via iSCSI to the ESXi host by the StoreVirtual VSA. The StoreVirtual VSA itself issues I/Os to its own disks, residing on datastores or RDMs located on the underlying physical storage. This allows, first, abstraction of the storage arrays, since StoreVirtual VSA disks can reside on different physical storage (DAS disks; NFS, iSCSI, FCP or FCoE datastores; etc.); second, it introduces tiering capabilities by defining higher and lower disk tiers (I will explain how in the next article). These requests pass through the VMKernel again before hitting physical storage, and data returns from physical storage to the guest OS following the opposite path.
Due to the long path followed by I/O, performance is not the primary concern when dealing with VSAs. I/Os managed by the VMKernel have an average latency of microseconds (KAVG metric), while hitting physical storage incurs millisecond latencies (DAVG metric), and using a VSA usually introduces several more passes through the VMKernel. As an additional note, certain VSAs (AFAIK not HP's) use their RAM, which physically resides inside the ESXi host, as a caching layer into which frequently accessed data is pre-fetched from storage, exploiting the locality of access principle. This avoids I/Os having to traverse the VMKernel down to physical storage when a block requested by a guest OS belongs to the pre-fetched and cached blocks; if the block is not there (a cache miss), the VSA retrieves it from physical storage.
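
If you want to look at those latency figures on your own hosts, the standard vCenter performance counters can be pulled with PowerCLI. A minimal sketch, assuming a vCenter connection with real-time statistics available for the host, using the stock disk.kernelLatency/disk.deviceLatency counters (values are reported in milliseconds):

 #KAVG and DAVG counterparts from vCenter performance counters
 $vmhost = Get-VMHost -Name 192.168.116.60
 Get-Stat -Entity $vmhost -Stat "disk.kernelLatency.average","disk.deviceLatency.average" -Realtime -MaxSamples 10 | Select-Object MetricId, Instance, Timestamp, Value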

Prerequisites for HP StoreVirtual VSA according to HP VSA Documentation are the following:

- 3 GB of RAM reserved for the VSA VM.
- One vCPU with 2 GHz reserved for the VSA VM.
- A minimum of 5 GB and a maximum of 2 TB for each virtual disk. Up to 10 TB of space is supported per VSA.
- A dedicated gigabit virtual switch.

Before starting the VSA installation, a dedicated gigabit virtual switch needs to be created. Both the VM PortGroup and the VMKernel port for iSCSI will reside on the same vSwitch, preventing iSCSI traffic from hitting the physical switch. If possible (i.e. an Enterprise Plus license with vDS), configuring LACP/EtherChannel on the virtual switch's physical uplinks will increase the available bandwidth. I also set the MTU to 9000 bytes on both the iSCSI VMKernel ports and the vSwitch.
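
For those who prefer scripting, here is how a similar vSwitch could be prepared with PowerCLI. A minimal sketch, assuming vmnic4 and vmnic5 are the uplinks dedicated to iSCSI, VSA_Network is the VM PortGroup for the appliance and 10.10.10.1 is the iSCSI VMKernel IP (all names and addresses are examples, adjust them to your environment):

 #Create a dedicated vSwitch for iSCSI traffic with Jumbo Frames
 $vmhost = Get-VMHost -Name 192.168.116.60
 New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Nic vmnic4,vmnic5 -Mtu 9000
 #VM PortGroup for the VSA appliance and an iSCSI VMKernel port on the same vSwitch
 New-VirtualPortGroup -VirtualSwitch (Get-VirtualSwitch -VMHost $vmhost -Name vSwitch1) -Name "VSA_Network"
 New-VMHostNetworkAdapter -VMHost $vmhost -PortGroup ISCSI-1 -VirtualSwitch vSwitch1 -IP 10.10.10.1 -SubnetMask 255.255.255.0 -Mtu 9000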



Let's now install the HP StoreVirtual VSA. Once downloaded and uncompressed, run the setup. If you want to use the GUI installer, select option 2.



The Centralized Management Console is the software used to manage HP StoreVirtual VSA (and the physical appliances), so if you don't already have it you need to install it.



Next connect to a vCenter (or ESXi host) in order to deploy HP StoreVirtual VSA.



Select the host on which to place the HP StoreVirtual VSA. The ESXi host's datastores and connected RDMs will be listed.



Select HP StoreVirtual VSA. I will return to Failover Manager (FOM) installation in an upcoming blog post. If you have different datastores on different storage arrays, you could also enable VSA auto-tiering.



Select the datastore in which the HP StoreVirtual VSA files will reside. This is the datastore where the VSA VM files and OS disk will be created. Data stored in HP StoreVirtual VSA will *NOT* reside here.



By default the VSA comes with two virtual network adapters. Only one of these will be used for LeftHand OS management traffic, iSCSI data transfer and intra-cluster traffic (data exchanged between different VSAs). Edit the network settings according to your environment.



Select VSA VM name.



Next you need to select how much space the HP StoreVirtual VSA will provide and from which datastores it will take it. As you can see in the image below, I selected 10GB of space from the Datastore_VSA datastore and 40GB from the Datastore_DATA_VSA datastore. Up to 7 different datastores can be used to store VSA disks. The usable space presented by StoreVirtual VSA will be the sum of these values; in my case StoreVirtual VSA will have 50GB of raw/usable space. Raw space and usable space are the same because the VSA uses striped RAID by default, assuming that proper RAID configurations are already implemented on the underlying physical storage.



Since in this article we are installing a single-node VSA, select "No, I'm done".



The summary screen will appear; if everything is correct, press Deploy.



After deployment has completed, press the Finish button.



HP StoreVirtual VSA has been installed on the selected ESXi host and powered on.
VSA configuration will be explained in the next post.
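
Before moving on, a quick sanity check can be done from PowerCLI (VSA1 below is just an example name, use whatever name you chose during deployment):

 #Verify the VSA VM is powered on and check its CPU/memory reservations
 Get-VM -Name VSA1 | Select-Object Name, PowerState, NumCpu, MemoryGB
 Get-VM -Name VSA1 | Get-VMResourceConfiguration | Select-Object CpuReservationMhz, MemReservationMB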


Other blog posts in HP StoreVirtual VSA Series:

HP StoreVirtual VSA Part1 - Installation
HP StoreVirtual VSA Part2 - Initial Configuration 
HP StoreVirtual VSA Part3 - Management Groups Clusters and Volumes
HP StoreVirtual VSA Part4 - Multi VSA Cluster 

Thursday, 16 January 2014

VMware: Automating virtual Standard Switch configuration using PowerCLI

After virtual Standard Switch management with PowerCLI, let's take a step further by creating a script to automate vSS configuration.
This script is intended for vSS configuration in a real-world scenario after a brand new install of ESXi, in which we have the default single vSwitch0 with the "Management Network" VMKernel and the "VM Network" portgroup.



The script's goal is to automatically configure vSwitch0 and deploy an additional vSwitch1 for iSCSI traffic, according to the following logical design:



Networking specifications are:

Six (6) physical NICs per host (vmnic0 through vmnic5).
Two (2) vSwitches: one for Management + vMotion + VM Traffic, one for iSCSI Traffic.
One (1) vmnic dedicated for management, one (1) vmnic dedicated for vMotion, two (2) vmnics for VM Traffic using "Route based on the originating virtual port ID" load balancing. Two (2) vmnics dedicated for iSCSI Traffic with PortBinding enabled. VM Network portgroup has "MAC address changes" and "Forged transmits" set to reject according to VMware best practices.

Let's now delve into the script; I've also committed it to my PowerCLI GitHub Repository.
The first part of the script is where variables are declared: change them according to your environment and desired configuration. Save it as a ".ps1" file and run it.

 ################################  
 #                              #  
 # vSS Management with PowerCLI #  
 #                              #  
 ################################  
 $virtualswitch = "vSwitch0" #vSwitch  
 $virtualswitchiscsi = "vSwitch1" #vSwitch for iSCSI  
 $esxihostip = "192.168.116.60" #ESXi host IP Address  
 $vmotionip = "192.168.170.61" #vMotion VMkernel IP Address  
 $subnetmask = "255.255.255.0" #VMKernel subnet mask  
 $mtu = "9000" #MTU Size (Jumbo Frames for iSCSI VMKernels)  
 $vmnic = @("vmnic0","vmnic1","vmnic2","vmnic3","vmnic4","vmnic5") #Array of ESXi host's vmnics  
 $iscsi_ip = @("10.10.10.1","10.10.10.2") #IP Address to assign to iSCSI VMKernels  
 $iscsitargetip = "10.10.10.3" #iSCSI Target IP Address  
 #Get VMHost  
 $vmhost = Get-VMHost -Name $esxihostip  
 #Get ESXCLI  
 $esxcli = Get-EsxCli -VMHost $vmhost  
 #Add vmnic1,vmnic2,vmnic3 to vSwitch0  
 Get-VirtualSwitch -VMHost $vmhost -Name $virtualswitch | Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name $vmnic[1],$vmnic[2],$vmnic[3]) -Confirm:$false  
 #Management Network: active vmnic0, standby vmnic1, unused vmnic2 vmnic3  
 Get-VirtualPortGroup -VMHost $vmhost -Name "Management Network" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $vmnic[0] -MakeNicStandby $vmnic[1] -MakeNicUnused $vmnic[2],$vmnic[3]  
 #Create vMotion VMKernel  
 New-VMHostNetworkAdapter -VMHost $vmhost -PortGroup vMotion -VirtualSwitch $virtualswitch -IP $vmotionip -SubnetMask $subnetmask -VMotionEnabled:$true  
 #vMotion VMKernel: active vmnic1, standby vmnic0, unused vmnic2 vmnic3  
 Get-VirtualPortGroup -VMHost $vmhost -Name vMotion | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $vmnic[1] -MakeNicStandby $vmnic[0] -MakeNicUnused $vmnic[2],$vmnic[3]  
 #Reject MAC Address Changes and Forged Transmits on VM Portgroup  
 #EsxCLI command syntax: network vswitch standard portgroup policy security set --allow-forged-transmits --allow-mac-change --allow-promiscuous --portgroup-name --use-vswitch  
 $esxcli.network.vswitch.standard.portgroup.policy.security.set($false, $false, $false, "VM Network", $false)  
 #Create ISCSI vSwitch  
 New-VirtualSwitch -VMHost $vmhost -Name $virtualswitchiscsi -Nic $vmnic[4],$vmnic[5] -Mtu $mtu  
 #Create ISCSI VMKernel  
 New-VMHostNetworkAdapter -VMHost $vmhost -PortGroup ISCSI-1 -VirtualSwitch $virtualswitchiscsi -IP $iscsi_ip[0] -SubnetMask $subnetmask -Mtu $mtu  
 New-VMHostNetworkAdapter -VMHost $vmhost -PortGroup ISCSI-2 -VirtualSwitch $virtualswitchiscsi -IP $iscsi_ip[1] -SubnetMask $subnetmask -Mtu $mtu  
 #Set ISCSI VMKernel  
 Get-VirtualPortGroup -VMHost $vmhost -Name ISCSI-1 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $vmnic[4] -MakeNicUnused $vmnic[5]  
 Get-VirtualPortGroup -VMHost $vmhost -Name ISCSI-2 | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive $vmnic[5] -MakeNicUnused $vmnic[4]  
 #Add iSCSI Software Adapter  
 Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled:$true  
 #ISCSI PortBinding  
 $portname = Get-VMHostNetworkAdapter -VMHost $vmhost | where {$_.PortGroupName -match "ISCSI-*"} | %{$_.DeviceName}  
 $vmhba = Get-VMHostHba -VMHost $vmhost -Type iscsi | %{$_.Device}  
 $esxcli.iscsi.networkportal.add($vmhba, $false, $portname[0]) #Bind vmk2  
 $esxcli.iscsi.networkportal.add($vmhba, $false, $portname[1]) #Bind vmk3  
 #ISCSI Target Dynamic Discovery  
 New-IScsiHbaTarget -IScsiHba $vmhba -Address $iscsitargetip  
 #Rescan VMFS & HBAs  
 $vmhost | Get-VMHostStorage -RescanVmfs -RescanAllHba  

As expected this will be the result:



As usual the code is commented, but let me spend a few words on some particular cmdlets:

 $esxcli.network.vswitch.standard.portgroup.policy.security.set($false, $false, $false, "VM Network", $false)  

is an esxcli command to change security policies on port groups (MAC address changes, forged transmits and promiscuous mode), since PowerCLI allows editing these settings only for virtual distributed switches (vDS) and vDS port groups, via the Set-VDSecurityPolicy cmdlet.
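
For comparison, on a vDS port group the same policy could be applied natively with that cmdlet. A minimal sketch, assuming a distributed port group named "VM Network" exists (the name is just an example):

 #vDS equivalent: reject promiscuous mode, MAC address changes and forged transmits on a distributed port group
 Get-VDPortgroup -Name "VM Network" | Get-VDSecurityPolicy | Set-VDSecurityPolicy -AllowPromiscuous $false -MacChanges $false -ForgedTransmits $false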



 $esxcli.iscsi.networkportal.add($vmhba, $false, $portname[0])  

is another esxcli command for iSCSI port binding.



 New-IScsiHbaTarget -IScsiHba $vmhba -Address $iscsitargetip  

is the cmdlet that configures dynamic iSCSI target discovery.
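
To double-check the discovery configuration after the script has run, the configured targets can be listed; the Send type corresponds to dynamic discovery:

 #List dynamic (Send Targets) discovery addresses on the software iSCSI adapter
 Get-IScsiHbaTarget -IScsiHba (Get-VMHostHba -VMHost $vmhost -Type iscsi) -Type Send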



While...


 Get-VMHostStorage -RescanVmfs -RescanAllHba  

performs a rescan of HBAs and VMFS datastores.
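
If you need to apply the same configuration to several hosts, the script can simply be launched once per host after changing the host-specific variables, or wrapped in a loop. A minimal sketch, assuming the configuration block has been saved in a function called Set-vSSConfig that takes the host IP as a parameter (the function name, parameter and host list below are hypothetical):

 #Apply the same vSS configuration to a list of hosts (Set-vSSConfig is a hypothetical wrapper around the script body)
 "192.168.116.60","192.168.116.70","192.168.116.80" | ForEach-Object { Set-vSSConfig -EsxiHostIp $_ }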

That's all!!

Monday, 13 January 2014

VMware: virtual Standard Switch management with PowerCLI

Virtual Standard Switch (vSS) management is simple: it can be done using the vSphere Web Client or the classic vSphere C# Client. vSS management using PowerCLI is even simpler: deployment, removal and customization of vSS, VMKernels and VM PortGroups can be achieved within seconds.
A big advantage of using PowerCLI is that we can automatically perform vSS implementation/customization across multiple hosts without having to browse host by host using the vSphere Client.

This is an introductory post in which I will explain the basics of vSS management and which PowerCLI cmdlet to use for each vSS-related activity. I will soon follow up with another article providing a PowerCLI script to automate vSS provisioning in a real-case scenario.

As usual let's have a look at official PowerCLI Documentation first.

Let's begin: how do we create a virtual standard switch? The New-VirtualSwitch cmdlet is used; 192.168.116.60 in the example below is the IP address of the ESXi host on which I will add vSwitch1.

 New-VirtualSwitch -VMHost (Get-VMHost 192.168.116.60) -Name vSwitch1 -Nic vmnic1,vmnic2  

A new vSS vSwitch1 with no associated PortGroups will be created.



To create a VMKernel the New-VMHostNetworkAdapter cmdlet is used. Since a VMKernel can be associated with a specific traffic type (Management, Fault Tolerance, VSAN, vMotion) we need to specify what we will use this VMKernel for. In this example I will create a vMotion VMKernel on vSwitch1, assign the IP address 192.168.116.61 to it and set the MTU to 9000 bytes (Jumbo Frames). Please note that by default every VMKernel traffic binding is set to false, so setting, for example, -ManagementTrafficEnabled:$false is redundant.

 New-VMHostNetworkAdapter -VMHost (Get-VMHost -Name 192.168.116.60) -PortGroup vMotion -VirtualSwitch vSwitch1 -IP 192.168.116.61 -SubnetMask 255.255.255.0 -FaultToleranceLoggingEnabled:$false -ManagementTrafficEnabled:$false -VsanTrafficEnabled:$false -VMotionEnabled:$true -Mtu 9000  

A VMKernel for vMotion has just been created.







The next step is changing VMKernel properties such as the load balancing policy or the active/standby/unused vmnics. This is done with the Set-NicTeamingPolicy cmdlet.

In this example I set the vMotion VMKernel load balancing policy to Route based on source MAC hash (not for a specific reason, just for the purposes of this article), mark vmnic1 as active and vmnic2 as standby.

 Get-VirtualPortGroup -VMHost (Get-VMHost -Name 192.168.116.60) -Name vMotion | Get-NicTeamingPolicy | Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcMac -MakeNicActive vmnic1 -MakeNicStandby vmnic2  

As expected changes will be reported in vSphere Web Client.



Let's now create a VM PortGroup using the New-VirtualPortGroup cmdlet. VLAN ID 100 will be assigned to this PortGroup.

 New-VirtualPortGroup -Name "VM Network 2" -VirtualSwitch vSwitch1 -VLanId 100  



As a final step, here is how to add a new vmnic to a vSwitch. The Add-VirtualSwitchPhysicalNetworkAdapter cmdlet is used.

 Get-VirtualSwitch -VMHost (Get-VMHost -Name 192.168.116.60) -Name vSwitch1 | Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name vmnic3) -Confirm:$false  



Let's now reverse the process. Remove-VirtualSwitchPhysicalNetworkAdapter cmdlet is used to remove a vmnic from a vSwitch.

 Get-VMHost -Name 192.168.116.60 | Get-VMHostNetworkAdapter -Physical -Name vmnic3 | Remove-VirtualSwitchPhysicalNetworkAdapter -Confirm:$false  



Remove-VirtualPortGroup cmdlet is used to remove a VM PortGroup.

 Remove-VirtualPortGroup -VirtualPortGroup (Get-VirtualPortGroup -Name "VM Network 2") -Confirm:$false  



To remove a VMKernel without incurring the "Remove-VirtualPortGroup The resource '<VMKernel_Name>' is in use." error, you first have to remove the vmknic currently used by the VMKernel.

 Remove-VMHostNetworkAdapter -Nic (Get-VMHostNetworkAdapter -VMHost (Get-VMHost -Name 192.168.116.60) | where {$_.PortGroupName -eq "vMotion"}) -Confirm:$false  



The VMKernel will now be recognized as a simple PortGroup that can be removed with the Remove-VirtualPortGroup cmdlet:

 Remove-VirtualPortGroup -VirtualPortGroup (Get-VirtualPortGroup -Name vMotion) -Confirm:$false  



Finally to delete the entire vSwitch use Remove-VirtualSwitch cmdlet.

 Remove-VirtualSwitch -VirtualSwitch (Get-VirtualSwitch -VMHost 192.168.116.60 | where {$_.Name -eq "vSwitch1"}) -Confirm:$false  



That's all!!