If you use EMC PowerPath, the process is straightforward with EMC's own tooling. But in most cases, due to the price of EMC PowerPath, we use Microsoft MPIO instead (which PowerPath uses under the covers anyway).
If using Microsoft's MPIO, you can use the mpclaim utility from a command line or the GUI for all the setup. The following assumes you are using Microsoft MPIO and an EMC VNX storage array; there are only slight variations for other combinations.
- Configure hardware for MPIO.
- Install Microsoft MPIO.
- From a command prompt, issue the command "mpclaim -h" to see the current storage devices claimed by MS MPIO. With a VNX you should see something like "Vendor 8Product 16". Looking at the MPIO Devices tab in the GUI will show the same information.
- From a command prompt, issue the command "mpclaim -s -d" and you should see that there are no disks present yet, as you haven't allowed any.
- From a command prompt, issue the command "mpclaim -e" to display the vendor/product ID string for the connected storage array. Depending on how your VNX is configured, you will see something like "DGC     VRAID". There are exactly five (5) spaces after "DGC", and you must include exactly five spaces in the ID.
- Now you can add multipath support for the specific IDs you want. Obviously you will want whatever you saw in the previous step, but if you plan on using other configurations, you can add anything from the list above. From a command prompt, enter the command mpclaim -n -i -d "DGC     VRAID". The "-n" switch suppresses the automatic reboot. Repeat that command for each device ID desired (a consolidated sketch follows this list).
- Reboot the system.
- From a command prompt, issue the command “mpclaim -s -d” and you should see the disks claimed by MPIO on the node. Again, you can use the GUI for all this, too.
- Run Disk Management and bring the new disks online.
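Putting it all together, the claiming sequence looks like this. This is a sketch assuming the "DGC     VRAID" ID from above; substitute whatever mpclaim -e actually reported:

mpclaim -e
mpclaim -n -i -d "DGC     VRAID"
:: reboot, then verify the claimed disks:
mpclaim -s -d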
I had a frozen VM because a snapshot hung at 95%. I could not do anything to the VM because its process was locked on the host: I couldn't stop it, and I couldn't cancel the task either. To release the lock and force-kill it, I had to do the following:
- Restart the management agents
- Force stop the VM
- Consolidate the snapshot (if necessary)
- Restart the VM
1. Restart the management agents
These two commands should do it:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
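If that isn't enough, ESXi can also restart all the management agents in one go. Note this restarts more than just hostd and vpxa, so expect the host to drop out of vCenter briefly:

services.sh restart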
2. Force stop the VM
a. The VMware way
First, list the running VM processes with "esxcli vm process list". The output will look something like this:
VM's NAME
World ID: 46664
Process ID: 0
VMX Cartel ID: 46640
UUID: 42 24 e8 f3 28 35 e1 77-dd 56 40 46 d2 a4 16 43
Display Name: plsw-ts2012-fe1
Config File: /vmfs/volumes/5156099e-0e41f131c77b4/VM-NAME/VM-NAME.vmx
Then collect the "World ID" and run either of these commands:
esxcli vm process kill -t force -w 46664
esxcli vm process kill -t hard -w 46664
At this point the VM should be stopped and the lock released. You might need to remove it from the inventory and re-add it. If the VM is still locked, we will need to force stop it the Linux way.
b. The Linux way
On the ESXi shell, find the VM's vmx processes with ps:
~ # ps | grep 52173320
52173320 52173320 vmx /bin/vmx
52173323 52173320 vmx-vthread-4:VM-NAME /bin/vmx
52174736 52173320 vmx-vthread-5:VM-NAME /bin/vmx
52174737 52173320 vmx-mks:VM-NAME /bin/vmx
52174738 52173320 vmx-svga:VM-NAME /bin/vmx
52174741 52173320 vmx-vcpu-0:VM-NAME /bin/vmx
The second column is the master process number. Run this command to kill it:
kill -9 52173320
VMware KB 1004340 should provide you with some more methods, but these two are usually good enough.
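Another method worth knowing is vim-cmd, which works off the VM's inventory ID rather than the World ID (the ID below is just an example):

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.off 42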
Today I thought I'd take a look at creating a SPAN session on the 1000v to monitor traffic. I found it really easy to do! SPAN is one of those things that takes you longer to read about and understand than to actually configure. I find that true with a lot of Cisco technologies: FabricPath, OTV, LISP, etc.
SPAN is "Switched Port Analyzer". It's basically port mirroring: you capture the traffic going through one port and mirror it on another. This is one of the benefits you get out of the box with the 1000v, and it means the network administrator no longer has a big black box of VMs.
First, I need to see which Vethernet interface is assigned to which VM. This command can help you do that:
show interface status | include <VM name>

Then create a monitor session with the following commands
monitor session 1 type local
  source interface vethernet 3 both
  destination interface vethernet 58
  no shutdown
And confirm the monitor session with the command "show monitor session 1".

In this case, we have an error: the state is "Down". That is because VMTEST1 and VMTEST2 are on two different VM hosts. After moving them to the same host, the state will change to "Up".
I have a couple of 3750s and Nexus 5Ks connected to two Nexus 7010s. The N7Ks run in vPC mode. I have a multicast source and multiple multicast receivers, all on the same VLAN. This VLAN is Layer 2 only.
The issue is that only the receivers on the same access switch as the source receive the traffic. If the receivers are on a different access switch, they don't get the stream because the IGMP join group packets never make it across.
After a bit of digging around, I found the reason: the other switches do not have an mrouter port and do not know about the source.
There are 3 solutions to fix the issue:
1. Turn on an SVI interface and enable PIM
2. Turn on IGMP SNOOPING QUERIER
3. Turn off IGMP snooping on ALL switches.
Let’s focus on solution #2. Since I don’t need to route the multicast traffic outside of the VLAN, this is the best solution:
To do this on the Nexus 7000 you need to do the following:
vlan configuration [vlan#]
  ip igmp snooping querier x.x.x.x
Where x.x.x.x is an unused IP address
And you need to run these commands on both vPC switches.
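Afterwards, you can verify which switch won the querier election with this NX-OS show command:

show ip igmp snooping querier vlan [vlan#]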
For more information, have a look at this article
Before we start, here are a few things to remember:
Only isolated ports are supported in UCS. With the N1K incorporated, you can use community VLANs, but the promiscuous port must be on the N1K as well.
A server virtual Network Interface Controller (vNIC) in UCS cannot carry both a regular and an isolated VLAN.
There is no support for promiscuous ports/trunks, community ports/trunks, or isolated trunks.
Promiscuous ports need to be outside the UCS domain, such as an upstream switch/router or a downstream N1K.
Now consider this scenario:

The 4900 switch is a pVLAN-aware switch. It has isolated ports on VLAN 210 and promiscuous ports on VLAN 200.
The Nexus 5K represents a network or a bunch of switches that are not pVLAN aware.
First, we need to make UCS aware of the pVLAN structure. After defining the VLANs, we need to change their properties.


Next, you have to dedicate a vNIC to carry the pVLAN traffic into VMware. Because of the UCS limitations, it's one pVLAN per vNIC only. In this case we add the isolated VLAN only, and it is not a native VLAN.

Next, add two new VLANs to the Nexus 1000v switch and define the private VLAN properties (see the CLI sketch below).
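For reference, the equivalent N1K CLI for those properties would be something like this, assuming VLAN 200 is the primary and 210 the isolated VLAN, matching the diagram:

feature private-vlan
vlan 200
  private-vlan primary
  private-vlan association 210
vlan 210
  private-vlan isolated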


Then finally, we just have to add the vmnic to the pVLAN_uplinks port profile.
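As a rough sketch only (the guest profile name is hypothetical, VLANs as above): the VM-facing vethernet profile carries the pVLAN host association, while the uplink simply trunks the isolated VLAN up to the promiscuous port on the 4900:

port-profile type vethernet pVLAN_guests
  switchport mode private-vlan host
  switchport private-vlan host-association 200 210
  no shutdown
  state enabled
port-profile type ethernet pVLAN_uplinks
  switchport mode trunk
  switchport trunk allowed vlan 210
  no shutdown
  state enabled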

For more information on private VLAN and Cisco UCS integration, please refer to Cisco document ID 116310.
My users have developed a habit of keeping their unused VMs for too long, and it slowly eats up our storage. I needed a way to enforce our 30-day retention policy. A bit of searching and I ended up with this script. Oh, and it sends emails too.
# All currently powered-off VMs
$vms = Get-VM | Where-Object {$_.PowerState -eq "PoweredOff"}
# Names of VMs that logged a power-off event within the last 30 days
$events = Get-VIEvent -Start (Get-Date).AddDays(-30) -Entity $vms |
    Where-Object {$_.FullFormattedMessage -like "*is powered off"}
$lastMonthVM = $events | ForEach-Object {$_.Vm.Name}
# Powered off for more than 30 days = no recent power-off event
$moreThan1Month = $vms | Where-Object {$lastMonthVM -notcontains $_.Name}
$vmNames = $moreThan1Month | Select-Object -ExpandProperty Name
$moreThan1Month | Remove-VM -DeletePermanently -Confirm:$false
Send-MailMessage -From report@domain.com -To me@domain.com -SmtpServer mail.domain.com `
    -Subject "Removed $($moreThan1Month.Count) VM" -Body ($vmNames | Out-String)
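Since Remove-VM supports the standard -WhatIf switch, it's worth a dry run before letting the script delete anything:

$moreThan1Month | Remove-VM -DeletePermanently -WhatIf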
A few days ago I posted this article. For it to work, the firewall has to have 9 TCP and 9 UDP ports open. That's a lot of open ports, not to mention the troubleshooting along the way.
With the vCenter Server Appliance (vCSA) v5.5, VMware has added a new option for Single Sign-On authentication: "Active Directory as a LDAP Server". Things get much easier with this option, as you don't need to join the vCSA to the domain and there is only one open port, tcp:389, for LDAP. Surprisingly, no one has mentioned it on the Internet.
First, download and install the vCSA with the default options; no fancy options yet. Active Directory is left disabled because you don't need to join the vCSA server to the domain.

Then click on the SSO option and change the default password for the Administrator@vsphere.local account. Note that this is required to set up SSO.

Then log into the web client and go to Administration\Single Sign-On\Configuration as Administrator@vsphere.local. You have to log in with the Administrator account to see the option; the root account doesn't work here. (This alone took me 2 hours to figure out.)

Then add a new Identity Source. Fill out the remaining fields as follows:

Name: Your AD domain name; E.g. “corp.local”
Base DN for users: Split your domain name in pieces along the dots (“.”) and prefix each part with a “dc=”. Place commas “,” in between each part; E.g. “dc=corp,dc=local”
Domain name: Your AD domain name; E.g. “corp.local”
Domain alias: Your netbios name of the AD domain; E.g. “CORP”
Base DN for groups: Same as the Base DN for users; E.g. "dc=corp,dc=local"
Primary Server URL: The Active Directory server as a URL with the protocol "ldap://" and the port 389; E.g. ldap://172.16.30.14:389
Secondary Server URL: Another Active Directory server or domain controller as a URL, if you have one. Otherwise leave it blank; E.g. ldap://172.16.30.15:389
Username: An Active Directory username in netbios notation with privileges to read all users and groups; E.g. “CORP\Administrator”
Password: The password of the above user.
Hit the Test button and that should be it. If it doesn't work, make sure you have tcp:389 open to the domain controller.
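If the test fails, a quick way to confirm the port is reachable from a recent Windows machine (using the example DC address from above):

Test-NetConnection 172.16.30.14 -Port 389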