Archive

Archive for the ‘Networking’ Category

Cisco Nexus VPC Domain ID

April 4th, 2014 Comments off

For the last 3 days, I’ve tried to configure vPC for a pair of Nexus 5548UP. No matter what I did, the pair just refused to connect and form vPC properly. So far I have tried:
- Upgrading to NX-OS version 6
- Upgrading to NX-OS version 7
- Resetting to the factory-default configuration
- Restarting the switches many times
- Checking cables and connectors

The configurations I had for them were:
[Image: topology diagram]

Shared Config:

First switch

Second switch

And I kept getting this:
[Image: VLAN status showing the vPC VLANs down]

Therefore:
[Image: show vpc brief output]

Do you see the problem? It turned out I had used the same vPC domain ID as my keepalive port-channel ID: 101. An easy mistake to make when trying to keep the configuration tidy.

I fixed it by changing everything I could about the configuration; eventually I changed the vPC domain ID and it worked.
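For reference, here is a minimal sketch of what the corrected vPC section looks like; the domain ID, IP addresses, VRF name and port-channel numbers below are placeholders for illustration, not my actual config:

! vPC domain ID no longer matches the keepalive port-channel number (Po101)
vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf KEEPALIVE

! Dedicated keepalive port-channel
interface port-channel101
  description vPC peer-keepalive link

! Peer-link port-channel
interface port-channel100
  description vPC peer-link
  switchport mode trunk
  vpc peer-link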

Then I found this Cisco document:
vPC Domain ID Modification on an Active vPC Domain

The vPC peer devices use the vPC domain ID that you configure in order to automatically assign a unique vPC system MAC address. Each vPC domain has a unique MAC address that is used as a unique identifier for the specific vPC-related operations. However, the devices use the vPC system MAC addresses only for link-scope operations, such as LACP. Therefore, Cisco recommends that you create each vPC domain within the contiguous Layer 2 network with a unique domain ID.

Categories: Data Center, Networking

Cisco UCS: Tracing packet paths with a MAC address

March 14th, 2014 Comments off

In the UCS world where a virtual NIC on a virtual server is connected to a virtual port on a virtual switch by a virtual cable, it is not surprising that there can be confusion about what path packets are actually taking through the UCS infrastructure.

Similarly, knowing the full data path through the UCS infrastructure is essential for understanding, troubleshooting, and testing failover.

In this post I will demonstrate how to trace the paths of the packets in a Cisco UCS Data Center.

The diagram below shows a half-width blade with a vNIC called eth0 created on a Cisco VIC (M81KR), with its primary path mapped to Fabric A. For simplicity, only one IO Module to Fabric Interconnect link is shown in the diagram, and only one of the Host Interfaces (HIFs, the server-facing ports) on the IO Module.

[Image: overview diagram]

Starting with the MAC address, you first need to find the virtual circuit (Veth) number with the following command. Note that it will show nothing if you are connected to the wrong Fabric Interconnect.

[Image: MAC to Veth lookup]
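In case the screenshot does not come through, here is a rough sketch of that lookup; the MAC address is a placeholder, and older NX-OS builds spell the command show mac-address-table instead:

! From the UCS Manager CLI, attach to the NX-OS shell of the FI that owns the
! vNIC's primary path (Fabric A in this example)
connect nxos a
show mac address-table address 0025.b500.0a01
! The matching entry lists the Veth interface where the vNIC terminates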

With the Veth number, we can now find the chassis/server ID with this command:

[Image: Veth to chassis/server lookup]
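A sketch of this step, with a made-up Veth number:

show interface vethernet 730
! The Description line shows which chassis/server and vNIC this Veth belongs to,
! and the "Bound Interface" field is what the later steps use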

We can go further and find the uplink (border) interface where the Fabric Interconnect connects to the LAN with this command:

[Image: Veth to uplink lookup]
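I believe the pinning commands below show this mapping; treat them as a sketch, since the output details vary by UCS release:

show pinning server-interfaces
show pinning border-interfaces
! Either view shows which border (uplink) interface each server interface (Veth)
! is currently pinned to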

Next, we will find the FI port (server port) that connects to this virtual circuit with the following command, where Ethernet #/#/# is the "Bound Interface" you found above with the "show int veth #" command:

[Image: bound interface in the show int veth output]

Now you should have the server port (fabric interface). To find the FEX network port, use this command:

[Image: fabric interface to FEX uplink lookup]
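As a sketch of this last hop (assuming chassis 1, so the IO Module appears as FEX 1 on the Fabric Interconnect):

show fex 1 detail
! Lists the IO Module's fabric (network-facing) ports and the FI server ports
! they connect to, plus how the host interfaces are pinned to them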

The steps above should help you identify the packet paths. For in-depth network troubleshooting, see the following Cisco slide:
[Image: Cisco troubleshooting slide]

Cisco Nexus 7000 – VPC port-channel and mixing IO Modules support

November 15th, 2013 Comments off

It’s time to purchase new blades for our Nexus 7000. vPC has been set up and works fine on our 8-port 10 Gb SFP+ M1 module. The question is: if I purchase a 32-port 10 Gb SFP+ M1 module, will I be able to expand the vPC port-channels to the new module?

I googled it, and it turned out that I can, as long as I don’t mix IO module generations (M1 with M1 only, F1 with F1 only).

[Image: vPC port-channel spanning modules]
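A minimal sketch of what that expansion looks like, assuming the existing member is on the module in slot 1 and the new 32-port M1 card lands in slot 2 (slot, port, and channel numbers are made up):

interface ethernet 1/1
  switchport mode trunk
  channel-group 20 mode active

interface ethernet 2/1
  switchport mode trunk
  channel-group 20 mode active

interface port-channel20
  switchport mode trunk
  vpc 20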

More info here

Don’t forget to see the other post on shared mode vs dedicated mode when using the N7K-M132XP-12 module.

Categories: Data Center, Networking

Shared Mode vs Dedicated mode operation in Nexus 7000 switches

August 30th, 2013 Comments off

In shared mode operation, four 10 Gigabit interfaces are grouped together and offer a total of 10 Gbps of bandwidth, which creates an oversubscription of 4:1.

[Image: shared mode]

Example:
The Cisco Nexus 7000 Series 32-port 10 Gigabit Ethernet module (N7K-M132XP-12) has an 80 Gbps fabric connection for its 320 Gbps of front-panel capacity, so it is oversubscribed 4:1.
[Image: shared mode example]

Unlike shared mode, where the four interfaces share 10 Gbps of bandwidth, dedicated mode gives the full 10 Gbps to the first interface of the group while the three remaining interfaces are disabled.
[Image: dedicated mode]

On the Cisco Nexus 7000 Series 32-port 10 Gigabit Ethernet module (N7K-M132XP-12), each port group can be configured individually for either dedicated or shared mode. A dedicated port is easy to identify on any Nexus line card, as it is marked in yellow.

[Image: dedicated mode ports]
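For example, dedicating the first port of a group looks roughly like this (a sketch; the port numbers are placeholders, so check the module documentation for the actual port groupings):

! The other three ports of the port group must be shut down first; they cannot
! pass traffic while the group runs in dedicated mode
interface ethernet 1/2-4
  shutdown

interface ethernet 1/1
  rate-mode dedicated
  no shutdown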

Categories: Networking

Using Wireshark to troubleshoot Cobranet network latency

July 18th, 2013 Comments off

So I’m working on a project to deploy IP-based public announcement systems. The hardware was deployed, but things didn’t go as planned: there is a lot of static noise when making announcements.

The hardware company did the troubleshooting and said it was “most likely” because of the network, specifically because their hardware didn’t receive enough “beat packets”. Based on that finding, they concluded that until we “improved” the network, the project had to stop.

A quick search on the Internet turned up some information on CobraNet beat packets:
- Beat packets are broadcast packets
- They are sent 750 times per second (one packet every 1/750 ≈ 0.001333 s)
- The acceptable interval between packets ranges from 0.000833 s to 0.001833 s

Let’s use Wireshark to see if the network is capable of delivering this.

First, I put Wireshark on a port and captured everything for 25 minutes. This captured everything coming in and out of the port: Spanning Tree, CDP, SMB, and so on.

For many reasons, I needed a pcapng file that contained only what I wanted to see, which in this case was the beat packets. To do this, I used a Lua module written by Eliot Blennerhassett of AudioScience Inc to isolate the beat packets (more info here). Finally, I saved the result to a new file.

[Image: saving the filtered capture]

I then re-opened the new file. This time Wireshark ran much faster.

Let’s look at the I/O statistics. To do this, go to

Statistics > IO Graphs

[Image: Wireshark IO graph]

As you can see from the graph, the network had no problem delivering 750 packets per second. To provide more detail, we can look more closely at how many packets fall outside the acceptable range (0.000833 s to 0.001833 s).

To do this, I used the following filter

(frame.time_delta >= 0.001833) or (frame.time_delta <= 0.000833)

[Image: filter results]

It looks like we have 93 packets that were out of range, or 0.02%. I don’t think this is an issue.
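If you prefer the command line, something like this should give the same count; the file name is a placeholder, and older tshark builds use -R instead of -Y for display filters:

tshark -r beat-packets.pcapng -Y "(frame.time_delta >= 0.001833) or (frame.time_delta <= 0.000833)" | wc -l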

Update 1: It turned out that when I captured at the server, the server also showed 0.02% delayed beat packets. This means the network isn’t delaying the packets; the server just isn’t sending them fast enough.

Update 2: Two weeks after I showed them this result, the vendor admitted that the issue wasn’t the network.

Categories: Networking

VMWare vShield-App vs vShield-Edge

June 14th, 2013 Comments off

One of my customers called me today to ask what the difference was between vShield App and vShield Edge, as they were looking at a competing firewall.

I paused for a minute because I could not explain it clearly, and VMware’s website isn’t that clear about it either. I reached out to my trusted VMware SME, and he was able to explain the difference to me.

vShield App is a hypervisor-based firewall (internal to the cluster) that provides port-based ACL functionality to isolate VMs from each other.

vShield Edge is a virtual firewall (internal to the cluster or out to the external world).

This is how VMware explains it
(vShield App is on the left, vShield Edge is on the right):
[Image: vShield App vs vShield Edge]

Clear as mud? This article will help explain further:
[Image: vShield App and Edge reference design]
VMware® vShield Edge and vShield App Reference Design Guide

Use old routers for WAN simulation

May 18th, 2013 2 comments

When I was playing with Citrix today, I needed to slow down my 1 Gbps connection. My first thought was to use a WAN simulator, but even an entry-level model costs over $3,000. Luckily, I have a few old Cisco routers that I can use. There is a feature called Committed Access Rate (CAR) that you can use to limit the traffic on an interface. Let’s limit ICMP traffic on an interface to 8 kbps, with 2,000 bytes for the normal burst and 4,000 bytes for the excess burst value.
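Since the original config screenshot is missing, here is a sketch of what that CAR policy looks like; the interface name and access-list number are placeholders:

! Match ICMP with an ACL, then police it to 8 kbps with 2000/4000-byte bursts
access-list 100 permit icmp any any

interface FastEthernet0/0
 rate-limit input access-group 100 8000 2000 4000 conform-action transmit exceed-action drop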

Let’s test it

Categories: Networking