Recently I have been working on deploying an evaluation edition of NSX in my home lab.  Starting from an AutoLab 2.6 deployment, I’ve added a few extras in order to run NSX.  This post is just to keep track of some tips and tricks I’ve come across while setting up the lab.  I’ll be updating this post as I come across new concepts and techniques.

Interesting links

I came across this article that describes the differences between Edge Services Routing and Distributed Logical Routing.

This post links to a series of articles titled “NSX for Newbies” that I found helpful for understanding how to install NSX from scratch.  This is the first post in another series on a different blog.

Outer ESXi layer

Make sure you set the portgroup used by your nested lab to allow promiscuous mode and forged transmits.  I have also set my portgroup to trunk VLANs 1000-2000.  Definitely overkill, but you can never have too many VLANs.
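
If you prefer the command line to the web client, the security settings can also be applied with esxcli.  This is a rough equivalent of what I set in the portgroup security policy; the portgroup name NestedLab is just a placeholder for whatever your nested lab uses:

esxcli network vswitch standard portgroup policy security set -p NestedLab --allow-promiscuous=true --allow-forged-transmits=true

Note this applies to a standard vSwitch portgroup; a distributed switch is configured through the web client (or its own API) instead.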

VyOS Router

I’ve deployed the free, open-source VyOS router to use as my upstream layer 3 “core network”.  This was deployed onto the outer physical ESXi instance rather than the nested virtual ESXi instances.  Here are some commands I used that weren’t immediately apparent in the user guide.

Network capture on ethernet interface eth1, VLAN 1000:

sudo tshark -i eth1.1000
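
To narrow the capture down to VXLAN traffic you can add a capture filter.  A sketch, assuming the default VXLAN UDP port (8472 in older NSX releases, 4789 in newer ones):

sudo tshark -i eth1.1000 -f "udp port 8472"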

Set the MTU on the ethernet interface before setting it on VLANs attached to that interface.  NSX requires an MTU of at least 1600 to carry VXLAN-encapsulated frames, but in this case I figured more is more.

For example, to enable jumbo frames on ethernet interface eth1 and VLANs 1000 and 1001 attached to that interface:

vyos@vyos# set interfaces ethernet eth1 mtu 9000
[edit]
vyos@vyos# set interfaces ethernet eth1 vif 1000 mtu 9000
[edit]
vyos@vyos# set interfaces ethernet eth1 vif 1001 mtu 9000
[edit]
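
As with any VyOS configuration change, nothing takes effect until you commit, and nothing survives a reboot until you save:

vyos@vyos# commit
vyos@vyos# save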

Nested ESXi instances

One thing I didn’t realise was that a plain vmkping doesn’t work with the VTEP interfaces that NSX adds to hosts, because those interfaces live in a separate VXLAN TCP/IP stack.  To troubleshoot connectivity you need to tell vmkping to use that netstack.  In this example the VTEP interface is vmk2 (as seen by esxcfg-vmknic -l), and I’m testing connectivity to the VTEP interface on another host with IP 172.17.0.51:

vmkping ++netstack=vxlan -s 1570 -d -I vmk2 172.17.0.51

If the ping fails with a size of 1570, try 1470; if the smaller packet gets through, you most likely have an MTU problem somewhere in the transport network.  This article contains more information on troubleshooting connectivity.
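
It’s also worth confirming that the VTEP vmkernel interface actually picked up the larger MTU.  Something like the following should show it, assuming the VTEPs sit in the vxlan netstack as usual:

esxcli network ip interface list --netstack=vxlan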

NSX Configuration

I changed my mind a few times about which subnet and VLAN to use for various components of NSX.  Things like NSX Controllers and VTEP interfaces can use either DHCP or an IP pool to allocate IP addresses.  I used an IP pool in my lab, but when I wanted to change it, the pool wasn’t in the usual place in the vSphere web client.  If you create an IP pool at the time of configuring NSX, it ends up under the Networking & Security section of the web client instead, as per the documentation.

NSX Controller

See which VM MAC addresses are present on a network segment/logical switch (VNI 5001 in this example) by running the following command on an NSX Controller:

nsx-controller # show control-cluster logical-switches mac-table 5001
VNI  MAC               VTEP-IP      Connection-ID
5001 00:50:56:84:68:f1 172.17.0.52  12
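
If I’m reading the controller CLI right, similar variations on the same command show which VTEPs have joined the logical switch and what is in its ARP table (output omitted here):

nsx-controller # show control-cluster logical-switches vtep-table 5001
nsx-controller # show control-cluster logical-switches arp-table 5001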