Interesting Facts On VMware vSphere

Ever wondered about software-defined data centers? You have probably experimented a bit with VMware Workstation at some point in your networking journey. If not, I hope you have at least heard of VMware vSphere, because in this short article I share my study notes on the topic.


Having experimented with VMware Workstation during my college studies (to emulate Linux guest OSes such as Fedora, SUSE and Red Hat), I later worked at a very high level with virtual servers running on vSphere. A couple of examples would be Cisco VSOM and Cisco UCM 10.x.

VMware vSphere is arguably the number one data center virtualization solution in the world. The suite does not run as a silo: it integrates with other VMware products.

VMware offers the following vSphere editions:

  • Standard
  • Enterprise
  • Enterprise Plus
  • Operations Management with Enterprise Plus: this edition has been discontinued because VMware wants to align its vSphere distributions with its standard release naming scheme.
  • Platinum

vSphere comes in two different kits:

  • Essentials
  • Essentials Plus

Update Manager

  • Starting from vSphere 6.5, Update Manager is fully integrated into the vCenter Server Appliance.
  • It includes the Quick Boot feature, which, when activated, significantly reduces the reboot time after an ESXi upgrade. In fact, after upgrading ESXi, only the ESXi software restarts, not the physical server.

To simplify the definition of what vSphere is:

vSphere = ESXi + advanced features bought with the vSphere purchase.

To manage the vSphere infrastructure – which is installed on ESXi hosts – we need to add a vCenter node.


A vNIC is the virtual network interface card that we find on a VMware virtual machine. A VM can have one or more vNICs.

Every virtual machine, when created, has one vNIC by default.



vSwitch is a virtual switch on the virtualization host. One standard virtual switch is created by default with each new installation of the ESXi hypervisor, and we can add more later. A vSwitch is flexible in terms of number of ports: we can grow a virtual switch in VMware to hundreds of ports. One drawback of virtual switches in VMware is that we cannot cascade two virtual switches as in the physical world.

A vSwitch can be either a vSS or a vDS, or we can opt for a third-party virtual switch such as the Cisco Nexus 1000V or Cisco AVS instead of the VMware virtual switches. However, bear in mind that the Cisco Nexus 1000V requires an Enterprise Plus license of vSphere.

A vSwitch supports these types of ports:

  • virtual machine ports: these ports connect, as their name says, to virtual machines. They can be grouped in what VMware calls Port Groups.
  • VMkernel ports: these ports connect to the VMkernel, the core of vSphere. This makes it possible to connect storage networks (SAN, NAS, iSCSI) and dedicated management networks, for example, to the virtual switch.

We can have up to 4096 virtual switch ports in total (i.e. ports that can be grouped in Port Groups) and 1016 active ports, whether we use a vSS or a vDS.

Port Group

A Port Group is, as its name says, a group of ports. It is a logical construct that groups ports of the virtual switch (I will call them virtual ports) together for one reason or another. We distinguish these types of port groups:

  • VM port groups: a group of vSwitch ports connected to virtual machines, more precisely to their vNICs. vNICs configured on the same port group can communicate together locally (on the vSwitch) and do not need to traverse the physical network for that. The VM port group created by default is named VM Network, and by default all virtual machines have their vNICs in the VM Network port group.
  • Uplink port groups: a group of vSwitch ports connected to uplink adapters. Here we can define active ports (in other words, vmnics that are working at any time of observation) and standby ports (which kick in when an active vmnic goes down).
  • VMKernel port groups: a group of vSwitch ports that connect to VMKernel ports in order to enable special services like:
    • connecting a management network. A management network is mandatory in vSphere. It carries the configuration of the IP address of the ESXi host that we connect to with the vSphere client. The initial setup of ESXi comes with a default management network with the name of … Management Network!
    • connecting vMotion traffic
    • connecting external storage networks,
    • etc.

Each port group can be assigned a VLAN ID if we want; the VLAN ID value is optional.

A port group has a label, which is nothing but the name of the port group. Beyond that, a port group has a predefined set of parameters we can customize. To do so, go to the port group settings (ESXi -> Manage -> Networking) and look at the tabs: traffic shaping, security, teaming & failover. The settings configured at this level apply generally to all port groups; however, we can override them on a per-port-group basis.

The concept of port groups is the same whether for a vSS or a vDS. However, on a vDS, a virtual port is referred to as dvPort, and port groups are named either dvPort Groups or distributed Port Groups.
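To make the port-group concept concrete, here is a minimal Python sketch of a vSwitch holding port groups with optional VLAN IDs. The class and method names are my own illustration, not a VMware API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PortGroup:
    label: str                      # the port group's name, e.g. "VM Network"
    vlan_id: Optional[int] = None   # the VLAN ID is optional, as noted above
    vnics: list = field(default_factory=list)

@dataclass
class VSwitch:
    name: str
    port_groups: dict = field(default_factory=dict)

    def add_port_group(self, label, vlan_id=None):
        self.port_groups[label] = PortGroup(label, vlan_id)

    def connect_vnic(self, vnic, label):
        self.port_groups[label].vnics.append(vnic)

    def same_segment(self, vnic_a, vnic_b):
        # Two vNICs in the same port group communicate locally on the
        # vSwitch, without traversing the physical network.
        return any(vnic_a in pg.vnics and vnic_b in pg.vnics
                   for pg in self.port_groups.values())

vswitch = VSwitch("vSwitch0")
vswitch.add_port_group("VM Network")              # the default VM port group
vswitch.add_port_group("Production", vlan_id=10)  # a custom port group
vswitch.connect_vnic("vm1-vnic0", "Production")
vswitch.connect_vnic("vm2-vnic0", "Production")
print(vswitch.same_segment("vm1-vnic0", "vm2-vnic0"))  # True: local switching
```

The sketch only captures the grouping logic; in a real ESXi host the port group also carries the security, traffic shaping and teaming settings mentioned above.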


Here is a diagram that links the concept of port groups with vmnics. Here we have four port groups: Production, Test, Management and vMotion. Notice how each port group connects to the vmnics through the vSwitch in the middle of the diagram.


vSS stands for virtual Standard Switch. It connects, on one side, the vNICs of virtual machines belonging to the same ESXi host, and on the other side the uplink adapters of the physical server and/or VMkernel ports.

There is a default vSS instance per ESXi host, created when we configure the ESXi hypervisor for the first time. We can add up to 127 vSS per ESXi server.

The GUI option to add a vSS in vSphere is not self-explanatory: you invoke the menu for adding a new port group and then choose the option to create a new virtual Standard Switch.

When we create a new vSS, we are given the option to attach a physical NIC. This is optional, because we can create a vSS without a connection to the physical network, thus isolating all port groups attached to this vSS.

The vSS holds both data plane and control plane.

A vSS has limited capabilities. Indeed, as of vSphere 6.5 it supports a maximum of 256 port groups.


vDS stands for virtual Distributed Switch. We can see it as an advanced version of the virtual standard switch, in terms of operation mode, capabilities and control plane and data plane separation.

So although it preserves the same VMware networking concepts, such as port groups and uplinks, the vDS offers simplified manageability for network engineers in comparison with the vSS.

A single vDS spans many ESXi hosts and not only one. One of the advantages of this can be illustrated when deploying a new network policy: with vSS, you would need to implement the policy on each virtual switch. With vDS however, you implement it once on the vDS and it affects all associated ESXi hosts.
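The operational difference can be sketched in a few lines of Python. The classes below are purely illustrative (not a VMware API): with standard switches the same policy must be applied once per host, while the distributed switch is configured in one place and covers every member host:

```python
# Illustrative sketch of vSS vs vDS policy management; these classes
# do not correspond to any real VMware API.

class StandardSwitch:
    """One vSS exists per ESXi host, so policy is per-switch."""
    def __init__(self, host):
        self.host = host
        self.policies = {}

class DistributedSwitch:
    """A single vDS spans many ESXi hosts; policy is set once."""
    def __init__(self, hosts):
        self.hosts = hosts
        self.policies = {}

hosts = ["esxi-01", "esxi-02", "esxi-03"]

# vSS: the same policy has to be applied on each host's switch.
vss_switches = [StandardSwitch(h) for h in hosts]
for sw in vss_switches:
    sw.policies["traffic_shaping"] = "enabled"   # repeated N times

# vDS: one configuration step on vCenter affects all associated hosts.
vds = DistributedSwitch(hosts)
vds.policies["traffic_shaping"] = "enabled"      # applied once

print(len(vss_switches), "vSS configs vs 1 vDS config")
```

The point is not the code itself but the shape of the loop: the per-host repetition disappears once the control plane is centralized on vCenter.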

vDS brings with it a separation of the data and control planes: the data plane is still handled by the vDS itself, whereas the control plane moves to the vCenter server. Note that a single vCenter instance supports a maximum of 128 vDS.

vDS offers the following improvements over vSS:

  • support for both inbound and outbound traffic shaping
  • support for PVLAN, Netflow, vMotion (the feature of moving a VM from one ESXi to another)
  • support for port mirroring through the dvMirror feature.

vDS comes in different versions. Each higher version brings more capabilities; however, it also drops support for older ESXi versions.

Adding a new vDS occurs at the datacenter level (right-click –> add a new virtual distributed switch) on the vCenter server.



The vmnic, aka pNIC or uplink adapter, is VMware terminology for a physical network interface card of the virtualization server (the physical server on which all the virtualization happens). A virtualization server may have one or more physical NICs, so the number of vmnics on a particular virtualization host is nothing but its number of physical NICs.

An uplink port is the port on the vSwitch that connects to the uplink adapter. The uplink is then the link between the vSwitch and the pNIC.

When connecting to ESXi via the GUI, we see vmnic0, vmnic1, etc. rather than pNIC. Beware, though, that in the VMware community some network engineers refer to vNICs as VMnics, with a capital "VM".

When the virtualization server has only one onboard vmnic, it is named vmnic0 by default. A second onboard port would be named vmnic1, and so on.


We can modify vmnic settings such as speed, duplex, negotiation and MAC address.

The terms uplink, uplink adapter and vmnic are often used interchangeably (technically, though, the uplink is the link between the vSwitch and the physical NIC). It is what links the logical constructs of the ESXi host (such as the vSwitch and the virtual machines behind it) to the external world – the physical network – whether it is Cisco switches, Comware, Arista, etc. To do this, we need to attach the uplinks to the vSwitch.

vmnic0 is attached to the vSwitch by default, and we can graphically attach additional vmnics. We can assign any vmnic associated with the vSwitch as active or standby, or remove it entirely from the available vmnics for this particular vSwitch (which, of course, does not remove the vmnic from the ESXi host).
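The active/standby behaviour described above can be sketched as follows. The `UplinkTeam` class is a hypothetical illustration of the failover logic, not a VMware API:

```python
# Illustrative sketch of active/standby uplink failover on a vSwitch;
# the names are hypothetical, not VMware APIs.

class UplinkTeam:
    def __init__(self, active, standby):
        self.active = list(active)    # vmnics currently forwarding traffic
        self.standby = list(standby)  # vmnics waiting to take over

    def link_down(self, vmnic):
        """Model an active vmnic failing: promote the first standby vmnic."""
        if vmnic in self.active:
            self.active.remove(vmnic)
            if self.standby:
                self.active.append(self.standby.pop(0))

team = UplinkTeam(active=["vmnic0"], standby=["vmnic1"])
team.link_down("vmnic0")
print(team.active)   # ['vmnic1'] -- the standby vmnic kicked in
```

This mirrors what the teaming & failover settings of an uplink port group do: when an active vmnic goes down, a standby one takes its place.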

With the vSS, we call them uplinks. With vDS, we call them dvUplinks.

VMKernel adapter

A VMkernel adapter, or vmk, is a virtual adapter that has a VMkernel port. VMkernel ports are virtual ports that we use to connect our virtual objects to external service traffic, such as management traffic (e.g. when connecting from the physical network to the ESXi host), vMotion traffic, replication traffic, storage traffic, etc. In fact, we can associate not just one but several traffic types with a single VMkernel adapter.

Per ESXi host we can define one or more VMkernel adapters; this occurs under the Hosts and Clusters menu.

The VMkernel port on the VMkernel adapter connects to the vSwitch. We manage it graphically on the vSphere ESXi just as we manage vmnics. It also has an IP address, which can be configured either statically or dynamically via DHCP.

Link aggregation

We can configure LACP on two or more vmnics to benefit from higher bandwidth and achieve a layer of high availability. However, vmnics configured in LACP and "regular" vmnics cannot coexist as active ports in the same uplink port group.
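That coexistence constraint can be expressed as a small validation sketch. The helper below is hypothetical (not a VMware API); it simply checks that the active set of an uplink port group is either all LACP-bundled vmnics or all regular ones:

```python
# Sketch of the constraint above: LACP-bundled vmnics and "regular"
# vmnics cannot both be active in the same uplink port group.
# The helper function is illustrative only, not a VMware API.

def valid_active_uplinks(active_vmnics, lacp_vmnics):
    """Return True if the active set is all-LACP or all-regular."""
    lacp = set(lacp_vmnics)
    in_lag = [nic for nic in active_vmnics if nic in lacp]
    outside_lag = [nic for nic in active_vmnics if nic not in lacp]
    return not (in_lag and outside_lag)

# Both active vmnics belong to the LAG: allowed.
print(valid_active_uplinks(["vmnic0", "vmnic1"], ["vmnic0", "vmnic1"]))  # True

# vmnic0 is in the LAG but vmnic2 is a regular vmnic: not allowed.
print(valid_active_uplinks(["vmnic0", "vmnic2"], ["vmnic0", "vmnic1"]))  # False
```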


ESXi is the hypervisor made by VMware for its vSphere virtualization solution.

There are two versions: the free version and the commercial version.

The free version of ESXi:

  • does not support more than 32 GB of RAM per virtualization server.
  • does not support more than 200 virtual machines

It is only meant for testing and home lab purposes.

Recent versions of ESXi (starting from vSphere 6.0, I believe) provide an integrated web server that allows us to log in to the ESXi directly through a web browser, instead of using the vSphere clients.

A group of ESXi hypervisor hosts managed by a vCenter is called a vSphere cluster.



Network traffic in a vSphere environment flows from a virtual machine to the physical network basically in this direction:

VM –> vNIC –> Port Group –> vSwitch –> Uplink/pNIC –> outside world.
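The chain above can be walked hop by hop with a trivial sketch. The hop names mirror the text; the function itself is purely illustrative:

```python
# The outbound traffic path from a VM to the physical network, as an
# ordered list we can walk hop by hop. Purely illustrative; the hop
# names mirror the article's text, not any VMware API.

TRAFFIC_PATH = ["VM", "vNIC", "Port Group", "vSwitch",
                "Uplink/pNIC", "outside world"]

def next_hop(current):
    """Return the next hop a frame reaches on its way out, or None."""
    i = TRAFFIC_PATH.index(current)
    return TRAFFIC_PATH[i + 1] if i + 1 < len(TRAFFIC_PATH) else None

print(next_hop("vNIC"))      # Port Group
print(next_hop("vSwitch"))   # Uplink/pNIC
```

Note that traffic between two vNICs in the same port group short-circuits this path at the vSwitch and never reaches the uplink.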


vCenter performs the following functionalities:

  • centralises management of ESXi hosts.
  • creates baselines and VM templates.
  • collects alarms and logs.
  • provides an API to interact with third party applications such as ACI.
  • manages compute, storage and network resources for the virtual machine needs.
  • integrates the Platform Service Controller component.
  • with each newer version of vSphere, allows for a higher memory allocation threshold per virtual machine.

vCenter is sold either separately, or integrated in one of the vSphere kits.

The recommended placement of a vCenter instance is on one of the managed ESXi hosts.

You can apply DRS to the vCenter virtual machine just like any other VM. When vCenter experiences a failure, the managed virtualization hosts and their virtual machines continue to operate normally.

vCenter installation notes

vCenter can be installed:

  • as a software component on a Microsoft Windows virtual machine, or
  • as a virtual machine on one of the to-be-managed ESXi hypervisors. In this case we call it the vCenter Server Appliance (vCSA).

Among the parameters we need to pay attention to during a vCenter installation are:

  • System Name: the name of the vCenter server. It cannot be changed later, and it must match an entry in the DNS records, which means you must create an entry on the DNS server.
  • Site Name: once defined, it cannot be changed later.
  • choice of database type: vCenter needs a database to store all its data, and this database can be either internal or external. In other words, as we install a vCenter server, we can either use the internal database that comes with the installation file – which happens to be Postgres – or connect our vCenter to an external third-party database such as MS SQL or Oracle. The biggest differentiator, however, is that the internal database supports only 20 ESXi hosts and 200 virtual machines.

After the installation is finished, we define one or more datacenters under vCenter. A datacenter in vSphere is a logical construct that contains one or more ESXi hosts or vSphere clusters, or a combination of both.


In this example we have two vCenter servers, one named westcoast-vcenter and one named eastcoast-vcenter. The first vCenter defines a datacenter named westcoast-datacenter, and the second defines eastcoast-datacenter. Each datacenter contains one ESXi cluster, and each cluster holds one ESXi host (maybe later someone will add more). Each hypervisor hosts one virtual machine: MyVM1 and MyVM2 respectively.

vCenter on Windows

It is possible to install vCenter server as an application on top of a Windows virtual machine. For this we need:

  • a Windows 2012 R2 (minimum) virtual machine
  • the vCenter for Windows ISO file, downloadable from VMware

Once you download the vCenter for Windows file, copy it to the Windows virtual machine. From there, double-click the ISO file and it will mount itself. Go into the directory and launch the installation script.

Deploying vCenter on a Windows system is deprecated by VMware.

vCenter Server Appliance

The vCenter Server Appliance, or vCSA, is a Linux-based virtual appliance that is available for download on the VMware website, just like the vCenter for Windows file. The vCSA can, however, support thousands of ESXi hosts!

vSphere complementary resources

On the one side, vSphere requires access to the following resources:

  • compute resources: which are provided by the physical server
  • storage resources: either local on the physical server or remote such as SAN or NAS
  • network services: provided by the data center switches and routers.

On the other side, vSphere provides the following services:

  • VMware infrastructure services
  • VMware application services

vSphere clients

A vSphere client allows us to connect to the vCenter server or to the ESXi host and administer it graphically. VMware provides the vSphere client in two flavours:

  • vSphere client for Windows: this version is based on C#. VMware recommends moving away from it and getting familiar with the web version; in fact, starting from vSphere 6.5, most features are removed from the Windows client but remain available in the web client. After you connect to the hypervisor, several tabs let you examine and configure resources.
  • vSphere web client: this is the VMware-recommended client going forward.




That was a quick overview of the basic vSphere concepts that, in my opinion, every network engineer should know. This article was not meant to be an extensive guide to installing or operating vSphere, because I am not (yet) working with VMware vSphere regularly.

References and further reading

  • Pluralsight, vSphere 6.5 Foundations: Configure vSphere Networking, by David Davis
  • VMware Infrastructure 3 for Dummies
