This is a glossary of networking and virtual networking terminology related to System Center 2012 – Virtual Machine Manager (VMM).
Searching this page:
To search this page, press CTRL+F to bring up a search window and enter the term for which you would like information. The list of terms, definitions, and information links (sorted alphabetically) is included below.
Data Center Bridging (DCB) – DCB is a suite of IEEE standards that enable Converged Fabrics in the data center, where storage, data networking, cluster IPC, and management traffic all share the same Ethernet network infrastructure. DCB provides hardware-based bandwidth allocation to a specific type of traffic and enhances Ethernet transport reliability with the use of priority-based flow control. Hardware-based bandwidth allocation is essential if traffic bypasses the operating system and is offloaded to a converged network adapter, which might support Internet Small Computer System Interface (iSCSI), Remote Direct Memory Access (RDMA) over Converged Ethernet, or Fibre Channel over Ethernet (FCoE). Priority-based flow control is essential if the upper-layer protocol, such as Fibre Channel, assumes a lossless underlying transport.
For more info, see “Data Center Bridging Overview” at http://technet.microsoft.com/en-us/library/hh849179.aspx
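For example, the following is a rough, illustrative sketch (not a recommendation) of configuring DCB on a Windows Server 2012 host with a DCB-capable adapter, using the in-box QoS cmdlets; the policy name, priority value, and bandwidth share are placeholder values.
# Install the Data Center Bridging feature (assumes a DCB-capable NIC and switch)
Install-WindowsFeature Data-Center-Bridging
# Tag SMB Direct traffic (port 445) with 802.1p priority 3 (example value)
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Reserve roughly half the bandwidth for that priority and turn on priority-based flow control
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetQosFlowControl -Priority 3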
Data Center Transmission Control Protocol (DCTCP) – Mixing diverse application workloads on the same network introduces a variety of traffic flows: some require small, predictable latency, while others require large, sustained throughput. Current Transmission Control Protocol (TCP) congestion control mechanisms do not provide sufficiently fine-grained congestion control, which results in queue buildup in network switches, leading to delays, latency fluctuations, and timeouts.
Whereas the standard TCP congestion control algorithm is only able to detect the presence of congestion, Windows Server 2012 addresses this problem through DCTCP, which uses Explicit Congestion Notification (ECN) to estimate the extent of the congestion at the source, and reduce the sending rate only to the extent of the congestion. This provides a more detailed control over network traffic, allowing DCTCP to operate with very low buffer occupancies while still achieving high throughput.
For more information, see “Data Center Transmission Control Protocol (DCTCP)” at http://technet.microsoft.com/en-us/library/hh997028.aspx and in this detailed technical whitepaper from Microsoft Research at http://research.microsoft.com/pubs/121386/dctcp-public.pdf
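As a hedged sketch, on Windows Server 2012 you can inspect which congestion provider each built-in TCP setting template uses and, on that release, select DCTCP for a template; the template name below is one of the built-in names, and behavior differs in later Windows versions.
# Show the congestion provider (for example, DCTCP) used by each TCP setting template
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider
# Example only: use DCTCP for the DatacenterCustom template
Set-NetTCPSetting -SettingName DatacenterCustom -CongestionProvider DCTCP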
Extensible Virtual Switch - The Hyper‑V Extensible Switch is a layer‑2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network. It provides improved multitenant security for customers on a shared infrastructure as a service (IaaS) cloud. The Hyper‑V Extensible Switch architecture in Windows Server 2012 is an open framework that allows third parties to add new functionality such as monitoring, forwarding, and filtering into the virtual switch. Extensions are implemented by using one of two public platforms: Network Device Interface Specification (NDIS) filter drivers and Windows Filtering Platform (WFP) callout drivers.
For more information, see “Hyper-V Extensible Switch Overview” at http://technet.microsoft.com/en-us/library/hh831823.aspx
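As a quick illustration, the Hyper-V cmdlets below list and enable extensions on a virtual switch; the switch name "External" is a placeholder.
# List the capture, filtering, and forwarding extensions bound to a virtual switch
Get-VMSwitchExtension -VMSwitchName "External" | Select-Object Name, ExtensionType, Enabled
# Enable an extension by name (example: the in-box WFP filtering extension)
Enable-VMSwitchExtension -VMSwitchName "External" -Name "Microsoft Windows Filtering Platform"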
Gateway - Gateways connect a VM network to other networks. This configuration requires that, in the logical network that the VM network uses for a foundation, the network virtualization option is selected.
Before you configure a gateway, see “Prerequisites for Gateways” at
http://technet.microsoft.com/en-us/library/jj721575.aspx#BKMK_gateways.
IP Address Pool - In SC 2012 VMM, static IP address pools make it possible for VMM to automatically allocate static IP addresses to Windows-based virtual machines that are running on any managed Hyper-V, VMware ESX or Citrix XenServer host. VMM can automatically assign static IP addresses from the pool to stand-alone virtual machines, to virtual machines that are deployed as part of a service, and to physical computers when you use VMM to deploy them as Hyper-V hosts. Additionally, when you create a static IP address pool, you can define a reserved range of IP addresses that can be assigned to load balancers as virtual IP (VIP) addresses. VMM automatically assigns a virtual IP address to a load balancer during the deployment of a load-balanced service tier.
IP address pools support both IPv4 and IPv6 addresses. However, you cannot mix IPv4 and IPv6 addresses in the same IP address pool.
For more information, see "How to Create IP Address Pools for Logical Networks in VMM" at http://technet.microsoft.com/en-us/library/gg610590.aspx
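A minimal VMM PowerShell sketch of creating a static IP address pool, assuming a logical network named "Backend" that already has a network site defined for subnet 192.168.10.0/24; all names and address ranges are placeholders.
# Find the logical network and its network site (logical network definition)
$ln  = Get-SCLogicalNetwork -Name "Backend"
$lnd = Get-SCLogicalNetworkDefinition -LogicalNetwork $ln
# Create a static IP address pool that VMM can allocate addresses from
New-SCStaticIPAddressPool -Name "Backend-Pool" -LogicalNetworkDefinition $lnd -Subnet "192.168.10.0/24" -IPAddressRangeStart "192.168.10.10" -IPAddressRangeEnd "192.168.10.99"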
IPsec task offload - With this type of offloading, some or all of the computational work that IPsec requires is shifted from the computer’s CPU to a dedicated processor on the network adapter. IPsec task offload is a technology built into the Windows operating system that supports network adapters equipped with hardware that reduces the CPU load by performing this computationally intensive work. By moving this workload from the main computer’s CPU to a dedicated processor on the network adapter, you can make dramatically better use of the bandwidth that is available to your IPsec-enabled computer.
For more information, see “Improving Network Performance by Using IPSec Task Offload” at http://technet.microsoft.com/en-us/library/dd125367(v=ws.10).aspx
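To check whether an adapter exposes IPsec task offload and to turn it on, a short sketch with the NetAdapter cmdlets follows; "Ethernet" is a placeholder adapter name and the hardware must support the capability.
# Show IPsec offload capability and current state for all adapters
Get-NetAdapterIPsecOffload
# Enable IPsec task offload on one adapter
Enable-NetAdapterIPsecOffload -Name "Ethernet"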
Kernel Mode Remote Direct Memory Access (kRDMA) - Kernel mode Remote Direct Memory Access (RDMA) is an accelerated input-output (I/O) delivery model that allows application software to bypass most software layers to communicate directly with the computer hardware, which improves application performance and reduces delay.
For more information, see ‘kRDMA Reference’ at http://msdn.microsoft.com/library/hh463974.aspx
Large Send Offload (LSO) – LSO is a technique for increasing outbound throughput of high-bandwidth network connections by reducing CPU overhead. It works by queuing up large buffers and letting the network interface card (NIC) split them into separate packets. The technique is also called TCP segmentation offload (TSO) when applied to TCP.
Some sources on the Internet may also refer to LSO as ‘large segment offload’ or ‘TCP segment offload’.
For more information, see “Offloading the Segmentation of Large TCP Packets” at http://msdn.microsoft.com/en-us/library/windows/hardware/ff568840(v=vs.85).aspx or “Large Segment Offload” at http://en.wikipedia.org/wiki/Large_segment_offload
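A small sketch of inspecting and enabling LSO with the NetAdapter cmdlets; the adapter name is a placeholder.
# Show whether Large Send Offload is enabled for IPv4/IPv6 on each adapter
Get-NetAdapterLso
# Enable LSO for both IP versions on a specific adapter
Enable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6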
Load Balancer – By adding a load balancer to VMM, you can load balance requests to the virtual machines that make up a service tier. You can use Microsoft Network Load Balancing (NLB) or you can add supported hardware load balancers through the VMM console. NLB is included as an available load balancer when you install VMM. NLB uses round robin as the load-balancing method.
To add supported hardware load balancers, you must install a configuration provider that is available from the load balancer manufacturer. The configuration provider is a plug-in to VMM that translates VMM PowerShell commands to API calls that are specific to a load balancer manufacturer and model. Before you can use a hardware load balancer or NLB, you must create associated virtual IP (VIP) templates.
For more information, see “Configuring Load Balancing in VMM Overview” at http://technet.microsoft.com/en-us/library/jj721573.aspx
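For the in-box NLB option, the following rough sketch (host-side, not VMM-specific) creates a two-node NLB cluster with the NetworkLoadBalancingClusters module; the interface, cluster name, node name, and IP address are placeholders.
# Install NLB and create a cluster on the first node
Install-WindowsFeature NLB -IncludeManagementTools
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "WebFarm" -ClusterPrimaryIP 192.168.1.200
# Join a second node to the cluster
Add-NlbClusterNode -NewNodeName "WEB02" -NewNodeInterface "Ethernet"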
Logical Switch – In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a logical switch is a single logical representation of the virtual switch instances that exist across a group of hosts that share common settings.
A logical switch can be described as a central container for virtual switch settings that enables consistent application of port profiles and switch extensions across the data center. If desired, a logical switch can be configured to automate NIC team creation.
For more information, see "How to Create a Logical Switch in VMM SP1" at
http://technet.microsoft.com/en-us/library/jj628154.aspx
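A bare-bones, illustrative VMM sketch follows; New-SCLogicalSwitch accepts additional parameters (for uplink port profile sets, extensions, SR-IOV, and so on) that are omitted here, and the names are placeholders.
# Create a logical switch; uplink port profiles, port classifications, and extensions
# would normally be attached before deploying it to hosts
New-SCLogicalSwitch -Name "Production-Switch" -Description "Common switch settings for production hosts"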
Logical Network - In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a logical network is used to organize and simplify network assignments for hosts, virtual machines, and services. Logical networks can be used to describe networks with different purposes, create traffic isolation, and even support traffic with different types of service-level agreements. Because logical networks represent an abstraction of the underlying physical network infrastructure, they enable you to model the network based on business needs and connectivity properties.
Stated more simply, a logical network models the physical network. It groups like subnets and VLANs into named objects that can be scoped to a network site, and it is the container for fabric static IP address pools. Logical networks are exposed to tenants through a VM network.
For more information, see “How to Create a Logical Network in VMM” at http://technet.microsoft.com/en-us/library/gg610588.aspx
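A hedged VMM PowerShell sketch of modeling a logical network with a single network site (one subnet/VLAN pair); all names, subnets, and VLAN IDs are invented for illustration.
# Create the logical network object
$ln = New-SCLogicalNetwork -Name "Backend"
# Define a subnet/VLAN pair and scope it to a host group as a network site
$sv = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 10
New-SCLogicalNetworkDefinition -Name "Backend - Site1" -LogicalNetwork $ln -SubnetVLan $sv -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")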
MAC Address Pool – In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a MAC address pool is a range of MAC addresses from which VMM assigns addresses to virtual network adapters; VMM includes default pools, and you can also create custom pools. MAC addresses are checked out of the pool when a virtual machine is created, assigned before the virtual machine boots, and returned to the pool when the virtual machine is deleted.
For more information, see “How to Create Custom MAC Address Pools in VMM” at http://technet.microsoft.com/en-us/library/gg610632.aspx
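An illustrative sketch of creating a custom MAC address pool scoped to a host group; the names and address range are placeholders and should be checked against the VMM cmdlet help before use.
# Create a custom MAC address pool for a specific host group
$hg = Get-SCVMHostGroup -Name "All Hosts"
New-SCMACAddressPool -Name "Custom MAC Pool" -VMHostGroup $hg -MACAddressRangeStart "00:1D:D8:B7:1C:00" -MACAddressRangeEnd "00:1D:D8:B7:1F:FF"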
Native Port Profile – In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), port profiles act as containers for the properties or capabilities that you want your network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you can specify the capabilities in port profiles, which you can then apply to the appropriate adapters.
A native port profile for uplinks (also called an uplink port profile) specifies which logical networks can connect through a particular physical network adapter.
For more information, see “How to Create a Native Port Profile for Virtual Network Adapters in System Center 2012 SP1” at http://technet.microsoft.com/en-us/library/jj628155.aspx
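An illustrative-only sketch of creating an uplink (native) port profile tied to an existing network site; names are placeholders, and parameters for teaming mode and network virtualization are omitted.
# Create an uplink port profile that allows connectivity to the "Backend - Site1" network site
$lnd = Get-SCLogicalNetworkDefinition -Name "Backend - Site1"
New-SCNativeUplinkPortProfile -Name "Rack1-Uplink" -LogicalNetworkDefinition $lnd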
Network Sites (aka Logical Network Definitions). In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a network site associates one or more subnets, VLANs, and subnet/VLAN pairs with a logical network. It also enables you to define the host groups to which the network site is available. Network sites are sometimes referred to as logical network definitions, for example, in the VMM command shell.
For more information on network sites, see “Configuring Logical Networking in VMM Overview” at http://technet.microsoft.com/en-us/library/jj721568.aspx
Network Virtualization aka Software Defined Networking (SDN) - Hyper-V network virtualization provides the concept of a virtual network that is independent of the underlying physical network. With this concept of virtual networks, which are composed of one or more virtual subnets, the exact physical location of an IP subnet is decoupled from the virtual network topology. As a result, customers can easily move their subnets to the cloud while preserving their existing IP addresses and topology in the cloud, so that existing services continue to work unaware of the physical location of the subnets. Hyper-V network virtualization in Windows Server 2012 provides policy-based, software-controlled network virtualization that reduces the management overhead that enterprises face when they expand dedicated infrastructure-as-a-service (IaaS) clouds. In addition, it provides cloud hosting providers with better flexibility and scalability for managing virtual machines to achieve higher resource utilization.
For more information on SDN, see http://social.technet.microsoft.com/wiki/contents/articles/16090.introduction-to-software-defined-networking-sdn.aspx
NIC Teaming - Also known as load balancing and failover (LBFO), NIC Teaming allows multiple network adapters to be placed into a team for the purposes of bandwidth aggregation and traffic failover (to maintain connectivity in the event of a network component failure). This feature has long been available from network adapter vendors; however, NIC Teaming is now included as an in-box feature with Windows Server 2012.
NIC Teaming is compatible with all networking capabilities in Windows Server 2012 with five exceptions: SR-IOV, RDMA, native host QoS, TCP chimney, and 802.1X authentication. From a scalability perspective, on Windows Server 2012, a maximum of 32 network adapters can be added to a single team, and an unlimited number of teams can be created on a single host. NIC teaming type is also important, as some modes are switch dependent.
For more info, see “NIC Teaming Overview” at http://technet.microsoft.com/en-us/library/hh831648.aspx
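A minimal Windows Server 2012 sketch of creating a team from two adapters; adapter and team names are placeholders, and the teaming mode and load-balancing algorithm should match your switch configuration.
# Create a switch-independent team from two physical NICs
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
# Verify team and member status
Get-NetLbfoTeam
Get-NetLbfoTeamMember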
Port Classification - In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), port classifications provide global names for identifying different types of virtual network adapter port profiles. A port classification can be used across multiple logical switches while the settings for the port classification remain specific to each logical switch. For example, you might create one port classification named FAST to identify ports that are configured to have more bandwidth, and another port classification named SLOW to identify ports that are configured to have less bandwidth.
Port classifications can be described as a reusable container for port profile settings, Hyper-V switch port settings, and extension port profiles. Port classifications are exposed to tenants through the configuration of a cloud in VMM.
For more information about port profiles, port classifications, and logical switches, see “How to Create a Port Classification in System Center 2012 SP1” at http://technet.microsoft.com/en-us/library/jj628153.aspx and “Configuring Ports and Switches for VM Networks in System Center 2012 SP1” at http://technet.microsoft.com/en-us/library/jj721570.aspx
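Following the FAST/SLOW example above, a minimal VMM sketch; the classification is just a named label that is later bound to port profiles in a logical switch.
# Create global names (classifications) that tenants will see when selecting port settings
New-SCPortClassification -Name "FAST" -Description "Ports configured for higher bandwidth"
New-SCPortClassification -Name "SLOW" -Description "Ports configured for lower bandwidth"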
Port Mirroring - With Port Mirroring, traffic sent to or from a Hyper-V Virtual Switch port is copied and sent to a mirror port. With Hyper-V Virtual Switch port mirroring, you can select the switch ports that are monitored as well as the switch port that receives copies of all the traffic.
There are a range of applications for port mirroring, and many independent software vendors leverage mirroring to implement extensions to Hyper-V networking for performance management, security analysis, and network diagnostics.
For more information, see "Port Mirroring" at
http://technet.microsoft.com/en-us/library/jj679878.aspx#bkmk_portmirror
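A short Hyper-V sketch of mirroring one VM's traffic to a monitoring VM attached to the same virtual switch; the VM names are placeholders.
# Copy traffic sent to and from WebVM ...
Set-VMNetworkAdapter -VMName "WebVM" -PortMirroring Source
# ... to the network adapter of a monitoring VM on the same switch
Set-VMNetworkAdapter -VMName "MonitorVM" -PortMirroring Destination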
Port Virtual Local Area Network (PVLAN) – Network isolation through the use of a VLAN is related to security, but unlike IPsec, which encrypts the network traffic, isolation via a VLAN logically segments the traffic. VLANs suffer from scalability issues: a VLAN ID is a 12-bit number, and VLANs are in the range 1-4095. In a multi-tenant data center, isolating each tenant by using a VLAN makes configuration complex and difficult. These scalability issues of VLANs are solved when you deploy Hyper-V Network Virtualization, where each tenant can have multiple virtual subnets. A simpler solution, when each tenant has only a single VM, is to use PVLAN.
PVLAN addresses some of the scalability issues of VLANs. PVLAN is a switch port property. With PVLAN there are two VLAN IDs, a primary VLAN ID and a secondary VLAN ID, and a PVLAN port may be in one of three modes: isolated, promiscuous, or community.
PVLAN can be used to create an environment where VMs may only interact with the Internet and not have visibility into other VMs’ network traffic. To accomplish this, put all VMs (actually, their Hyper-V switch ports) into the same PVLAN in isolated mode. Using only two VLAN IDs, primary and secondary, all VMs are isolated from each other.
In addition to PVLAN, the Hyper-V Virtual Switch also provides support for VLAN trunk mode. Trunk mode provides network services or network appliances on a VM with the ability to see traffic from multiple VLANs. In trunk mode, a switch port receives traffic from all VLANs that you configure in an allowed VLAN list. You can also configure a switch port that is connected to a VM, but is not bound to the underlying NIC, for trunk mode.
For more information, see “Port Virtual Local Area Network (PVLAN) and Trunk Mode” at
http://technet.microsoft.com/en-us/library/jj679878.aspx#bkmk_pvlan
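A hedged sketch of the isolated-PVLAN scenario described above, plus a trunk-mode example; the VM names and VLAN IDs are arbitrary.
# Put a VM's switch port into PVLAN isolated mode (primary VLAN 10, secondary VLAN 200)
Set-VMNetworkAdapterVlan -VMName "Tenant-VM1" -Isolated -PrimaryVlanId 10 -SecondaryVlanId 200
# Configure a VM-facing port in trunk mode so an appliance VM can see several VLANs
Set-VMNetworkAdapterVlan -VMName "Appliance-VM" -Trunk -AllowedVlanIdList "1-100" -NativeVlanId 10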
Receive Segment Coalescing (RSC) - RSC is a stateless offload technology that helps reduce CPU utilization for network processing on the receive side by offloading tasks from the CPU to an RSC-capable network adapter. CPU saturation due to networking-related processing can limit server scalability, which in turn reduces the transaction rate, raw throughput, and efficiency. RSC enables an RSC-capable network interface card to coalesce multiple received TCP segments into a single larger segment before indicating them to the network stack.
The network interface card performs this coalescing based on rules that are defined by the network stack, subject to the hardware capabilities of the specific network adapter. The ability to receive multiple TCP segments as one large segment significantly reduces the per-packet processing overhead of the network stack. Because of this, RSC significantly improves the receive-side performance of the operating system (by reducing CPU overhead) under network I/O intensive workloads.
Note: RSC is supported in hosts if any of the team members support it. RSC is not supported through Hyper-V switches.
For more information, see “Receive Segment Coalescing” at http://technet.microsoft.com/en-us/library/hh997024.aspx
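RSC support and state can be checked per adapter with the NetAdapter cmdlets; the adapter name is a placeholder.
# Show whether RSC is supported and enabled for IPv4 and IPv6 on each adapter
Get-NetAdapterRsc
# Enable RSC on a specific adapter
Enable-NetAdapterRsc -Name "Ethernet"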
Receive-side scaling (RSS) – RSS enables network adapters to distribute the kernel-mode network processing load across multiple processor cores in multi-core computers. The distribution of this processing makes it possible to support higher network traffic loads than would be possible if only a single core were used. In Windows Server 2012, RSS has been enhanced, including support for computers with more than sixty-four processors. RSS achieves this by spreading the network processing load across many processors and actively load balancing TCP-terminated traffic.
In addition, RSS in Windows Server 2012 provides automatic load balancing capabilities for non-TCP traffic (including UDP unicast, multicast, and IP-forwarded traffic), improved RSS diagnostics and management capabilities, improved scalability across Non-Uniform Memory Access (NUMA) nodes, and improved resource utilization and resource partitioning through facilitation of kernel-scheduler and networking stack alignment.
For more information, see “Receive-side Scaling” at http://technet.microsoft.com/en-us/library/hh997036.aspx
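A small sketch of inspecting and constraining RSS on one adapter; the processor numbers are example values that would normally be chosen to align with the NUMA topology.
# Show RSS capability, indirection table, and processor usage for an adapter
Get-NetAdapterRss -Name "Ethernet"
# Restrict RSS for this adapter to a range of processors (example values only)
Set-NetAdapterRss -Name "Ethernet" -BaseProcessorNumber 2 -MaxProcessors 8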
Remote Direct Memory Access (RDMA) - RDMA is a kernel bypass technique which makes it possible to transfer large amounts of data quite rapidly. Because the transfer is performed by the DMA engine on the network adapter, the CPU is not used for the memory movement, which frees the CPU to perform other work.
For more information, see "NetworkDirect" at http://technet.microsoft.com/en-us/library/hh997033.aspx
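As a quick check that an adapter exposes RDMA and that SMB will use it (SMB Direct), the following sketch can help; output property names may vary slightly by version.
# List adapters and whether RDMA is enabled on them
Get-NetAdapterRdma
# Confirm that the SMB client sees the interface as RDMA-capable
Get-SmbClientNetworkInterface | Select-Object InterfaceIndex, RdmaCapable, RssCapable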
Single Root I/O Virtualization (SR-IOV) - SR-IOV allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions. SR-IOV includes both a PCIe Physical Function (PF) and one or more PCIe Virtual Functions (VFs). Each PF and VF is assigned a unique PCI Express Requester ID (RID), which allows an I/O memory management unit (IOMMU) to differentiate between the traffic streams of the different functions.
SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack. Because the VF is assigned to a child partition, the network traffic flows directly between the VF and child partition. As a result, the I/O overhead in the software emulation layer is diminished and achieves network performance that is nearly the same performance as in non-virtualized environments.
Note: Because SR-IOV data bypasses the host operating system stack, network adapters that expose the SR-IOV feature will no longer expose the feature if they are a member of a team. Teams can be created in virtual machines to team SR-IOV virtual functions.
For more information, see “How to Create a Logical Switch in System Center 2012 SP1” as well as the eight-part overview at http://social.technet.microsoft.com/wiki/contents/articles/9296.hyper-v-sr-iov-overview.aspx
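A hedged Hyper-V sketch: SR-IOV must be enabled when the external virtual switch is created and then weighted on for the VM's network adapter; the names are placeholders and the platform (BIOS, chipset, NIC) must support SR-IOV.
# Create an external switch with SR-IOV enabled (this cannot be turned on later)
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true
# Allow a VM's network adapter to use a virtual function (VF)
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 50
Get-VMNetworkAdapter -VMName "VM01" | Select-Object Name, IovWeight, Status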
TCP Chimney Offload – The TCP chimney architecture offloads the data transfer portion of TCP protocol processing for one or more TCP connections to a network adapter. This architecture provides a direct connection, called a chimney, between applications and an offload-capable network adapter.
The chimney offload architecture reduces host network processing for network-intensive applications, so networked applications scale more efficiently and end-to-end latency is reduced. In addition, fewer servers are needed to host an application, and servers are able to use the full Ethernet bandwidth.
Note: Virtual machine chimney, also called TCP offload, has been removed in Windows Server 2012. The TCP chimney will not be available to guest operating systems.
Note: TCP chimney offload is not supported through a software-based team in Windows Server 2012. If software-based NIC teaming is not used, however, you can leave it enabled.
For more information, see “High Speed Networking” at http://technet.microsoft.com/en-us/network/dd277646.aspx
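TCP chimney offload is a host-wide setting; a short sketch of checking and changing it follows (example only, since the appropriate value depends on the workload and NIC support).
# Show global offload settings, including the TCP chimney state
Get-NetOffloadGlobalSetting
# Disable TCP chimney offload host-wide (other values: Enabled, Automatic)
Set-NetOffloadGlobalSetting -Chimney Disabled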
Virtual Machine Queue (VMQ) – Virtual Machine Queue (VMQ) allows the host’s network adapter to pass DMA packets directly into individual virtual machine memory stacks. Each virtual machine device buffer is assigned a VMQ, which avoids needless packet copies and route lookups in the virtual switch. Essentially, VMQ allows the host’s single network adapter to appear as multiple network adapters to the virtual machines, allowing each virtual machine its own dedicated network adapter. The result is less data in the host’s buffers and an overall performance improvement to I/O operations.
A VMQ-capable network adaptor classifies incoming frames to be routed to a receive queue based on filters which associate the queue with a virtual machine’s virtual network adaptor. These hardware queues may be affinitized to different CPUs thus allowing for receive scaling on a per-virtual machine network adaptor basis.
Windows Server 2012 dynamically distributes incoming network traffic processing to host processors, based on processor use and network load. In times of heavy network load, Dynamic VMQ (D-VMQ) automatically uses more processors. In times of light network load, Dynamic VMQ relinquishes those same processors. In fact, support for Static VMQ has been removed in Windows Server 2012. Drivers using NDIS 6.3 will automatically access the Dynamic VMQ capabilities that are new in Windows Server 2012.
For more information, see “Receive-side Scaling and Dynamic VMQ” at http://technet.microsoft.com/en-us/library/jj679878.aspx#bkmk_vmq
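A sketch of checking VMQ on the host and weighting it for one VM; the adapter and VM names and the processor values are placeholders.
# Show which adapters support VMQ and how their queues map to processors
Get-NetAdapterVmq
# Constrain the processors that VMQ may use on one adapter (example values)
Set-NetAdapterVmq -Name "Ethernet 2" -BaseProcessorNumber 2 -MaxProcessors 4
# Make a VM's virtual network adapter eligible for a hardware queue
Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100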
Virtual IP (VIP) Template – In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a VIP template contains load-balancer-related configuration settings for a specific type of network traffic. For example, you can create a template that specifies the load-balancing behavior for HTTPS traffic on a specific load balancer by manufacturer and model. These templates represent the best practices from a load-balancer configuration standpoint.
For more information, see “How to Create VIP Templates for Hardware Load Balancers in VMM” at http://technet.microsoft.com/en-us/library/gg610569.aspx
Virtual Machine (VM) Network - In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a VM Network defines a routing domain for one or more virtual subnets.
For more information, see “Configuring VM Networks in VMM in System Center 2012 SP1 Illustrated Overview” at http://technet.microsoft.com/en-us/library/jj983727.aspx
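A hedged VMM sketch of creating a VM network that uses network virtualization, with one virtual subnet, on top of an existing logical network; the names and subnet are placeholders.
# Create a VM network that uses Hyper-V network virtualization on the "Backend" logical network
$ln  = Get-SCLogicalNetwork -Name "Backend"
$vmn = New-SCVMNetwork -Name "TenantA" -LogicalNetwork $ln -IsolationType "WindowsNetworkVirtualization"
# Add a virtual subnet; routing between virtual subnets happens within the VM network
$sv = New-SCSubnetVLan -Subnet "10.0.0.0/24"
New-SCVMSubnet -Name "TenantA-Subnet1" -VMNetwork $vmn -SubnetVLan $sv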
Virtual Network Adapter Profile - In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a virtual network adapter profile (also called a native port profile for virtual network adapters) acts as a container for the properties or capabilities that you want virtual network adapters to have, such as offload, security, and bandwidth settings. As with other port profiles, you specify the capabilities once in the profile and then apply it to the appropriate virtual network adapters, rather than configuring each adapter individually.
For more information, see “Configuring VM Networks in VMM in System Center 2012 SP1 Illustrated Overview” at http://technet.microsoft.com/en-us/library/jj983727.aspx
Virtual Switch Extension Manager - In Virtual Machine Manager (VMM) in System Center 2012 Service Pack 1 (SP1), a virtual switch extension manager makes it possible for you to use a vendor network-management console and the VMM management server together. You can configure settings or capabilities in the vendor network-management console—which is also known as the management console for a forwarding extension—and then use the console and the VMM management server in a coordinated way.
To do this, you must first install provider software (which is provided by the vendor) on the VMM management server. Then you must add the virtual switch extension manager to VMM, which tells the VMM management server to connect to the vendor network-management database and to import network settings and capabilities from that database. The result is that you can see those settings and capabilities, and all your other settings and capabilities, together in VMM.
For more information, see “Common Scenarios for Network in Virtual Machine Manager” at http://technet.microsoft.com/en-us/library/jj870823.aspx