Posts Tagged ‘Networking’

How to configure NIC Teaming with HP ProLiant and Cisco or ProCurve Switch Infrastructure?

August 6, 2011

I very often need to configure NIC teams on HP hardware connected to Cisco or ProCurve networking infrastructure, so I would like to share a general overview of HP's NIC teaming capabilities, the general teaming algorithms, and especially how to configure the Cisco or ProCurve switches.

My personally preferred teaming mode is 802.3ad, as it provides the best combination of redundancy and throughput and is the current industry standard, well understood by enterprise switches.


HP generally provides the following NIC teaming capabilities and algorithms:

1.1 Network Fault Tolerance (NFT) only – Network Fault Tolerance (NFT) is the foundation of HP ProLiant Network Adapter Teaming. In NFT mode, two to eight ports are teamed together to operate as a single virtual network adapter. However, only one teamed port – the primary teamed port – is used for both transmit and receive communication with the server. The remaining adapters are considered stand-by (or secondary) adapters and are referred to as non-primary teamed ports. Non-primary teamed ports remain idle unless the primary teamed port fails. All teamed ports may transmit and receive heartbeats, including non-primary adapters.

The fault tolerance that NFT provides is the one feature found in every other team type; every team type is built on the NFT foundation.

1.2 Network Fault Tolerance (NFT) with Preference Order – Network Fault Tolerance with Preference Order is identical in almost every way to NFT; the only difference is that this team type allows the system administrator (SA) to prioritize the order in which teamed ports should become the primary teamed port. This ability is important in environments where one or more teamed ports are preferred over other ports in the same team. The need for ranking certain teamed ports higher than others can be a result of unequal speeds, better adapter capabilities (for example, higher receive/transmit descriptors or buffers, interrupt coalescence, and so on), or a preference for the team’s primary port to be located on a specific switch.
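To make this concrete, here is a small Python sketch of how a primary teamed port could be selected for NFT and NFT with Preference Order. The port attributes are hypothetical; this is an illustration, not HP's actual driver logic:

def select_primary(ports):
    """Pick the primary teamed port (sketch of NFT / NFT with Preference Order).

    'ports' is a list of hypothetical objects with:
      healthy    -- bool, derived from link state and heartbeats
      preference -- int, higher is preferred; equal values model plain NFT
    """
    healthy = [p for p in ports if p.healthy]
    if not healthy:
        raise RuntimeError("all teamed ports have failed")
    # The highest-ranked healthy port becomes primary; all other healthy
    # ports stay idle as non-primary (stand-by) ports.
    return max(healthy, key=lambda p: p.preference)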

1.3 Transmit Load Balancing (TLB) with Fault Tolerance – Transmit Load Balancing with Fault Tolerance (TLB) is a team type that allows the server to load balance its transmit traffic. TLB is switch independent and supports switch fault tolerance by allowing the teamed ports to be connected to more than one switch in the same LAN. With TLB, traffic received by the server is not load balanced. The primary teamed port is responsible for receiving all traffic destined for the server. In case of a failure of the primary teamed port, the NFT mechanism ensures connectivity to the server is preserved by selecting another teamed port to assume the role.

1.4 Transmit Load Balancing (TLB) with Fault Tolerance and Preference Order – Transmit Load Balancing with Fault Tolerance and Preference Order is identical in almost every way to TLB; the only difference is that this team type allows the SA to prioritize the order in which teamed ports should become the primary teamed port. This ability is important in environments where one or more teamed ports are preferred over other ports in the same team. The need for ranking certain teamed ports higher than others can be a result of unequal speeds, better adapter capabilities (for example, higher receive/transmit descriptors or buffers, interrupt coalescence, and so on), or a preference for the team’s primary port to be located on a specific switch.

1.5 Switch-assisted Load Balancing (SLB) with Fault Tolerance – Switch-assisted Load Balancing with Fault Tolerance (SLB) is a team type that allows full transmit and receive load balancing. SLB requires the use of a switch that supports some form of Port Trunking (for example, EtherChannel, MultiLink Trunking, and so on). SLB does not support switch redundancy because all ports in a team must be connected to the same switch. SLB is similar to the 802.3ad Dynamic team type.

1.6 802.3ad Dynamic with Fault Tolerance – 802.3ad Dynamic with Fault Tolerance is identical to SLB except that the switch must support the IEEE 802.3ad dynamic configuration protocol called Link Aggregation Control Protocol (LACP). In addition, the switch ports to which the teamed ports are connected must have LACP enabled. The main benefit of 802.3ad Dynamic is that the SA will not have to manually configure the port trunk on the switch. 802.3ad Dynamic is a standard feature of HP ProLiant Network Adapter Teaming.

1.7 Automatic (both) – The Automatic team type is not really an individual team type. Automatic teams decide whether to operate as an NFT, a TLB, or an 802.3ad Dynamic team. If all teamed ports are connected to a switch that supports the IEEE 802.3ad Link Aggregation Control Protocol (LACP) and all teamed ports are able to negotiate 802.3ad operation with the switch, then the team will choose to operate as an 802.3ad Dynamic team. However, if the switch does not support LACP, or if any ports in the team do not have successful LACP negotiation with the switch, the team will choose to operate as a TLB team. As network and server configurations change, the Automatic team type ensures that HP ProLiant servers intelligently choose between TLB and 802.3ad Dynamic to minimize server reconfiguration.
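As a rough illustration of the Automatic decision, consider this Python sketch. The port flag is hypothetical; the real negotiation happens inside the teaming driver:

def choose_team_type(teamed_ports, switch_supports_lacp):
    """Sketch of the Automatic team type decision."""
    if switch_supports_lacp and all(p.lacp_negotiated for p in teamed_ports):
        # Every teamed port completed 802.3ad/LACP negotiation with the switch.
        return "802.3ad Dynamic"  # full transmit and receive load balancing
    # No LACP on the switch, or at least one port failed negotiation.
    return "TLB"  # transmit-only load balancing, switch independent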

2. Load Balancing Algorithm

All load-balancing team types (TLB, SLB, and 802.3ad Dynamic) load balance transmitted frames. There is a fundamental decision that must be made when determining load balancing mechanisms: whether or not to preserve frame order.

Frame order preservation is important for several reasons: to prevent frame retransmission because frames arrive out of order, and to prevent performance-decreasing frame reordering within the OS protocol stacks. To avoid frames being transmitted out of order when communicating with a target network device, the team’s load-balancing algorithm assigns “outbound conversations” to a particular teamed port. In other words, if frame order preservation is desired, outbound load balancing by the team should be performed on a conversation-by-conversation basis rather than on a frame-by-frame basis. To accomplish this, the load-balancing device (either a team or a switch) needs information to identify conversations: destination MAC address, destination IP address, and TCP connection are used for this purpose.

It is very important to understand the differences between the load-balancing methods when deploying HP ProLiant Network Adapter Teaming in an environment that requires load balancing of routed Layer 3 traffic. Because the methods use conversations to load balance, the resulting traffic may not be distributed equally across all ports in the team. The benefits of maintaining frame order outweigh the lack of perfect traffic distribution across team members. Implementers of HP ProLiant Network Adapter Teaming can choose the appropriate load-balancing method via the NCU (HP Network Configuration Utility).


2.1 TLB Automatic method

Automatic is a load-balancing method that is designed to preserve frame ordering.

This method will load balance outbound traffic based on the highest layer of information in the frame. For instance, if a frame has a TCP header with TCP port values, the frame will be load balanced by TCP connection (see “TLB TCP Connection method” below). If the frame has an IP header with an IP address but no TCP header, then the frame is load balanced by destination IP address (see “TLB Destination IP Address method” below). If the frame does not have an IP header, the frame is load balanced by destination MAC address (see “TLB Destination MAC Address method” below).
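A simplified Python sketch of that decision order (the frame accessors are hypothetical):

def automatic_lb_key(frame):
    """Choose the conversation key from the highest layer present in the frame."""
    if frame.has_tcp_header:
        return ("tcp", frame.src_port, frame.dst_port)  # per TCP connection
    if frame.has_ip_header:
        return ("ip", frame.dst_ip)  # per destination IP address
    return ("mac", frame.dst_mac)  # per destination MAC address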

2.2 TLB TCP Connection method

TCP Connection is also a load-balancing method that is designed to preserve frame ordering.

This method will load balance outbound traffic based on the TCP port information in the frame’s TCP header. This load-balancing method combines the TCP source and destination ports to identify the TCP conversation. Combining these values, the algorithm can identify individual TCP conversations (even multiple conversations between the team and one other network device). The algorithm used to choose which teamed port to use per TCP conversation is similar to the algorithms used in the “TLB Destination IP Address method” and “TLB Destination MAC Address method” sections below.

If this method is chosen and the frame has an IP header with an IP address but no TCP header, then the frame is load balanced by destination IP address (see “TLB Destination IP Address method” below). If the frame does not have an IP header, the frame is load balanced by destination MAC address (see “TLB Destination MAC Address method” below).
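HP does not spell out the exact combining function, but a common way to map a TCP conversation onto one of the active teamed ports looks like this sketch:

def tcp_conversation_port(src_port, dst_port, active_ports):
    """Map one TCP conversation to one teamed port (illustrative only).

    Because the same port pair always yields the same index, all frames of
    a conversation leave through the same teamed port, preserving order.
    """
    index = (src_port ^ dst_port) % len(active_ports)
    return active_ports[index]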

2.3 TLB Destination IP Address method

Destination IP Address is a load-balancing method that will attempt to preserve frame ordering.

This method makes load-balancing decisions based on the destination IP address of the frame being transmitted by the teaming driver. The frame’s destination IP address belongs to the network device that will ultimately receive the frame. The team utilizes the last three bits of the destination IP address to assign the frame to a port for transmission.

Because IP addresses are in decimal format, it is necessary to convert them to binary format. For example, an IP address of 1.2.3.4 (dotted decimal) would be 00000001.00000010.00000011.00000100 in binary format. The teaming driver only uses the last three bits (100) of the least significant byte (00000100 = 4) of the IP address. Utilizing these three bits, the teaming driver consecutively assigns destination IP addresses to each functional network port in its team, starting with 000 being assigned to network port 1, 001 being assigned to network port 2, and so on. Of course, how the IP addresses are assigned depends on the number of network ports in the TLB team and how many of those ports are in a functional state (see Table 4-4).

[Table 4-4: destination IP address bit values mapped to teamed ports]
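The arithmetic can be reproduced in a few lines of Python; this is a sketch of the published behavior, not HP's driver code:

def dest_ip_port_index(dest_ip, functional_ports):
    """Select a teamed port from the last three bits of the destination IP."""
    last_octet = int(dest_ip.split(".")[-1])  # e.g. 1.2.3.4 -> 4
    bits = last_octet & 0b111                 # keep only the last three bits
    return bits % functional_ports            # wrap around the team size

# Worked example from the text: 1.2.3.4 -> last octet 4 = binary 100.
# With a 4-port team this gives 4 % 4 = 0, i.e. network port 1 transmits.
print(dest_ip_port_index("1.2.3.4", 4))  # -> 0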


3. How to configure 802.3ad with Cisco and HP ProCurve?

3.1 Configuration of Cisco Switch with 2 network ports

Switch# conf ter
Switch(config)# int PORT1 (e.g. Gi3/1)
Switch(config-if)# switchport mode access
Switch(config-if)# spanning-tree portfast
Switch(config-if)# channel-group <1-48> mode active
Switch(config-if)# int PORT2 (e.g. Gi3/2)
Switch(config-if)# switchport mode access
Switch(config-if)# spanning-tree portfast
Switch(config-if)# channel-group <1-48> mode active

Note: Both ports must use the same channel-group number; “mode active” enables LACP negotiation.

3.2 Configuration of HP ProCurve with 2 network ports

PROCURVE-Core1# conf ter
PROCURVE-Core1# trunk PORT1-PORT2 (e.g. C1-C2) Trk<ID> (e.g. Trk99) LACP
PROCURVE-Core1# vlan <VLANID>
PROCURVE-Core1# untagged Trk<ID> (e.g. Trk99)
PROCURVE-Core1# show lacp
PROCURVE-Core1# show log lacp

Example: How to add additional ports to an existing HP trunk:

[screenshot: ProCurve CLI – adding ports D5 and D6 to trunk Trk70]

Note: In this example I add ports D5 and D6 to an already configured trunk with trunk ID 70. In total this results in a 4-port LACP NIC team consisting of ports C23, C24, D5, and D6.

3.3 Configuration of HP NIC

[screenshot: HP Network Configuration Utility – team type selection]

NOTE: The Automatic team type can also be used, as the teaming driver will automatically detect and negotiate the best teaming method with the switches => 802.3ad Dynamic with Fault Tolerance


3.4 Resulting “sh running-config” output in the Cisco example

interface GigabitEthernet3/1
description SERVERNAME-NIC1
switchport access vlan <VLANID>
switchport mode access
spanning-tree portfast
channel-group 60 mode active

interface GigabitEthernet3/2
description SERVERNAME-NIC2
switchport access vlan <VLANID>
switchport mode access
spanning-tree portfast
channel-group 60 mode active

interface Port-channel60
description SERVERNAME-TEAM1
switchport
switchport access vlan <VLANID>
switchport mode access

3.5 Resulting “sh int status” output in the Cisco example

Gi3/1 SERVERNAME-NIC1 connected 10 a-full a-1000
Gi3/2 SERVERNAME-NIC2 connected 10 a-full a-1000
Po60 SERVERNAME-TEAM1 connected 10 a-full a-1000

Note: In this example the network ports Gi3/1 and Gi3/2 are bound to a new port channel (Po60), which is created automatically and placed in VLAN 10.

4. References

http://www.cisco.com/en/US/tech/tk389/tk213/technologies_configuration_example09186a008089a821.shtml

http://www.cisco.com/application/pdf/paws/98469/ios_etherchannel.pdf

http://cdn.procurve.com/training/Manuals/2900-MCG-Jan08-11-PortTrunk.pdf

IMPORTANT: In NIC teaming scenarios with HP hardware, especially when Hyper-V is involved, it is important to follow the NIC manufacturer's installation guide. In the case of the HP NCU it is essential to strictly follow this installation order:

1. Install OS + patches
2. Install Hyper-V role
3. Install NCU (Network Configuration Utility), included in the ProLiant Support Pack (current version 8.70)

More detailed steps can be found in the HP reference guide here:

Using HP ProLiant Network Teaming Software with Microsoft® Windows® Server 2008 (R2) Hyper-V  (4th Edition)
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01663264/c01663264.pdf

Note: Please be aware that most of this blog post is cross-referenced from HP and Cisco networking documentation.

Stay tuned… ;)

Regards

Ramazan


Deploying Microsoft Windows Server 2008 Networking with Cisco

June 6, 2011

Cisco has published a really useful whitepaper on networking deployments with Cisco gear, providing great technical detail on the new and enhanced networking capabilities of the Windows Server 2008/Vista stack.

Techniques like Teredo, ISATAP, NAT, SMBv2, and IPv6 are discussed in detail; multisite cluster DNS is also covered:


Deploying Microsoft Windows Server 2008 and Vista on a Cisco Network
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/extmsftw2k8vistacisco.pdf

Stay tuned… ;)

Regards

Ramazan

HYPER-V R2 Networking – How to configure management and VM networks

February 26, 2010

Hi All 🙂

I very often see confusion around the configuration of Hyper-V networking and the creation of virtual networks. Therefore I will try to give you a short and simple overview of how to configure your Hyper-V host networking according to best practices that ensure the reliability and availability of your virtualized workloads.

Generally there are three different types of networks available:

[screenshot: Virtual Network Manager – External, Internal, and Private network types]

EXTERNAL

External networks are mainly used for “PUBLIC” communication: they give VMs access to the network to which the physical NIC used to create the EXTERNAL network is connected. NOTE: For high availability and throughput reasons NIC teaming is possible here, but it must be supported by the NIC manufacturer.

INTERNAL

Internal networks exist on the Hyper-V host only. The only difference to a PRIVATE network is the virtual (loopback) adapter that is created on the host for an INTERNAL network. This loopback adapter can be used to get TCP/IP access to the INTERNAL network from the HOST only; for this you need to assign it an IP address from the IP subnet you want to access.

PRIVATE

A private network is private only: no TCP/IP communication from the host to the VMs is possible, i.e. the VMs/network are fully isolated.

1. EXTERNAL (VM <-> PUBLIC NETWORK ACCESS)

This network type must be used when you need external “PUBLIC” connectivity for your virtual machines. VMs – or rather their virtual network adapters (vNICs), which are bound to this network in their VM settings – will have logical access to the same network as the underlying connected physical NIC (pNIC). Hope that is clear so far ;)

On R2 (SP1) Hyper-V systems it is possible AND best practice to have a DEDICATED pNIC for VM networking. This is configurable through the setting “ALLOW MANAGEMENT OPERATING SYSTEM TO SHARE THIS NETWORK ADAPTER”. If you want to dedicate the physical network adapter (pNIC) to VM network traffic, UNCHECK this setting!


A best-practice example uses a minimum of 2 physical NIC ports, both connected to the same network/subnet: one physical NIC is used for management traffic to the Hyper-V host system, and the 2nd physical NIC is used exclusively for VM traffic. Each VM connected to the virtual switch in Hyper-V then has network access to the underlying connected network/subnet.


The effect of “Allow management OS to share this network adapter” setting is:

Checked => Adapter is shared with OS and Hyper-V

Unchecked => Adapter is DEDICATED to Hyper-V ONLY

For lab/test environments the physical adapter can also be shared (check “Allow management OS to share this network adapter”), but this is not recommended for production use! Server-class physical NICs are also recommended here!

EXAMPLE:

When I create an “EXTERNAL” network – here called “PUBLIC – EXTERNAL” – and bind it to my HP NC373i, which is connected to VLAN1 (192.168.0.x), then each VM later connected to this virtual network will have access to VLAN1 – provided, of course, its IP settings are set correctly ;) In addition, no virtual adapter (switch) is created on the host as in RTM times, because we chose to dedicate the pNIC to this external switch rather than share it. In the properties of the selected pNIC you will see that Hyper-V unbinds all protocols (except 3rd-party management software) and leaves ONLY the newly added “Microsoft Virtual Network Switch Protocol” bound – in my case on the HP NC373i adapter.


NOTE: No further IP communication to the host is possible via this adapter, as it is now used exclusively by the Hyper-V virtual machines connected to the corresponding external vSwitch!

2. INTERNAL (HOST <-> VM)

The network type “INTERNAL” should be used when you require HOST <-> VM networking communication only. The “speciality” here is that a virtual (loopback) NIC adapter is created on the host, on which you can configure an IP address from the VM network IP segment and thereby gain network access into the VM “INTERNAL” network subnet.


EXAMPLE:

When I create an INTERNAL network called “New INTERNAL Network” for HOST <-> VM INTERNAL-only communication – IP network 10.0.1.x – this action automatically creates a vNIC on my Hyper-V host (similar to the loopback adapter from earlier Virtual Server times). Now I can configure an IP address from that (INTERNAL) IP segment (e.g. 10.0.1.99) and communicate from HOST to GUEST, or vice versa, via TCP/IP.


Now there is a “virtual” NIC created on my Hyper-V host, on which I can configure the TCP/IP settings from the VMs’ IP network range.

3. PRIVATE (Same Host: VM <-> VM ONLY)

This network type should be used when you want VM-ONLY networking (PRIVATE) WITHOUT host access. This is mainly used when you want to establish a totally secure and separated network environment for testing purposes (security, cloning, restore, …). Here you will not be able to establish network communication between HOST and GUEST – the only possible method for data exchange between host and guest is the Hyper-V ISO sharing method!


Example:

When I create a PRIVATE network called “My PRIVATE Network”, no vNIC is created on the Hyper-V HOST, unlike in the INTERNAL network example. This PRIVATE Hyper-V-internal virtual network switch will later be selectable in the “available” network list, where you can connect your virtual machines’ virtual network adapters (vNICs) to it. This establishes VM-to-VM communication if the TCP/IP settings are configured correctly.
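If you want to check from a script which virtual networks exist on a Hyper-V R2 host, you can query the (v1) WMI virtualization namespace. Here is a minimal sketch using the third-party Python wmi package, run on the host itself:

import wmi  # third-party package ("pip install wmi"), Windows only

# Hyper-V (R2) exposes its virtual networks in the root\virtualization namespace.
conn = wmi.WMI(namespace=r"root\virtualization")

for vswitch in conn.Msvm_VirtualSwitch():
    # ElementName is the friendly name shown in Virtual Network Manager,
    # e.g. "PUBLIC - EXTERNAL", "New INTERNAL Network" or "My PRIVATE Network".
    print(vswitch.ElementName)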


Hope that helps a bit towards a general understanding of the configuration of Hyper-V R2 networks and their usage scenarios.

Again, it is BEST PRACTICE to configure dedicated physical adapters and separate the VM and management network traffic!

Additionally, here are some more useful links and published documentation around Hyper-V networking:

UNDERSTANDING HYPER-V NETWORKING

USING HYPER-V AND FAILOVER CLUSTERING

HYPER-V – GETTING STARTED

HYPER-V – LIVE MIGRATION NETWORK GUIDE

If you still have any questions please let me know.

Stay tuned… ;)

Regards

Ramazan