There are five Windows network drivers available for containers to choose from: NAT, transparent, L2 Bridging, L2 Tunneling and overlay networks. Make sure you understand their use cases before assigning a specific driver to a Windows or Hyper-V container.
Containers use host networks to implement networking for container applications, but containers also implement different types of virtual networks to enable container-to-container communication and communication with external networks. Each Windows container starts with a virtual network adapter. You can connect that virtual network adapter using any of the five types of Windows network drivers available for containers.
Before implementing networking for containers, take a look at all of the network options available. For example, you might want to know which network driver enables containers to talk to each other, or which one exposes container applications to outside networks. Once you enable the container feature on a Windows Hyper-V machine, the system creates a Hyper-V virtual switch and connects the vNIC of each Windows or Hyper-V container to that switch.
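As a quick way to survey those options, you can list and inspect the container networks Docker has created on the host. The commands below are standard docker CLI; the `nat` network name is the default on a Windows container host, and your output will depend on which drivers you've configured:

```shell
# List all container networks on the host; the default NAT network
# appears once the container feature and Docker are set up.
docker network ls

# Show details (driver, subnet, gateway, connected containers) for one network.
docker network inspect nat
```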
Network Address Translation
This is the default networking option for containers. The Network Address Translation (NAT) driver provides port forwarding, which enables you to forward traffic arriving on a host port to a port on a container, so applications inside the container are reachable from outside networks. When you start a container, it obtains an IP address from the default NAT network or from any custom NAT network you have defined. WinNAT, the Windows NAT driver, is responsible for translating network communication between containers and the host or external endpoints. You can use NAT in production environments, but there are a few limitations: multiple subnets per NAT network aren't supported, and there is no automatic network configuration, such as DHCP. Any NAT networks you create on the host machine will also be available to Windows containers.
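As a sketch of how NAT port forwarding is used in practice, the commands below create a custom NAT network and publish a host port to a container port. The network name and subnet are illustrative; the IIS image is Microsoft's standard Windows Server Core IIS image:

```shell
# Create a custom NAT network (name and subnet are examples).
docker network create -d nat --subnet=172.19.0.0/24 DevNatNet

# Publish host port 8080 to container port 80; WinNAT forwards
# inbound traffic on the host's port 8080 to the container.
docker run -d -p 8080:80 --network DevNatNet mcr.microsoft.com/windows/servercore/iis
```

After this, browsing to port 8080 on the host's IP address reaches the IIS site running inside the container.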
Transparent container networks
If you want your container or container applications to connect directly to physical networks, consider creating a transparent network. A transparent network obtains an IP address and options from an external Dynamic Host Configuration Protocol (DHCP) server. You can also assign static IP addresses to transparent networks. You can create a transparent network by using the docker network command, but you must specify -d transparent, as shown in the command below:
docker network create -d transparent ProdCustomNet
The command above enables ProdCustomNet to obtain an IP address from the DHCP server, while the command below creates the transparent network with static subnet and gateway values:

docker network create -d transparent --subnet=172.20.20.0/24 --gateway=172.20.20.1 ProdCustomNet
If you run a container in a Hyper-V VM and plan to use a static IP address with a transparent network, then you must enable MACAddressSpoofing so that network traffic from multiple media access control (MAC) addresses can pass through the VM's network adapter.
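If the container host is itself a Hyper-V VM, MAC address spoofing can be turned on from the Hyper-V host with the Set-VMNetworkAdapter cmdlet. The VM name below is illustrative:

```powershell
# Run on the Hyper-V host (not inside the container host VM).
# Allows the VM's vNIC to send traffic from multiple MAC addresses.
Set-VMNetworkAdapter -VMName ContainerHostVM -MacAddressSpoofing On
```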
L2 Bridging and L2 Tunneling
L2 Bridging and L2 Tunneling networks are ideal if you plan to use Hyper-V network virtualization or software-defined networking (SDN). L2, as the name suggests, performs address translation at Layer 2 of the network stack. You can only assign static IP addresses to containers on L2 Bridging and L2 Tunneling networks. If you're a cloud hosting provider, or you have different tenants whose containers must communicate across multiple hosts, it makes more sense to use overlay networks than L2 Bridging or L2 Tunneling networks. To create an L2 Bridging network, specify -d l2bridge, as shown in the command below:
docker network create -d l2bridge --subnet=172.20.20.0/24 --gateway=172.20.20.1 ProdL2BridgeNet
Overlay container networks
If you deploy containers in swarm mode, multiple containers attached to the same overlay network can communicate across multiple container hosts. The system configures each overlay network with a private address range and its own subnet.
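A minimal sketch of setting up an overlay network follows; the advertise address, network name and service name are examples, and the host must be initialized as a swarm node first because the overlay driver requires swarm mode:

```shell
# Initialize swarm mode on the container host (required for overlay networks).
docker swarm init --advertise-addr 10.0.0.10

# Create an attachable overlay network; containers on different swarm
# hosts attached to this network can communicate directly.
docker network create -d overlay --attachable ProdOverlayNet

# Run a service on the overlay network (image is Microsoft's IIS image).
docker service create --name web --network ProdOverlayNet mcr.microsoft.com/windows/servercore/iis
```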
Which container network should you use?
There are many use cases for Windows network drivers. For example, you can use NAT for development purposes; transparent container networks for small to midsize deployments; L2 Bridging networks for SDN deployments; L2 Tunneling networks to connect to Azure networks; and overlay networks when you deploy swarm clusters and want containers to communicate with each other across hosts.