Azure Network Traffic Management

The following is from the Azure Administrator Training lab for AZ-103.

System Routes

Azure uses system routes to direct network traffic between virtual machines, on-premises networks, and the Internet. The following situations are managed by these system routes:

  • Traffic between VMs in the same subnet.
  • Traffic between VMs in different subnets in the same virtual network.
  • Data flow from VMs to the Internet.
  • Communication between VMs using a VNet-to-VNet VPN.
  • Site-to-Site and ExpressRoute communication through the VPN gateway.

For example, consider this virtual network with two subnets. Communication between the subnets and from the frontend to the internet is all managed by Azure using the default system routes.

Information about the system routes is recorded in a route table. A route table contains a set of rules, called routes, that specifies how packets should be routed in a virtual network. Route tables are associated to subnets, and each packet leaving a subnet is handled based on the associated route table. Packets are matched to routes using the destination. The destination can be an IP address, a virtual network gateway, a virtual appliance, or the internet. If a matching route can’t be found, then the packet is dropped.
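Outside the portal, you can inspect the routes (system, BGP, and user-defined) that are actually in effect for a VM. The following is a minimal Azure CLI sketch; the resource group and NIC names are placeholders, and the VM must be running for the query to succeed.

    # Show the effective route table applied to a VM's network interface.
    # "myResourceGroup" and "myVmNic" are hypothetical names.
    az network nic show-effective-route-table \
      --resource-group myResourceGroup \
      --name myVmNic \
      --output table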

 

User Defined Routes

As explained in the previous topic, Azure automatically handles all network traffic routing. But, what if you want to do something different? For example, you may have a VM that performs a network function, such as routing, firewalling, or WAN optimization. You may want certain subnet traffic to be directed to this virtual appliance. For example, you might place an appliance between subnets or between a subnet and the internet.

In these situations, you can configure user-defined routes (UDRs). UDRs control network traffic by defining routes that specify the next hop of the traffic flow. This hop can be a virtual network gateway, virtual network, internet, or virtual appliance.

✔️ Each route table can be associated to multiple subnets, but a subnet can only be associated to a single route table. There are no additional charges for creating route tables in Microsoft Azure. Do you think you will need to create custom routes?

For more information, you can see:

Custom routes – https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview#custom-routes


 

Routing Example

Let’s look at a specific network routing example. In this example you have a virtual network that includes three subnets.

  • The subnets are Private, DMZ, and Public. In the DMZ subnet there is a network virtual appliance (NVA). NVAs are VMs that help with network functions like routing and firewall optimization.
  • You want to ensure all traffic from the Public subnet goes through the NVA to the Private subnet.

✔️ In the next three topics we will look at how to: create the routing table, create a custom route, and associate the route to the subnet.

 

Create a Routing Table

Creating a routing table is very straightforward. You must provide Name, Subscription, Resource Group, Location, and whether you want to use Border Gateway Protocol (BGP) route propagation.

BGP is the standard routing protocol commonly used on the Internet to exchange routing and reachability information between two or more networks. When BGP route propagation is enabled, routes learned through BGP are automatically added to the route table for the associated subnets. In most situations this is what you want. For example, if you are using ExpressRoute you would want all subnets to have that routing information.

For our example, the routing table is named myRouteTablePublic and BGP is enabled.
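For reference, the same route table can be created with the Azure CLI. This is only a sketch; the resource group name and location are placeholders, and BGP route propagation is left at its default (enabled).

    # Create the route table from this example.
    # BGP route propagation is enabled by default; pass
    # --disable-bgp-route-propagation true to turn it off.
    az network route-table create \
      --resource-group myResourceGroup \
      --name myRouteTablePublic \
      --location eastus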

 

Create a Custom Route

For our example,

  • The new route is named ToPrivateSubnet.
  • The Private subnet is at 10.0.1.0/24.
  • The route uses a virtual appliance. Notice the other choices for Next hop type: virtual network gateway, virtual network, internet, and none.
  • The virtual appliance is located at 10.0.2.4.

In summary, this route applies to any address prefixes in 10.0.1.0/24 (the Private subnet). Traffic headed to these addresses will be sent to the virtual appliance at 10.0.2.4.
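An equivalent Azure CLI sketch, reusing the values from this topic (the resource group name is a placeholder):

    # Custom route: traffic for the Private subnet (10.0.1.0/24) is sent
    # to the network virtual appliance at 10.0.2.4.
    az network route-table route create \
      --resource-group myResourceGroup \
      --route-table-name myRouteTablePublic \
      --name ToPrivateSubnet \
      --address-prefix 10.0.1.0/24 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address 10.0.2.4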

 

Associate the Route

The last step in our example is to associate the Public subnet with the new routing table. Each subnet can have zero or one route table associated to it.

✔️ In this example remember that the virtual appliance should not have a public IP address and IP forwarding should be enabled on the device.
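A minimal CLI sketch of both steps follows; the virtual network, subnet, and NIC names are placeholders.

    # Associate the route table with the Public subnet.
    az network vnet subnet update \
      --resource-group myResourceGroup \
      --vnet-name myVNet \
      --name Public \
      --route-table myRouteTablePublic

    # Enable IP forwarding on the NVA's network interface so it can
    # forward traffic that is not addressed to it.
    az network nic update \
      --resource-group myResourceGroup \
      --name myNvaNic \
      --ip-forwarding true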

 

 

Load Balancing

Azure Load Balancer

The Azure Load Balancer delivers high availability and network performance to your applications. It is an OSI Layer 4 (TCP and UDP) load balancer that distributes inbound traffic to backend resources using load balancing rules and health probes. Load balancing rules determine how traffic is distributed to the backend. Health probes ensure the resources in the backend are healthy.

The Load Balancer can be used for inbound as well as outbound scenarios and scales up to millions of flows for all TCP and UDP applications.

✔️ Keep this diagram in mind since it covers the four components that must be configured for your load balancer: Frontend IP configuration, Backend pools, Health probes, and Load balancing rules.
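To see how those components fit together outside the portal, here is a hedged Azure CLI sketch that creates a public Standard load balancer with a frontend IP configuration and an (initially empty) backend pool; probes and rules are added in later topics. All resource names are placeholders.

    # Create a public IP address for the frontend.
    az network public-ip create \
      --resource-group myResourceGroup \
      --name myPublicIP \
      --sku Standard \
      --allocation-method Static

    # Create a Standard public load balancer with a frontend IP
    # configuration and a backend pool.
    az network lb create \
      --resource-group myResourceGroup \
      --name myLoadBalancer \
      --sku Standard \
      --public-ip-address myPublicIP \
      --frontend-ip-name myFrontEnd \
      --backend-pool-name myBackEndPool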

For more information, you can see:

Load Balancer documentation – https://docs.microsoft.com/en-us/azure/load-balancer/

 

Public Load Balancer

There are two types of load balancers: public and internal.

A public load balancer maps the public IP address and port number of incoming traffic to the private IP address and port number of the VM, and vice versa for the response traffic from the VM. By applying load-balancing rules, you can distribute specific types of traffic across multiple VMs or services. For example, you can spread the load of incoming web request traffic across multiple web servers.

The following figure shows internet clients sending webpage requests to the public IP address of a web app on TCP port 80. Azure Load Balancer distributes the requests across the three VMs in the load-balanced set.

Internal Load Balancer

An internal load balancer directs traffic only to resources that are inside a virtual network or that use a VPN to access Azure infrastructure. Frontend IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources. For example, an internal load balancer could receive database requests that need to be distributed to backend SQL servers.

An internal load balancer enables the following types of load balancing:

  • Within a virtual network. Load balancing from VMs in the virtual network to a set of VMs that reside within the same virtual network.
  • For a cross-premises virtual network. Load balancing from on-premises computers to a set of VMs that reside within the same virtual network.
  • For multi-tier applications. Load balancing for internet-facing multi-tier applications where the backend tiers are not internet-facing. The backend tiers require traffic load-balancing from the internet-facing tier.
  • For line-of-business applications. Load balancing for line-of-business applications that are hosted in Azure without additional load balancer hardware or software. This scenario includes on-premises servers that are in the set of computers whose traffic is load-balanced.

✔️ Can you see how a public load balancer could be placed in front of the internal load balancer to create a multi-tier application?
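For comparison, an internal load balancer gets its frontend from a subnet instead of a public IP address. A CLI sketch with placeholder names:

    # Create an internal Standard load balancer whose frontend is a
    # private IP address in the Backend subnet of myVNet.
    az network lb create \
      --resource-group myResourceGroup \
      --name myInternalLB \
      --sku Standard \
      --vnet-name myVNet \
      --subnet Backend \
      --frontend-ip-name myIntFrontEnd \
      --backend-pool-name myIntBackEndPool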

 

Load Balancer SKUs

When you create an Azure Load Balancer you will select the type (Internal or Public) of load balancer. You will also select the SKU. The load balancer supports both Basic and Standard SKUs, each differing in scenario scale, features, and pricing. The Standard Load Balancer is the newer Load Balancer product with an expanded and more granular feature set over Basic Load Balancer. It is a superset of Basic Load Balancer.

Here is some general information about the SKUs.

  • SKUs are not mutable. You may not change the SKU of an existing resource.
  • A standalone virtual machine resource, availability set resource, or virtual machine scale set resource can reference one SKU, never both.
  • A Load Balancer rule cannot span two virtual networks. Frontends and their related backend instances must be in the same virtual network.
  • There is no charge for the Basic load balancer. The Standard load balancer is charged based on the number of rules and the data processed.
  • Load Balancer frontends are not accessible across global virtual network peering.

✔️ New designs and architectures should consider using Standard Load Balancer.

 

Backend Pools

To distribute traffic, a back-end address pool contains the IP addresses of the virtual NICs that are connected to the load balancer.

How you configure the backend pool depends on whether you are using the Standard or Basic SKU.

  • Standard SKU backend pool endpoints: any VM in a single virtual network, including a blend of VMs, availability sets, and VM scale sets.
  • Basic SKU backend pool endpoints: VMs in a single availability set or VM scale set.

Backend pools are configured from the Backend Pool blade. For the Standard SKU you can connect to an availability set, a single virtual machine, or a virtual machine scale set.

✔️ In the Standard SKU you can have up to 1000 instances in the backend pool. In the Basic SKU you can have up to 100 instances.
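Outside the portal, a VM joins a backend pool when its NIC's IP configuration is added to the pool. A CLI sketch with placeholder NIC and ipconfig names:

    # Add a VM's NIC IP configuration to the load balancer's backend pool.
    az network nic ip-config address-pool add \
      --resource-group myResourceGroup \
      --nic-name myVM1Nic \
      --ip-config-name ipconfig1 \
      --lb-name myLoadBalancer \
      --address-pool myBackEndPool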

 

Load Balancer Rules

A load balancer rule is used to define how traffic is distributed to the backend pool. The rule maps a given frontend IP and port combination to a set of backend IP addresses and port combination. To create the rule, the frontend, backend, and health probe information should already be configured. Here is a rule that passes frontend TCP connections to a set of backend SQL (port 1433) servers. The rule uses a health probe that checks on TCP 1433.
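A CLI sketch of that rule, assuming the frontend, backend pool, and a TCP 1433 health probe (named mySQLProbe here) already exist; all names are placeholders.

    # Load balancing rule: frontend TCP 1433 -> backend pool TCP 1433,
    # evaluated against the SQL health probe.
    az network lb rule create \
      --resource-group myResourceGroup \
      --lb-name myLoadBalancer \
      --name mySQLRule \
      --protocol Tcp \
      --frontend-port 1433 \
      --backend-port 1433 \
      --frontend-ip-name myFrontEnd \
      --backend-pool-name myBackEndPool \
      --probe-name mySQLProbe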

Load balancing rules can be used in combination with NAT rules. For example, you could use NAT from the load balancer’s public address to TCP 3389 on a specific virtual machine. This allows remote desktop access from outside of Azure. Notice in this case, the NAT rule is explicitly attached to a VM (or network interface) to complete the path to the target; whereas a load balancing rule need not be.
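A sketch of the NAT rule just described, using a placeholder frontend port and placeholder names; note the second command that attaches the rule to a specific VM's NIC.

    # Inbound NAT rule: frontend port 50001 -> TCP 3389 on a single VM.
    az network lb inbound-nat-rule create \
      --resource-group myResourceGroup \
      --lb-name myLoadBalancer \
      --name myRDPRule \
      --protocol Tcp \
      --frontend-port 50001 \
      --backend-port 3389 \
      --frontend-ip-name myFrontEnd

    # Attach the NAT rule to the target VM's NIC to complete the path.
    az network nic ip-config inbound-nat-rule add \
      --resource-group myResourceGroup \
      --nic-name myVM1Nic \
      --ip-config-name ipconfig1 \
      --lb-name myLoadBalancer \
      --inbound-nat-rule myRDPRule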

✔️ Can you see the difference between load balancing rules and NAT rules? Remember, this approach should only be used when you need connectivity from the Internet. Most normal communications would occur over on-premises-to-Azure connections such as site-to-site VPN and ExpressRoute.

 

Session Persistence

By default, Azure Load Balancer distributes network traffic equally among multiple VM instances. The load balancer uses a 5-tuple (source IP, source port, destination IP, destination port, and protocol type) hash to map traffic to available servers. It provides stickiness only within a transport session.

Session persistence specifies how traffic from a client should be handled. The default behavior (None) is that successive requests from a client may be handled by any virtual machine. You can change this behavior.

  • Client IP specifies that successive requests from the same client IP address will be handled by the same virtual machine.
  • Client IP and protocol specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.

✔️ Keeping session persistence information is very important in applications that use a shopping cart. Can you think of any other applications?
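In the CLI, session persistence corresponds to the load-distribution setting on a rule: Default is the 5-tuple hash, SourceIP is Client IP, and SourceIPProtocol is Client IP and protocol. A sketch, assuming the rule from the earlier example:

    # Switch the rule from the default 5-tuple hash to Client IP affinity.
    az network lb rule update \
      --resource-group myResourceGroup \
      --lb-name myLoadBalancer \
      --name mySQLRule \
      --load-distribution SourceIP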

 

Health Probes

A health probe allows the load balancer to monitor the status of your app. The health probe dynamically adds or removes VMs from the load balancer rotation based on their response to health checks. When a probe fails to respond, the load balancer stops sending new connections to the unhealthy instances.

There are two main ways to configure health probes: HTTP and TCP.

HTTP custom probe. The load balancer regularly probes your endpoint (every 15 seconds, by default). The instance is healthy if it responds with an HTTP 200 within the timeout period (default of 31 seconds). Any status other than HTTP 200 causes this probe to fail. You can specify the port (Port), the URI for requesting the health status from the backend (URI), the amount of time between probe attempts (Interval), and the number of failures that must occur for the instance to be considered unhealthy (Unhealthy threshold).

TCP custom probe. This probe relies on establishing a successful TCP session to a defined probe port. If the specified listener on the VM exists, the probe succeeds. If the connection is refused, the probe fails. You can specify the Port, Interval, and Unhealthy threshold.
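CLI sketches of both probe types; the port, path, interval, and threshold values are illustrative only.

    # HTTP custom probe: request "/" on port 80 every 15 seconds and mark
    # the instance unhealthy after 2 consecutive failures.
    az network lb probe create \
      --resource-group myResourceGroup \
      --lb-name myLoadBalancer \
      --name myHTTPProbe \
      --protocol Http \
      --port 80 \
      --path / \
      --interval 15 \
      --threshold 2

    # TCP custom probe: succeeds if a TCP session can be established on 1433.
    az network lb probe create \
      --resource-group myResourceGroup \
      --lb-name myLoadBalancer \
      --name mySQLProbe \
      --protocol Tcp \
      --port 1433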

✔️ There is also a guest agent probe. This probe uses the guest agent inside the VM. It is not recommended when HTTP or TCP custom probe configurations are possible.

 

 

Azure Traffic Manager

Azure Traffic Manager

Microsoft Azure Traffic Manager allows you to control the distribution of user traffic to your service endpoints running in different datacenters around the world.

  • Traffic Manager works by using the Domain Name System (DNS) to direct end-user requests to the most appropriate endpoint. Service endpoints supported by Traffic Manager include Azure VMs, Web Apps, and cloud services. You can also use Traffic Manager with external, non-Azure endpoints.
  • Traffic Manager selects an endpoint based on the configured traffic-routing method. Traffic Manager supports a range of traffic-routing methods to suit different application needs. Once the endpoint is selected, the clients then connect directly to the appropriate service endpoint.
  • Traffic Manager provides endpoint health checks and automatic endpoint failover, enabling you to build high-availability applications that are resilient to failure, including the failure of an entire Azure region.

For more information, you can see:

Traffic Manager – https://azure.microsoft.com/en-us/services/traffic-manager/

 

Traffic Manager Features

Azure Traffic Manager provides quick setup, great performance, and application availability. Traffic Manager enables you to control how traffic is distributed across your application endpoints. An endpoint can be any Internet-facing endpoint, hosted in Azure or outside Azure.

Here are some specific ways you can use Traffic Manager.

  • Improve availability of critical applications. Traffic Manager allows you to deliver high availability for your critical applications by monitoring your endpoints in Azure and providing automatic failover when an endpoint goes down.
  • Improve responsiveness for high performance applications. Azure allows you to run cloud services or websites in datacenters located around the world. Traffic Manager provides faster page loads and better end-user experience by serving users with the hosted service that is “closest” to them.
  • Upgrade and perform service maintenance without downtime. You can seamlessly carry out upgrade and other planned maintenance operations on your applications without downtime for end users by using Traffic Manager to direct traffic to alternative endpoints when maintenance is in progress.
  • Combine on-premises and cloud-based applications. Traffic Manager supports external, non-Azure endpoints, enabling it to be used with hybrid cloud and on-premises deployments.
  • Distribute traffic for large, complex deployments. Traffic-routing methods can be combined using nested Traffic Manager profiles to create sophisticated and flexible traffic-routing configurations to meet the needs of larger, more complex deployments.

✔️ We will be covering the four basic routing methods: Priority, Performance, Geographic, and Weighted. These methods can be combined into what is known as nested Traffic Manager profiles. Azure recently added MultiValue and Subnet routing methods. These will not be covered in the course.

 

Priority Routing

Scenario: An organization wants to provide reliability for its services by deploying one or more backup services in case their primary service goes down.

In this case the Traffic Manager profile contains a prioritized list of service endpoints. Traffic Manager sends all traffic to the primary (highest-priority) endpoint first. If the primary endpoint is not available, Traffic Manager routes the traffic to the second endpoint, and so on. Availability of the endpoint is based on the configured status (enabled or disabled) and the ongoing endpoint monitoring.

The Priority traffic-routing method allows you to easily implement a failover pattern. You configure the endpoint priority explicitly or use the default priority based on the endpoint order.

✔️ Can you think of another example, other than failover, where the Priority routing method could be used?

 

Performance Routing

Scenario: An organization has deployed endpoints in two or more locations across the globe and wants to ensure users are routed to achieve the best responsiveness. For example, an application can be hosted in West Europe and West US. A user from Denmark can reasonably expect to be served by the endpoint residing in the West Europe datacenter and should experience lower latency and higher responsiveness.

The Performance routing method is designed to improve responsiveness by routing traffic to the location that is closest to the user. The closest endpoint is not necessarily measured by geographic distance. Instead, Traffic Manager determines closeness by measuring network latency. Traffic Manager maintains an Internet Latency Table to track the round-trip time between IP address ranges and each Azure datacenter.

With this method, Traffic Manager looks up the source IP address of the incoming DNS request in the Internet Latency Table. Traffic Manager chooses an available endpoint in the Azure datacenter that has the lowest latency for that IP address range, then returns that endpoint in the DNS response.

✔️ Remember Traffic Manager does not receive DNS queries directly from clients. Rather, DNS queries come from the recursive DNS service that the clients are configured to use. Therefore, the IP address used to determine the ‘closest’ endpoint is not the client’s IP address, but it is the IP address of the recursive DNS service. In practice, this IP address is a good proxy for the client.

 

Geographic Routing

Scenario: In certain organizations, knowing a user’s geographic region and routing them based on that is very important. Examples include complying with data sovereignty mandates, localization of content and user experience, and measuring traffic from different regions.

When a Traffic Manager profile is configured for Geographic routing, each endpoint associated with that profile will have a set of geographic locations assigned to it. Any requests from those regions are routed only to that endpoint.

Some planning is required when you create a geographical endpoint. A location cannot be in more than one endpoint. You build the endpoint from a:

  • Regional Grouping. For example, All (World), Europe, Middle East, or Asia.
  • Country/Region. For example, Europe/Denmark and Middle East/Turkey.
  • State/Province (only available in Australia, Canada, UK, and USA). For example, North America/Central America/Caribbean/United States/Maryland or North America/Central America/Caribbean/Canada/Ontario.

✔️ Similar to Performance routing, Traffic Manager uses the source IP address of the DNS query to determine the region from which a user is querying. Usually, this is the IP address of the local DNS resolver doing the query on behalf of the user.

 

Weighted Routing

Scenario: Sometimes an organization wants to prefer one endpoint over another. For example, if you are testing or bringing a new endpoint online and want to gradually increase traffic over time.

The Weighted traffic-routing method allows you to distribute traffic evenly or to use a pre-defined weighting.

In the Weighted traffic-routing method, you assign a weight to each endpoint in the Traffic Manager profile configuration. The weight is an integer from 1 to 1000. This parameter is optional. If omitted, Traffic Manager uses a default weight of ‘1’. The higher the weight, the more frequently that endpoint is returned.

✔️ Using the same weight across all endpoints results in an even traffic distribution. Using higher or lower weights on specific endpoints causes those endpoints to be returned more or less frequently in the DNS responses.

 

Implementing Traffic Manager Profiles

To implement Traffic Manager you must create a profile. The profile will include the routing method and the DNS time to live (TTL). The TTL value controls how often the client’s local caching name server will query the Traffic Manager system for updated DNS entries. Any change that occurs with Traffic Manager, such as traffic-routing method changes or changes in the availability of added endpoints, will take this period of time to be refreshed throughout the global system of DNS servers.

Traffic Manager can monitor your services to ensure they are available. For monitoring to work correctly, you must set it up the same way for every endpoint within this profile. You can specify the protocol, the port, and the relative path. Traffic Manager will try to access the file specified in the relative path via the defined protocol and port to check for uptime.
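A CLI sketch of creating such a profile with its routing method, DNS TTL, and monitoring settings; the DNS prefix must be globally unique and every value here is illustrative.

    # Create a Traffic Manager profile with Weighted routing, a 30-second
    # DNS TTL, and HTTP monitoring of "/" on port 80.
    az network traffic-manager profile create \
      --resource-group myResourceGroup \
      --name myTMProfile \
      --routing-method Weighted \
      --unique-dns-name mytmprofile12345 \
      --ttl 30 \
      --protocol HTTP \
      --port 80 \
      --path "/"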

✔️ You can Enable or Disable your profile at any time.

 

Implementing Traffic Manager Endpoints

Your Traffic Manager profile must also define the endpoints. There are two basic types of endpoints:

  • Azure endpoints. Use this type of endpoint to load balance traffic to a Cloud service, Web app, or Public IP address in the same subscription.
  • External endpoints. Use this type of endpoint to load balance traffic to any fully-qualified domain name (FQDN), even for applications not hosted in Azure.

For example, you could create a weighted Traffic Manager profile and add endpoints for publicly accessible virtual machines.
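A sketch of adding one Azure endpoint and one external endpoint to that weighted profile; the subscription ID, resource names, and FQDN are placeholders.

    # Azure endpoint: a public IP address resource in the same subscription
    # (the public IP must have a DNS name configured).
    az network traffic-manager endpoint create \
      --resource-group myResourceGroup \
      --profile-name myTMProfile \
      --name myAzureEndpoint \
      --type azureEndpoints \
      --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP" \
      --weight 100

    # External endpoint: any publicly resolvable FQDN, hosted anywhere.
    az network traffic-manager endpoint create \
      --resource-group myResourceGroup \
      --profile-name myTMProfile \
      --name myExternalEndpoint \
      --type externalEndpoints \
      --target app.contoso.com \
      --weight 50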

✔️ You will implement the weighted routing method in the lab.

 

Traffic Manager vs Load Balancer

The following compares the Azure Load Balancer with Traffic Manager. The technologies can be used in isolation or in combination.

  • Technology. Azure Load Balancer: transport layer (Layer 4). Traffic Manager: DNS.
  • Protocols. Azure Load Balancer: any. Traffic Manager: any (an HTTP/S endpoint is needed for endpoint monitoring).
  • Endpoints. Azure Load Balancer: Azure VMs and Cloud Services role instances. Traffic Manager: Azure VMs, Cloud Services, Azure Web Apps, and external endpoints.
  • Network connectivity. Azure Load Balancer: both internet-facing and internal (VNet) applications. Traffic Manager: internet-facing only.