
Load balancing is most often associated with web hosting, where multiple web servers must work together to serve a website. Doing this well involves a few critical tasks: absorbing traffic spikes so the network load never overwhelms a single server, minimizing the response time for client requests, and maintaining the performance and reliability of the web servers. In this article, we will learn how Linux instances can be integrated with the AWS Gateway Load Balancer (GWLB). So let's get started.

Why is load balancing required?

When it comes to web applications, a load balancer is critical: an overloaded server may fail to respond to incoming requests from a user's web browser, which means a bad user experience and, for companies, the loss of customers. Load balancing is essentially the distribution of traffic across multiple back-end servers so that no single server is overloaded. Because web traffic is spread among several servers, it also boosts the performance of web applications.

Why is GWLB preferred over other load balancers?

Gateway Load Balancer endpoints are a new type of VPC endpoint, and the Gateway Load Balancer itself is a distinct kind of load balancer. While a traditional router forwards traffic to VPC instances based only on destination IP addresses, the Gateway Load Balancer acts as a transparent bump in the wire: it combines a network gateway (a single entry and exit point for traffic) with a load balancer that distributes that traffic across a fleet of virtual appliances and scales them with demand. That combination of transparently gatewaying the data flow while also load balancing an appliance fleet is what distinguishes it from an ordinary router, which only forwards the data flow.

Linux Ethernet (layer 2) with GWLB

The Linux GENEVE module cannot handle GWLB packets out of the box, because GWLB imposes requirements that an appliance must fulfill for compatibility. GWLB uses GENEVE encapsulation to deliver the original IP packets starting at their layer-3 headers, together with GWLB-specific options in the GENEVE header (such as the endpoint ENI ID and a flow cookie). Linux's native GENEVE module, by contrast, expects a layer-2 Ethernet frame inside the tunnel and gives no access to those options, so this method of encapsulation does not natively work with Linux. The appliances that need this interoperability range from firewalls to email inspection, deep packet inspection, and specialized NAT solutions.
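For contrast, this is how a GENEVE interface is normally created with Linux's native module (the remote address and VNI below are placeholders). The resulting interface assumes an inner Ethernet frame and exposes no GENEVE options, which is why it cannot terminate GWLB traffic directly:

# Native Linux GENEVE tunnel (placeholder remote/VNI) - it expects L2 frames inside
sudo ip link add name gnv0 type geneve id 0 remote 192.0.2.10
sudo ip link set gnv0 up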

Linux IP (layer 3) with GWLB

The Gateway Load Balancer is a brand-new load balancer that works at layer 3 of the OSI model. It is built on Hyperplane, which can handle several thousand connections per second. It provides stickiness and flow symmetry for traffic passing through it, performs health checks, and supports Auto Scaling groups as targets. This means customers can concentrate on building the applications and security policies they use for inspection, without the hassle of creating or operating complicated machinery to manage the availability and scaling of their appliance fleets. Partners and third-party appliance vendors, in turn, can now deliver Network Function Virtualization (NFV) solutions as a managed or outsourced service.

How to address the problem

To address the issue of Linux handling GWLB traffic, AWS has published a solution called gwlbtun (aws-gateway-load-balancer-tunnel-handler). This software provides support for using the AWS Gateway Load Balancer service. It is designed to run on a GWLB target, taking in the GENEVE-encapsulated data and creating Linux tun (layer 3) interfaces per endpoint. This allows standard Linux tools (iptables, etc.) to work with GWLB traffic.
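As a quick illustration of that last point, once gwlbtun has created the tun interfaces, ordinary netfilter rules apply to the decapsulated traffic. The interface name below is a placeholder (real names follow the gwi-<X> pattern described later):

# Hypothetical sketch: filter decapsulated GWLB traffic with plain iptables
sudo iptables -A FORWARD -i gwi-abc123 -p tcp --dport 23 -j DROP   # drop telnet
sudo iptables -A FORWARD -i gwi-abc123 -j ACCEPT                   # pass everything else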

How the gwlbtun source files are structured

main.cpp contains the entry point but primarily interacts with GeneveHandler, defined in GeneveHandler.cpp. That class instantiates UDPPacketReceiver and TunInterface as needed and generally manages the entire packet-handling flow. GenevePacket and PacketHeader handle parsing and validating GENEVE packets and IP packets, respectively, and are called by GeneveHandler as needed.

How to install the dependencies on Amazon Linux 2

Paste the following commands into an Amazon Linux 2 shell, then build as shown below:

  • sudo yum groupinstall "Development Tools"
  • sudo yum install cmake3
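With the build tools installed, a typical build sequence looks like this (a sketch of a standard CMake workflow; consult the repository README for the exact steps):

git clone https://github.com/aws-samples/aws-gateway-load-balancer-tunnel-handler.git
cd aws-gateway-load-balancer-tunnel-handler
cmake3 .   # generate the build files
make       # builds the gwlbtun binary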

How gateway load balancers can be integrated with Linux instances

Let's look first at what the gwlbtun application does. The GWLB tunnel handler is a user-space program that listens for incoming GENEVE packets from the GWLB. Gateway Load Balancer offers both layer 3 gateway and layer 4 load-balancing capabilities. It is a transparent bump-in-the-wire device that doesn't change any part of the packet, and it is architected to handle thousands of requests per second and volatile traffic patterns while introducing extremely low latency. When the GWLB tunnel handler sees packets coming in from a new GWLB endpoint, it creates two new tunnel interfaces, named "gwi-<X>" and "gwo-<X>", where <X> is the base-60 encoded endpoint ENI ID. The gwi- interface delivers the packets coming in from the endpoint, decapsulated, so they appear as the original layer-3 packets that the gateway endpoint received. A user-space application listens on this interface and writes packets to the gwo- interface; gwlbtun re-encapsulates them with the correct flow's GENEVE headers and sends them back to the GWLB to continue on their path.
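On a running gwlbtun target you can see these interfaces directly; for example (the names will vary per endpoint):

# List the per-endpoint tunnel interfaces gwlbtun has created
ip link show | grep -E 'gw[io]-'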

The traffic flow is sub-divided into two modes:

1) One-arm mode

2) Two-arm mode

There are several other flow types as well.

Here, one-arm mode refers to traffic flowing bi-directionally through the GWLB via a single endpoint, while two-arm mode refers to the same traffic flow but passing through two endpoints.
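To make two-arm mode concrete, here is a minimal sketch of what a NAT-style hook script could do. It is loosely modeled on the repository's NAT example but is not the exact script; the subnet is a placeholder, and the repository's version also handles return-path routing details omitted here:

#!/bin/bash
# Hypothetical two-arm NAT sketch. Arguments from gwlbtun:
#   $1 = CREATE/DESTROY, $2 = gwi- interface, $3 = gwo- interface, $4 = ENI ID
if [ "$1" = "CREATE" ]; then
    sysctl -w net.ipv4.ip_forward=1                      # let the instance forward packets
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE # NAT decapsulated traffic out eth0
    ip route add 10.20.0.0/24 dev "$3"                   # placeholder: return route back toward GWLB
fi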

Essential commands for the GWLB tunnel handler (from GitHub)

# ./gwlbtun -h

AWS Gateway Load Balancer Tunnel Handler

Usage: ./gwlbtun [options]

Example: ./gwlbtun

-h         Print this help

-c FILE    Command to execute when a new tunnel has been built. See below for arguments passed.

-r FILE    Command to execute when a tunnel times out and is about to be destroyed. See below for arguments passed.

-t TIME    Minimum time in seconds between last packet seen and to consider the tunnel timed out. Set to 0 (the default) to never time out tunnels.

Note the actual time between last packet and the destroy call may be longer than this time.

-p PORT    Listen to TCP port PORT and provide a health status report on it.

-s         Only return simple health check status (only the HTTP response code), instead of detailed statistics.

-d         Enable debugging output.

-x         Enable dumping the hex payload of packets being processed.

Tunnel command arguments:

The commands will be called with the following arguments:

1: The string 'CREATE' or 'DESTROY', depending on which operation is occurring.

2: The interface name of the ingress interface (gwi-<X>).

3: The interface name of the egress interface (gwo-<X>). Packets can be sent out via the ingress as well, but having two different interfaces makes routing and iptables easier.

4: The GWLBE ENI ID in base 16 (e.g. '2b8ee1d4db0c51c4') associated with this tunnel.

The <X> in the interface name is replaced with the base-60 encoded ENI ID (to fit inside the 15-character device name limit).
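Putting the options together: the systemd unit shown later in this walkthrough launches gwlbtun with a create hook and a health-check port, like this:

sudo ./gwlbtun -c ./example-scripts/create-passthrough.sh -p 80

A hook script just needs to handle the four arguments above. This skeleton is purely illustrative (it is not one of the repository's scripts):

#!/bin/bash
# Illustrative skeleton only. gwlbtun passes:
#   $1 = CREATE or DESTROY, $2 = gwi-<X>, $3 = gwo-<X>, $4 = ENI ID (base 16)
case "$1" in
  CREATE)  echo "Tunnel up: in=$2 out=$3 eni=$4" ;;
  DESTROY) echo "Tunnel down: in=$2 out=$3 eni=$4" ;;
esac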

Example scripts for different types of connections

Some important scripts provided by the GWLB tunnel handler are:

  1. Creating a pass-through
  2. One-arm mode (gwlbtun with a single GWLB endpoint)
  3. Two-arm mode (NAT-ing) (gwlbtun with two GWLB endpoints)
  4. gwlbtun with multiple GWLB endpoints

Here we will discuss the first two of them. Further scripts for the other types of GWLB connections are provided in the repository's example-scripts folder.

Creating a pass-through

GitHub code: https://github.com/aws-samples/aws-gateway-load-balancer-tunnel-handler/blob/main/example-scripts/create-passthrough.sh

#!/bin/bash

echo "==> Setting up simple passthrough"

echo Mode is $1, In Int is $2, Out Int is $3, ENI is $4

tc qdisc add dev $2 ingress

tc filter add dev $2 parent ffff: protocol all prio 2 u32 match u32 0 0 flowid 1:1 action mirred egress mirror dev $3

Here we see a line-by-line explanation of the script.

Line 1: The standard bash shebang that every script starts with.

Lines 2-3: The script echoes that it has fired and what its parameters are: the mode ($1), the ingress interface ($2), the egress interface ($3), and the ENI ID ($4). This is for informational purposes only.

Line 4: Instructs tc to attach an ingress queueing discipline to the gwi- interface ($2).

Line 5: Applies a filter on the gwi- interface, parented to the ingress qdisc (ffff:), matching all protocols and packet flows, and mirroring every packet to the gwo- interface (given by gwlbtun as $3 to the shell script).
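After the script runs, you can confirm the qdisc and mirror filter are in place with tc (the interface name is a placeholder):

tc qdisc show dev gwi-abc123           # shows the ingress qdisc
tc filter show dev gwi-abc123 ingress  # shows the u32 match-all mirred filter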

One-arm topology

Step 1: Deploy the topology, then connect to the gwlbtun instance with AWS Systems Manager to check on the virtual server (instance); the results will look something like this.

Quick alternative setup

The GitHub repository includes an "example-topology-two-way-template.template" file in the example-scripts folder. This file contains a CloudFormation template that creates the first topology. The template in this guide depends on Systems Manager to connect to the instances, so you will need to enable Systems Manager before you can use it: go to the Systems Manager service, select "Quick Setup", choose Host Management, and accept all the defaults. Alternatively, you can add a small EC2 instance in the Application Public subnet and SSH to the other hosts from there.
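If you prefer the CLI to the console, the stack can be created from the template along these lines; the stack name is arbitrary and the capability flag is an assumption (the template may create IAM roles for Systems Manager):

aws cloudformation create-stack \
    --stack-name gwlbtun-demo \
    --template-body file://example-scripts/example-topology-two-way-template.template \
    --capabilities CAPABILITY_IAM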

% aws ssm start-session --target <gwlbtun instance id>

Starting session with SessionId: <session id>

sh-4.2$ sudo systemctl status gwlbtun

gwlbtun.service - AWS GWLB Tunnel Handler

Loaded: loaded (/usr/lib/systemd/system/gwlbtun.service; static; vendor preset: disabled)

Active: active (running) since Thu 2022-03-10 18:56:27 UTC; 1h 54min ago

Main PID: 28839 (gwlbtun)

CGroup: /system.slice/gwlbtun.service

└─28839 /root/aws-gateway-load-balancer-tunnel-handler/gwlbtun -c /root/aws-gateway-load-balancer-tunnel-handler/example-scripts/create-passthrough.sh -p 80

Mar 10 18:56:27 ip-10-10-1-65.us-west-2.compute.internal systemd[1]: Started AWS GWLB Tunnel Handler.

Mar 10 18:56:41 ip-10-10-1-65.us-west-2.compute.internal gwlbtun[28839]: New interface gwi-g0W4R5VOKSp and gwo-g0W4R5VOKSp for ENI ID 8665b333888bd421 created.

Mar 10 18:56:41 ip-10-10-1-65.us-west-2.compute.internal gwlbtun[28839]: ==> Setting up simple passthrough

Mar 10 18:56:41 ip-10-10-1-65.us-west-2.compute.internal gwlbtun[28839]: Mode is CREATE, In Int is gwi-g0W4R5VOKSp, Out Int is gwo-g0W4R5VOKSp, ENI is 8665b333888bd421

gwlbtun was built and started on our instance by the CloudFormation template's UserData script, and the service is running. The log lines show that it detected incoming traffic from our application host and created two new virtual interfaces, gwi-g0W4R5VOKSp and gwo-g0W4R5VOKSp, for that endpoint. It also called the "create-passthrough.sh" script to set up the passthrough.

Step 2: Connect to the application instance

Connecting to the application instance takes a single command:

% aws ssm start-session --target <application instance id>

Starting session with SessionId: <session id>

sh-4.2$

Step 3: Testing the connection

With this command, you can check whether the application instance has connectivity:

sh-4.2$ ping 8.8.8.8

Step 4: On the gwlbtun instance, run tcpdump on the eth0 interface to capture one ping and understand the whole story happening behind the scenes.

sh-4.2$ sudo tcpdump -n -i eth0 port 6081

Here, GWLB has encapsulated the original ICMP packet from 10.20.0.60 to 8.8.8.8. One layer above it is the GENEVE header that GWLB added; gwlbtun records the options inside the header (the ENI ID and flow cookie) so it can reapply them when sending traffic back out toward GWLB. GWLB adds the outer UDP header to carry the traffic to the gwlbtun instance. The four lines show, in order: the ping request arriving for the gwi- interface, the ping request re-encapsulated after being sent out the gwo- interface, the ping reply arriving for the gwi- interface, and finally the ping reply from gwo- going back to GWLB to continue onward.
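To inspect those layers yourself, make the capture more verbose; the exact decoding depends on your tcpdump version:

# -vv decodes the headers in more detail, -X dumps the payload in hex
sudo tcpdump -n -i eth0 port 6081 -vv -X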

Step 5: Packet decapsulation on the gwi- interface

sh-4.2$ sudo tcpdump -i gwi-g0W4R5VOKSp -n

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on gwi-g0W4R5VOKSp, link-type RAW (Raw IP), capture size 262144 bytes

21:11:25.948415 IP 10.20.0.60 > 8.8.8.8: ICMP echo request, id 8242, seq 946, length 64

21:11:25.956061 IP 8.8.8.8 > 10.20.0.60: ICMP echo reply, id 8242, seq 946, length 64

Step 6: Matching packets on the gwo- interface, and the health check

These packets match the encapsulated packets we saw earlier, but they are now native IP packets, with the GENEVE encapsulation handled by gwlbtun. gwlbtun also answers the load balancer's health checks.

sh-4.2$ sudo tcpdump -i gwo-g0W4R5VOKSp -n

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on gwo-g0W4R5VOKSp, link-type RAW (Raw IP), capture size 262144 bytes

21:11:25.948424 IP 10.20.0.60 > 8.8.8.8: ICMP echo request, id 8242, seq 946, length 64

21:11:25.956066 IP 8.8.8.8 > 10.20.0.60: ICMP echo reply, id 8242, seq 946, length 64

Finally, the health status page explains a bit about the interfaces used by gwlbtun and provides some statistics that show how it's working. gwlbtun returns a 200 response code if everything is healthy and a 503 if there's a problem. In particular, the page shows the number of cached flows, as well as the number of times gwlbtun has reapplied a GENEVE header to traffic egressing through it toward GWLB.

sh-4.2$ curl localhost

<!DOCTYPE html>

<html lang="en-us">

<head><title>Health check</title></head><body>

UDP receiver on port 6081: Healthy, 77582 packets in, 76741737 bytes in, 0.627s since last packet.

Interface gwi-g0W4R5VOKSp: Healthy, 12 packets in from OS, 576 bytes in from OS, 77582 packets out to OS, 73638457 bytes out to OS, 0.627s since last packet.

Interface gwo-g0W4R5VOKSp: Healthy, 77593 packets in from OS, 73638985 bytes in from OS, 0 packets out to OS, 0 bytes out to OS, 0.627s since last packet.

Flow Cache contains 8 records - 0 were just purged.

</body></html>
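This status port is what the GWLB target group's health check should point at. As a hedged sketch (the VPC ID is a placeholder), a GWLB target group uses the GENEVE protocol on port 6081 with an HTTP health check aimed at gwlbtun's -p port:

aws elbv2 create-target-group \
    --name gwlbtun-targets \
    --protocol GENEVE --port 6081 \
    --vpc-id vpc-0123456789abcdef0 \
    --health-check-protocol HTTP --health-check-port 80 --health-check-path /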

Conclusion

The Gateway Load Balancer acts as a dispatcher for incoming traffic: it distributes it among multiple target instances to prevent any single one from being overwhelmed. Additionally, it continuously monitors its targets to make sure they are healthy; if one fails, it removes the unhealthy instance from the target group until it has recovered, and auto scaling can add instances to cope with increasing demand. Now, with the integration of Linux instances via the GWLB tunnel handler, you can configure plain Linux virtual servers as inspection appliances, which makes traffic handling safer and easier both within your instances and beyond them.
