Introduction to HAProxy Stick Tables

HTTP requests are stateless by design. However, this raises some questions regarding how to track user activities, including malicious ones, across requests so that you can collect metrics, block users, and make other decisions based on state. The only way to track user activities between one request and the next is to add a mechanism for storing events and categorizing them by client IP or other key.

Out of the box, HAProxy Enterprise Edition and HAProxy give you fast, in-memory storage called stick tables. Originally, stick tables were created to solve the problem of server persistence. However, StackExchange, the network of Q&A communities that includes Stack Overflow, saw the potential to use them for rate limiting abusive clients, aiding in bot protection, and tracking data transferred on a per-client basis. They sponsored further development of stick tables to expand the functionality. Today, stick tables are an incredibly powerful subsystem within HAProxy.

The name, no doubt, reminds you of sticky sessions used for sticking a client to a particular server. They do that, but also a lot more. Stick tables are a type of key-value store where the key is what you track across requests, such as a client IP, and the values consist of counters that, for the most part, HAProxy takes care of calculating for you. They are commonly used to store information like how many requests a given IP has made within the past 10 seconds. However, they can be used to answer a number of questions, such as:

  • How many API requests has this API key been used for during the last 24 hours? (See the sketch after this list.)
  • What TLS versions are your clients using? (e.g. can you disable TLS 1.1 yet?)
  • If your website has an embedded search field, what are the top search terms people are using?
  • How many pages is a client accessing during a time period? Is it enough to signal abuse?
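
For instance, here's a minimal sketch of how the first question might be answered, assuming the API key arrives in a request header (the header name X-API-Key and the table name st_api_keys are hypothetical):

backend st_api_keys
    stick-table type string len 40 size 100k expire 24h store http_req_rate(24h)

frontend fe_main
    bind *:80
    # Key the table on the API key header instead of the client IP
    http-request track-sc0 req.hdr(x-api-key) table st_api_keys

Querying st_api_keys over the Runtime API would then show the 24-hour request rate per key.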

Stick tables rely heavily on HAProxy's access control lists, or ACLs, so we recommend checking out our previous blog post, Introduction to ACLs, if you haven't done so already. It gives a great overview of the ACL system. When combined with the Stick Table Aggregator that's offered within HAProxy Enterprise Edition, stick tables bring real-time, cluster-wide tracking. Stick tables are an area where HAProxy's design, including the use of Elastic Binary Trees and other optimizations, really pays off.

Uses of Stick Tables

There are endless uses for stick tables, but here we’ll highlight three areas: server persistence, bot detection, and collecting metrics.

Server persistence, also known as sticky sessions, is probably one of the first uses that comes to mind when you hear the term “stick tables”. For some applications, cookie-based or consistent hashing-based persistence methods aren’t a good fit for one reason or another. With stick tables, you can have HAProxy store a piece of information, such as an IP address, cookie, or range of bytes in the request body (a username or session id in a non-HTTP protocol, for example), and associate it with a server. Then, when HAProxy sees new connections using that same piece of information, it will forward the request on to the same server. This is really useful if you’re storing application sessions in memory on your servers.
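
For example, a minimal sketch of source IP based persistence might look like this (the backend name and server addresses are placeholders):

backend webfarm
    stick-table type ip size 1m expire 30m
    # Remember which server each client IP was sent to and keep sending it there
    stick on src
    server web1 192.168.122.10:80 check
    server web2 192.168.122.11:80 check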

Beyond the traditional use case of server persistence, you can also use stick tables for defending against certain types of bot threats. Request floods, login brute force attacks, vulnerability scanners, web scrapers, slow loris attacks—stick tables can deal with them all. That’s a full blog post in itself and will be coming soon, but we’ll show you some of the general concepts here.

A third area we’ll touch on is using stick tables for collecting metrics. Sometimes, you want to get an idea of what is going on in HAProxy, but without enabling logging and having to parse the logs to get the information in question. Here’s where the power of the Runtime API comes into play. Using the API, you can read and analyze stick table data from the command line, a custom script or executable program. This opens the door to visualizing the data in your dashboard of choice. If you prefer a packaged solution, the HAProxy Enterprise Edition comes with a fully-loaded dashboard for visualizing stick table data.

Defining a Stick Table

A stick table collects and stores data about requests that are flowing through your HAProxy load balancer. Think of it like a machine that color codes cars as they enter a race track. The first step, then, is setting up the amount of storage a stick table should be allowed to use, how long data should be kept, and what data you want to observe. This is done via the stick-table directive in a frontend or backend.

Here is a simple stick table definition:

backend webfarm
    stick-table type ip size 1m expire 10s store http_req_rate(10s)
    # other configuration...

In this line we specify a few arguments: type, size, expire, and store. The type, which is ip in this case, defines what kind of key we'll be capturing. The size configures the maximum number of entries the table can store; in this case, one million. The expire time, measured from when a record in the table was last matched, created, or refreshed, tells HAProxy when to remove data. The store argument declares the values that you'll be saving.

Did you know? If you're only storing rates, the expire argument should match the longest rate period; that way, the counters will be reset to 0 at the same time that the period ends.

Each frontend or backend section can have only one stick-table defined in it. The downside is that you may want to share that storage with other frontends and backends. The good news is that you can define a frontend or backend whose sole purpose is holding a stick table, then reference that stick table elsewhere using the table parameter. Here's an example (we'll explain the http-request track-sc0 line in the next section):

backend st_src_global
    stick-table type ip size 1m expire 10s store http_req_rate(10s)

frontend fe_main
    bind *:80
    http-request track-sc0 src table st_src_global

Two other stick table arguments that you'll want to know about are nopurge and peers. The former tells HAProxy not to remove entries when the table is full, and the latter specifies a peers section for syncing entries to other nodes. We'll cover that interesting scenario a little later.
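
As a sketch, both arguments go on the stick-table line itself; the mypeers name assumes a peers section like the one defined later in this article:

stick-table type ip size 1m expire 10s nopurge peers mypeers store http_req_rate(10s)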

When adding a stick table and setting its size, it's important to keep in mind how much memory the server has to spare after accounting for other running processes. Each stick table entry takes about 50 bytes of memory for its own housekeeping; add to that the size of the key and of the counters being stored to get the per-entry total.
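
As a rough, back-of-the-envelope sketch using the approximate figure above (actual usage depends on the key type and the counters you store):

1,000,000 entries x (~50 bytes of housekeeping + key size + counter sizes)
= at least ~50 MB if the table fills, before counting keys and counters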

Consider a scenario where you're using stick tables to set up a DDoS defense system. It's an excellent use case, but what happens when the attacker brings enough IPs to the game? Will enough entries be added that all of the memory on your server is consumed? Memory for stick tables isn't allocated until it's needed, but even so, you should keep in mind how large the table could grow and cap the number of entries with the size argument.

Tracking Data

Now that you’ve defined a stick table, the next step is to track things in it. This is done by using http-request track-sc0 , tcp-request connection track-sc0 , or tcp-request content track-sc0 . The first thing to consider is the use of a sticky counter, sc0 . This is used to assign a slot with which to track the connections or requests. The maximum number that you can replace 0 with is set by the build-time variable MAX_SESS_STKCTR . In HAProxy Enterprise Edition, it’s set to 12, allowing sc0 through sc11 .

This can be a bit of a tricky concept, so here is an example to help explain the nuances of it:

backend st_src_global
    stick-table type ip size 1m expire 10m store http_req_rate(10m)

backend st_src_login
    stick-table type ip size 1m expire 10m store http_req_rate(10m)

backend st_src_api
    stick-table type ip size 1m expire 10m store http_req_rate(10m)

frontend fe_main
    bind *:80
    http-request track-sc0 src table st_src_global
    http-request track-sc1 src table st_src_login if { path_beg /login }
    http-request track-sc1 src table st_src_api if { path_beg /api }

In this example, the line http-request track-sc0 doesn’t have an if statement to filter out any paths, so sc0 is tracking all traffic. Querying the st_src_global stick table with the Runtime API will show the HTTP request rate per client IP. Easy enough.

Sticky counter 1, sc1 , is being used twice: once to track requests beginning with /login and again to track requests beginning with /api . This is okay because no request passing through this block is going to start with both /login and /api , so one sticky counter can be used for both tables.

Even though both tables are tracked with sc1, they are separate stick table definitions and thus keep their data separate. So if you make a few requests and then query the tables via the Runtime API, you'll see results like the following:

$ echo "show table st_src_global" | socat stdio UNIX-CONNECT:/var/run/hapee-1.8/hapee-lb.sock# table: st_src_global, type: ip, size:1048576, used:10x18f907c: key=127.0.0.1 use=0 exp=3583771 http_req_rate(86400000)=3$ echo "show table st_src_api" | socat stdio UNIX-CONNECT:/var/run/hapee-1.8/hapee-lb.sock# table: st_src_api, type: ip, size:1048576, used:10x18f919c: key=127.0.0.1 use=0 exp=3572396 http_req_rate(86400000)=2$ echo "show table st_src_login" | socat stdio UNIX-CONNECT:/var/run/hapee-1.8/hapee-lb.sock# table: st_src_login, type: ip, size:1048576, used:10x18f989c: key=127.0.0.1 use=0 exp=3563780 http_req_rate(86400000)=1

You can see three total requests in the st_src_global table, two requests in the st_src_api table, and one in the st_src_login table. Even though the last two used the same sticky counter, the data was kept separate. If I had made a mistake and tracked both st_src_global and st_src_login using sc0, then I'd find the st_src_login table empty, because by the time HAProxy went to track it, sc0 was already in use for this connection.

In addition, this data can be viewed using HAProxy Enterprise Edition’s Real-Time Dashboard.

Using the dashboard can be quicker than working from the command-line and gives you options for filtering and sorting.

Types of Keys

A stick table tracks counters for a particular key, such as a client IP address. The key must be in an expected type , which is set with the type argument. Each type is useful for different things, so let’s take a look at them:

  • ip (50 bytes): This will store an IPv4 address. It's primarily useful for tracking the activities of the IP making the request and can be fed by HAProxy's src fetch method. However, it can also be fed a sample such as req.hdr(x-forwarded-for) to get the IP from another proxy.
  • ipv6 (60 bytes): This will store an IPv6 address or an IPv6-mapped IPv4 address. It's otherwise the same as the ip type.
  • integer (32 bytes): This is often used to store a client ID number taken from a cookie, header, etc. It's also useful for storing things like the frontend ID via fe_id, or int(1) to track everything under one entry (for reasons we'll cover in a later section).
  • string (len bytes): This will store a string and is commonly used for session IDs, API keys, and the like. It's also useful when creating a dummy header to store custom combinations of samples. It requires a len argument giving the number of bytes that can be stored; larger samples will be truncated.
  • binary (len bytes): This is used for storing binary samples. It's most commonly used for persistence by extracting a client ID from a TCP stream with the bytes converter. It can also be used to store other samples, such as the base32 (IP+URL) fetch. It requires a len argument giving the number of bytes that can be stored; longer samples will be truncated.

The type that you choose defines the keys within the table. For example, if you use a type of ip then we’ll be capturing IP addresses as the keys.
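
For example, here's a minimal sketch of feeding the table a forwarded client IP rather than the connection source, as mentioned for the ip type above (the table name st_xff is hypothetical):

backend st_xff
    stick-table type ip size 1m expire 10m store http_req_rate(10m)

frontend fe_main
    bind *:80
    # Track the client IP reported by an upstream proxy instead of src
    http-request track-sc0 req.hdr(x-forwarded-for) table st_xff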

Types of Values

After the store keyword comes a comma-delimited list of the values that should be associated with a given key. While some values can be set using ACLs or via the Runtime API, most, such as http_req_rate, are calculated automatically by HAProxy. You can store as many values as you like for a given key.

There are many values that a stick table can track. For a full list of values, see the stick-table section of the documentation , but here are some interesting highlights:

http_req_rate

This is likely the most frequently stored/used value in stick tables. As its name may imply, it stores the number of HTTP requests, regardless of whether they were accepted or not, that the tracked key (e.g. source IP address) has made over the specified time period. Using this can be as simple as the following:

stick-table type ip size 1m expire 10s store http_req_rate(10s)
tcp-request inspect-delay 10s
tcp-request content track-sc0 src
http-request deny if { sc_http_req_rate(0) gt 10 }

The first line defines a stick table for tracking IP addresses and their HTTP request rates over the last ten seconds. This is done by storing the http_req_rate value, which accepts the period as a parameter. Note that we’ve set the expire parameter to match the period of 10 seconds.

The second line is what inserts or updates a key in the table and updates its counters. Using the sticky counter sc0, it sets the key to the source IP with the src fetch method. You might wonder when to use tcp-request content track-sc0 instead of http-request track-sc0. It's mostly a matter of preference, but since the TCP phase happens before the HTTP phase, you should order tcp-* directives before http-* ones or you'll get warnings when HAProxy starts up. Also, if you want the ability to deny connections in the earlier TCP phase, lean towards using the tcp-request variant. However, if you're capturing HTTP headers, cookies, or other data encapsulated within the HTTP message, then to use tcp-request content track-sc0 you must also add an inspect-delay directive. We'll talk about that a little later on.

Finally, the third line denies the request with a 403 Forbidden if the client has made more than 10 requests over the 10-second period. Notice that when deciding whether to deny the request, we check the value of http_req_rate with the sc_http_req_rate fetch method, passing it 0, the number corresponding to our sticky counter, sc0.

Values that return a rate, like http_req_rate, all take an argument specifying the time range that they cover. The maximum time that can be tracked is about 30 days (e.g. 30d). For longer periods of time, consider using the counter http_req_cnt and extrapolating from there.
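
A minimal sketch of that approach, counting total requests instead of a rate (the 24d expiry, matching the longest window used elsewhere in this article, and the threshold are arbitrary):

stick-table type ip size 1m expire 24d store http_req_cnt
tcp-request content track-sc0 src
http-request deny if { sc_http_req_cnt(0) gt 100000 }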

conn_cur and conn_rate

Two closely related counters, conn_cur and conn_rate , track how many connections a given key has or is making. The conn_cur counter is automatically incremented or decremented when the tcp-request content track-sc0 src line is processed to reflect the number of currently open connections for the key, or source IP. The conn_rate counter is similar but is given a period of time and calculates an average rate of new connections over that period.

stick-table type ip size 1m expire 10s store conn_cur
tcp-request content track-sc0 src
tcp-request content reject if { sc_conn_cur(0) gt 10 }

One way to use this is to detect when a client has opened too many connections so that you can deny any more from them. In this case, the connection will be rejected and closed if the source IP currently has more than 10 connections open.

These counters are primarily used to protect against attacks that involve many new connections originating from the same IP address. In the next section, you'll see HTTP counters, which are more effective at protecting against HTTP request floods because they track requests independently of whether HTTP keep-alive or multiplexing is used. For floods of new connections, however, these connection counters are the best defense.
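
conn_rate works much the same way; here's a minimal sketch, with an arbitrary 10-second window and threshold:

stick-table type ip size 1m expire 1m store conn_rate(10s)
tcp-request content track-sc0 src
tcp-request content reject if { sc_conn_rate(0) gt 20 }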

http_err_rate

This tracks the rate of HTTP requests that end in an error code (4xx) response. This has a few useful applications:

    • You can detect vulnerability scanners, which tend to get a lot of error pages like 404 Not Found
    • You can detect missing pages by using a URL path as the stick table key. For example:

      stick-table type string len 128 size 2k expire 1d store http_err_rate(1d)
      tcp-request content track-sc0 path

      This will make a table that can be retrieved by the Runtime API and shows the error rate of various paths:

      # table: fe_main, type: string, size:2048, used:2
      0xbc929c: key=/ use=0 exp=86387441 http_err_rate(86400000)=0
      0xbc99ac: key=/foobar use=0 exp=86390564 http_err_rate(86400000)=1
    • You can detect login brute force attacks or scanners. If your login page produces an HTTP error code when a login fails, then this can be used to detect brute force attacks. For this, you would track on src rather than on path as in the previous example (see the sketch after this list).
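
A minimal sketch of that last idea, tracking error rates per source IP (the 10-minute window and threshold are arbitrary):

stick-table type ip size 1m expire 10m store http_err_rate(10m)
tcp-request content track-sc0 src
http-request deny if { sc_http_err_rate(0) gt 10 }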

bytes_out_rate

The bytes_out_rate counter measures the rate of traffic being sent from your server for a given key, such as a path. Its primary use is to identify the content or users that generate the most traffic. It has other interesting uses as well: it can help measure traffic by site or path, which you can use for capacity planning or to see which resources might need to be moved to their own cluster (e.g. if you operate a CDN, it could be used to select heavily trafficked content to move to other caching nodes).

We might also use bytes_out_rate as another data set to feed into an anomaly detection system (e.g. a web script that never sends much traffic suddenly sending 3 GB might indicate a successful exfiltration of data).

Similar to bytes_out_rate, bytes_in_rate observes how much traffic a client is sending. That could be used to detect anomalous behavior, or to factor into billing on a VPN system where client traffic is counted in both directions.
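
Here's a minimal sketch that records both rates per source IP so they can be read later over the Runtime API (the one-minute windows are arbitrary):

stick-table type ip size 1m expire 1h store bytes_out_rate(1m),bytes_in_rate(1m)
tcp-request content track-sc0 src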

gpc0 / gpc1

The general purpose counters, gpc0 and gpc1, are special, along with gpt0 (the general purpose tag), in that they default to 0 when created and are not updated automatically. ACLs can increment a counter via the sc_inc_gpc0 fetch method, so you can use it to track custom statistics.

If you also store gpc0_rate, it will give you a view of how quickly gpc0 is being incremented, which tells you how frequently the event you're counting is happening.
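
Here's a minimal sketch of that pattern, assuming we want gpc0 to count clients that exceed a request-rate limit; evaluating the ACL that uses sc_inc_gpc0 is what increments the counter (the threshold and ACL names are arbitrary):

stick-table type ip size 1m expire 10m store gpc0,gpc0_rate(10s),http_req_rate(10s)
tcp-request content track-sc0 src
acl exceeds_limit sc_http_req_rate(0) gt 10
# sc_inc_gpc0 increments gpc0 for this key and returns the new value
acl mark_abuser sc_inc_gpc0(0) gt 0
tcp-request content reject if exceeds_limit mark_abuser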

Making Decisions Based on Stick Tables

Now that you’ve seen how to create stick table storage and track data with it, you’ll want to be able to configure HAProxy to take action based on that captured information. Going back to a common use case for stick tables, let’s see how to use the data to persist a client to a particular server. This is done with the stick on directive and is usually found in a backend section looking like the following:

stick-table type string len 32 size 100k expire 30m
stick on req.cook(sessionid)

In this example, notice that we don't use the store parameter on the stick-table directive. The server ID, an integer that HAProxy uses to identify each server, is unique in that you don't need to declare it with a store keyword; when needed for persistence, it is stored in the stick table automatically. Once the stick on directive extracts the client's session ID from a cookie and stores it as the key in the table, the client will continue to be directed to the same server.

While on the topic of persistence, let's say we have a cluster of MySQL servers participating in master-master replication and we're worried that writing to one might cause a duplicate primary key if, at that moment, the primary master goes down and then comes back up. Normally this creates a rather complicated situation in which each MySQL server has some queries that the other doesn't, and it takes a lot of work to get them back in sync. Suppose that instead we added the following to our MySQL backend:

backend mysql
    mode tcp
    stick-table type integer size 1 expire 1d
    stick on int(1)
    default-server on-marked-down shutdown-sessions
    server primary 192.168.122.60:3306 check
    server backup 192.168.122.61:3306 check backup

With this configuration, we store only a single entry in the stick table, where the key is 1 and the value is the server_id of the active server. Now if the primary server goes down, the backup server’s server_id will overwrite the value in the stick table and all requests will keep going to the backup even if the master comes back online. This can be undone by cycling the backup node into maintenance mode, or via the Runtime API, when you are ready to have the cluster resume normal operations.

Did you know? on-marked-down shutdown-sessions causes HAProxy to close all existing connections to a server when it is marked as down. Normally, HAProxy allows existing connections to finish, which could result in duplicate primary keys if those connections kept working, or in query timeouts if they didn't.

Another way to use stick tables is for collecting information about traffic to your website so that you can make decisions based on it. Say that you wanted to know if it was safe to disable TLS 1.1. You could set up a stick table that tracks the TLS versions that people are using. Consider the following example:

backend st_ssl_stats
    stick-table type string len 32 size 200 expire 24d store http_req_rate(24d)

frontend fe_main
    tcp-request inspect-delay 10s
    tcp-request content track-sc0 ssl_fc_protocol table st_ssl_stats

Now you can query the server and see which TLS protocols have been used:

$ echo "show table st_ssl_stats" | socat stdio UNIX-CONNECT:/var/run/hapee-1.8/hapee-lb.sock# table: st_ssl_stats, type: string, size:200, used:20xe4c62c: key=TLSv1 use=0 exp=2073596788 http_req_rate(2073600000)=10xe5a18c: key=TLSv1.2 use=0 exp=2073586582 http_req_rate(2073600000)=2

Or you could turn it around and track clients who have used TLSv1.1 by IP address:

backend st_ssl_stats
    stick-table type ip size 200 expire 1h store http_req_rate(1d)

frontend fe_main
    tcp-request inspect-delay 10s
    tcp-request content track-sc0 src table st_ssl_stats if { ssl_fc_protocol TLSv1.1 }

Now your stick table is a list of IPs that have used TLSv1.1. To learn more about the Runtime API, take a look at our blog post Dynamic Configuration with the HAProxy Runtime API .

If you look through the documentation, you will see fetches specific to stick tables, such as sc_http_req_rate (there's one for each value you can store in a stick table), all starting with sc_. You will also notice that some have sc0_, sc1_, and sc2_ aliases that take no argument; these do the same thing but are deprecated because they don't let you access all of the sticky counters. These fetches, used in conjunction with ACLs, return the values in the stick table, giving you the information you need to decide whether to deny a connection or request and protect your website from malicious activity.

For example, to block clients that have made more than 100 requests over the time period defined in the stick table definition, tracked by the key defined in the track line, you would use sc_http_req_rate in the following way:

http-request deny if { sc_http_req_rate(0) gt 100 }

If you aren't tracking the key that you want to look up, you can use table_http_req_rate and similar fetches to retrieve a value without updating it. Using track-sc* updates http_req_rate and similar counters, while looking up a value this way does not. These are converters: they take the key as input and the table name as an argument, and they output the value. For example, we could do:

http-request deny if { src,table_http_req_rate(st_src_global) gt 100 }

If you are already tracking the key via an http-request track-sc0 or tcp-request content track-sc0 line elsewhere, these converters do cost a small amount of extra CPU work compared with reading the tracked counter directly. However, there are a few good reasons to use them:

      • You want to check whether a request should be blocked without incrementing the request counter by tracking it (so that a client can make 10 requests per second and everything above that gets blocked, rather than having the blocked requests themselves keep the counter elevated so the client stays blocked until it cools down).
      • You want to pass a different key; for example, passing req.arg(ip) instead of src would allow an API of sorts where you could request http://192.168.122.64/is_blocked?ip=192.168.122.1 and see whether that IP is blocked (or what its request rate is). A sketch of this follows the list.
      • You're using the Stick Table Aggregator and want to query data from the table that it creates (a new table is created to hold the aggregated data).
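
As a sketch of the second idea, using the url_param fetch to read the ip argument from the query string (the /is_blocked path is hypothetical; st_src_global is carried over from the earlier example):

frontend fe_main
    bind *:80
    # Hypothetical lookup endpoint: returns 403 when the queried IP is over the limit,
    # without updating any counters for that IP
    http-request deny if { path /is_blocked } { url_param(ip),table_http_req_rate(st_src_global) gt 100 }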

Other Considerations

inspect-delay

Let’s talk about a line that is sometimes needed and ends up causing confusion:

tcp-request inspect-delay 10s

You only need to use this in a frontend or backend when a statement's ACL requires information that isn't yet available in the phase where the statement is processed. For example, tcp-request content reject if { path_beg /foo } needs a tcp-request inspect-delay because HAProxy won't otherwise wait during the TCP phase for the HTTP URL path to arrive. In contrast, http-request deny if { path_beg /foo } doesn't need a tcp-request inspect-delay line because HAProxy won't process http-request rules until it has a full HTTP request.

When tcp-request inspect-delay is present, it will hold the request until the rules in that block have the data they need to make a decision or until the specified delay is reached, whichever is first.
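
Putting the two together, here's a minimal sketch of the example from the paragraph above (assuming an HTTP frontend):

frontend fe_main
    bind *:80
    tcp-request inspect-delay 10s
    tcp-request content reject if { path_beg /foo }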

nbproc

If you are using the nbproc directive in the global section of your configuration, then each HAProxy process has its own set of stick tables. The net effect is that you’re not sharing stick table information among those processes. Also note that the peers protocol, discussed next, can’t sync between processes on the same machine.

There are two ways to solve this. The first is to use the newer nbthread directive instead. This is a feature introduced in HAProxy Enterprise Edition 1.8r1 and HAProxy 1.8 that enables multithreading instead of multiple processes and shares memory, thus sharing stick tables between threads running in a single process. See our blog post Multithreading in HAProxy to learn more about it.
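
A minimal sketch of enabling it (the thread count of four is arbitrary):

global
    nbthread 4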

Another solution is to use a configuration like the following:

listen fe_main
    bind *:443 ssl crt /path/to/cert.pem
    bind *:80
    server local unix:/var/run/hapee-1.8/ssl_handoff.sock send-proxy-v2

frontend fe_secondary
    bind unix:/var/run/hapee-1.8/ssl_handoff.sock accept-proxy process 1
    # Stick tables, use_backend, default_backend, etc. go here.

The first proxy terminates TLS and passes traffic to a single server listed as server local unix:/var/run/hapee-1.8/ssl_handoff.sock send-proxy-v2 . Then you add another frontend with bind unix:/var/run/hapee-1.8/ssl_handoff.sock accept-proxy process 1 in it. Inside this frontend you can have all of your stick table and statistics collection without issue. Since TLS termination usually takes most of the CPU time, it’s highly unusual to need more than one process for the backend work.

peers

Now that we've covered how to use stick tables, something to consider is setups that use HAProxy in active-active clusters, where a new connection from a client may end up on any one of multiple HAProxy servers, for example via Route Health Injection or an Amazon Elastic Load Balancer. In that case, each HAProxy server keeps its own set of stick table entries and doesn't see the entries stored on the other node. To solve that problem, you can add a peers section to the top of your configuration:

peers mypeers
    peer centos7vert 192.168.122.64:10000
    peer shorepoint 192.168.122.1:10000

Then change your stick table definition to include a peers argument:

stick-table type string len 32 size 100k expire 30m peers mypeers

At least one of the peers needs to have a name that matches the server's host name, or you can tell HAProxy which peer name to use for itself with the localpeer directive in the global section (or the -L command-line argument).

Now the two servers will exchange stick table entries. There is a downside, though: the counters aren't summed, so an http_req_rate value pushed from one node will overwrite the value on the other, rather than both nodes seeing the sum of the two.

Enter the Stick Table Aggregator. This is a feature of HAProxy Enterprise Edition that watches for values coming in over the peers protocol, adds the values together, then returns the combined result back to each of the HAProxy instances. The benefit of this is the ability to associate events that you wouldn’t be able to otherwise, since the data resides on two or more different nodes.

For example, in an active-active cluster of HAProxy load balancers, an attacker will be hitting both instances. If you aren't combining the data, you're only seeing half of the picture. Getting an accurate representation of the state of your nodes is important for detecting and stopping attacks.

Check out our webinar DDoS Attack and Bot Protection with HAProxy Enterprise Edition for a full example of using the Stick Table Aggregator.

Conclusion

In this article, you learned about HAProxy's in-memory storage mechanism, stick tables, which let you track client activities across requests, enable server persistence, and collect real-time metrics. Have a use for stick tables that we didn't mention? Post it below! Want to get a closer look at the HAProxy Enterprise Stick Table Aggregator for combining stick table data from multiple nodes? Contact us to learn more or sign up for a free trial of HAProxy Enterprise Edition.

You should now have an idea of what stick tables can be used for and how to get started using them. This allows you to do a great many things, but we really just scratched the surface. Stay tuned as we continue to add content around this and other HAProxy features!