NGINX Proxies: Serving Multiple Endpoints in a Location

09:45, 07.06.2024

Article Content

  • Managing Multiple Locations of NGINX Proxies
  • 1. Configuration of Nginx
  • 1.1. The server Block Directive
  • 1.2. The location Block Directive
  • 2. Management of Multiple Proxy Endpoints
  • 2.1. Establishment of Endpoints
  • 2.2. The proxy_pass Directive
  • 2.3. Creation of Sample Data
  • To Sum Up

Managing Multiple Locations of NGINX Proxies

Before diving into the configuration details, let's start with the basics. Nginx is a powerful web server that can also act as a reverse proxy, load balancer, or forward proxy. Since the terminology can be confusing, let's briefly clarify what a load balancer does: it distributes incoming requests across a group of servers and then relays each server's response back to the client.
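For illustration only, here is a minimal sketch of a load-balancing configuration using an upstream block (the backend hostnames are placeholders):

# Hypothetical backend pool; backend1/backend2 are placeholder hostnames.
upstream app_servers {
    server backend1.example.com;
    server backend2.example.com;
}

server {
    listen 80;

    location / {
        # Requests are distributed across the pool (round-robin by default).
        proxy_pass http://app_servers;
    }
}

By default Nginx rotates through the pool round-robin; other balancing methods, such as least_conn, can be selected inside the upstream block.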

A reverse proxy, in turn, is an application that sits between clients and internal servers. It accepts client traffic and forwards it to the internal network, so users never access the internal servers directly; they reach them only through the reverse proxy. The major benefits of a reverse proxy are improved security, scalability, and centralized SSL/TLS termination.

Based on our practical experience in the field, we will explain how to configure Nginx to serve a couple of endpoints under the same location, using its reverse proxy capabilities.

1. Configuration of Nginx

Nginx configuration files define how the server handles HTTP requests. The main configuration file is called nginx.conf; on Ubuntu, it is located in the /etc/nginx directory. The directives in this file are grouped into blocks.
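As a simplified sketch (the paths and values here are illustrative), the nesting of directives into blocks looks like this:

# Simplified outline of /etc/nginx/nginx.conf
events {
    worker_connections 1024;    # a simple directive inside the events block
}

http {
    server {                    # a virtual server block
        listen 80;

        location / {            # a location block inside the server
            root /var/www/html;
        }
    }
}

After editing a configuration file, sudo nginx -t checks the syntax, and sudo systemctl reload nginx applies the changes without downtime.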

1.1. The server Block Directive

A server block (also called a context) defines a virtual server. In the illustration below, the listen directive sets the IP address/hostname and port:

server {
    listen 192.0.2.1:8080;
}

In this sample, the virtual server listens on the address 192.0.2.1 and port 8080.

1.2. The location Block Directive

The location block defines how the server handles matching HTTP requests. A location is specified with a prefix string or a regular expression; an HTTP request matches the block when its URI matches that prefix string or regular expression.

To serve content, we can use the root directive, as in the example below:

server {
    listen 192.0.2.1:8080;

    location /books {
        root /info/categories;
    }
}

In the sample above, files are served from the /info/categories directory. Because root appends the full request URI to its path, a request for /books/example.html is served from /info/categories/books/example.html.

An alternative is the alias directive. Unlike root, alias replaces the matched location prefix with the specified path rather than appending the full request URI to it:

location /books {
    alias /info/categories;
}
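To make the difference between root and alias concrete, here is an illustrative sketch (the two location blocks are alternatives for the same prefix, not meant to be used together in one server):

# Request: GET /books/example.html

location /books {
    root /info/categories;
    # root appends the full URI:
    # served file = /info/categories/books/example.html
}

location /books {
    alias /info/categories;
    # alias replaces the /books prefix:
    # served file = /info/categories/example.html
}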

2. Management of Multiple Proxy Endpoints

Here we will demonstrate how this works by creating two virtual servers that simulate two endpoints. After that, we will configure the Nginx server to proxy requests to both endpoints under one URL.

2.1. Establishment of Endpoints

Let's begin with two simple endpoints:

server {
    listen 8081;

    location /user1 {
        alias /info/user1;
    }
}

server {
    listen 8082;

    location /user2 {
        alias /info/user2;
    }
}

We have now defined two virtual servers:

  • The first listens on port 8081, serves content from the /info/user1 directory, and matches requests to /user1.
  • The second listens on port 8082, serves content from the /info/user2 directory, and matches requests to /user2.

2.2. The proxy_pass Directive

To set up the forwarding, we use the proxy_pass directive inside a location block. This directive passes matching HTTP requests to the specified address. Here is an illustration of how it looks:

server {
    listen 8000;

    location /api {
        proxy_pass http://192.0.2.1:8081/user1;
    }

    location /api/user2 {
        proxy_pass http://192.0.2.1:8082/user2;
    }
}

This creates a virtual server on port 8000 with two locations that work as follows:

  • /api forwards requests to the first endpoint (http://192.0.2.1:8081/user1)
  • /api/user2 forwards requests to the second endpoint (http://192.0.2.1:8082/user2)

Since Nginx selects the longest matching prefix, a request starting with /api/user2 is handled by the second location even though it also matches /api.

It is also important to understand how proxy_pass handles the request URI. There are two cases:

  • proxy_pass is specified with an address only (http://192.0.2.1:8081): the full request URI is passed to the upstream unchanged.
  • proxy_pass is specified with an address and a path (http://192.0.2.1:8081/user1): the part of the request URI matching the location prefix is replaced by that path.
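The two cases can be sketched as follows (the addresses and paths are illustrative, and the two location blocks are alternatives, not meant to coexist):

# Case 1: address only - the request URI is passed unchanged.
location /api {
    proxy_pass http://192.0.2.1:8081;
    # GET /api/echo.json  ->  http://192.0.2.1:8081/api/echo.json
}

# Case 2: address with a URI - the matched prefix /api is replaced by /user1.
location /api {
    proxy_pass http://192.0.2.1:8081/user1;
    # GET /api/echo.json  ->  http://192.0.2.1:8081/user1/echo.json
}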

2.3. Creation of Sample Data

Before testing the setup, create test files in the /info/user1 and /info/user2 directories. This can be done as follows:

$ sudo mkdir -p /info/user1 /info/user2
$ echo '{ "message": "Hi from user1" }' | sudo tee /info/user1/echo.json
{ "message": "Hi from user1" }
$ echo '{ "message": "Hi from user2" }' | sudo tee /info/user2/echo.json
{ "message": "Hi from user2" }

After creating the sample data, you can test the setup in the following way:

$ curl http://192.0.2.1:8000/api/echo.json
{ "message": "Hi from user1" }
$ curl http://192.0.2.1:8000/api/user2/echo.json
{ "message": "Hi from user2" }

Both JSON files appear in the output. To conclude, let's trace the first request in detail:

  • The request http://192.0.2.1:8000/api/echo.json is forwarded to http://192.0.2.1:8081/user1/echo.json, because the matched /api prefix is replaced by /user1.
  • The server on port 8081 processes http://192.0.2.1:8081/user1/echo.json and, through its alias directive, returns the file /info/user1/echo.json.

To Sum Up

Based on our practical experience with virtual servers, we have shared some useful recommendations and real examples of how to use the Nginx server as a reverse proxy, including how two endpoints can be served under one location path. We hope this article was helpful for your case and that you can easily apply what you need from the information above.
