How to deploy an Nginx container load balancer with failover?

Hi all, great project, team! I am creating an API application that I want to deploy on the Akash cloud. My application is made up of the following containers:

  1. Nginx Container: API Gateway
  2. API container
  3. SQL Server Database Container

I am using Nginx as a gateway to route all network traffic to the API container, and I also have a DNS record that points to the Nginx container endpoint.

In my production environment, the Nginx container acts as a load balancer. But with a single node, Nginx itself or the server it runs on may fail. To ensure high availability, the first thing to consider is deploying a second Nginx server. However, the two Nginx servers have different IP addresses, so how can I ensure that traffic automatically switches to the other server when one fails?

I know that Nginx HA (high availability) exists, but it is intended for on-premise environments. How could I achieve the same in my production environment on the Akash cloud, without using Kubernetes, an AWS Network Load Balancer, an Azure Standard Load Balancer, or Google Cloud Platform?
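For context, the on-premise Nginx HA setup referred to above typically pairs Nginx with keepalived, which moves a floating virtual IP between the two servers via VRRP. A minimal sketch of the master node's config (interface name, router ID, and the VIP are all placeholders):

```
# /etc/keepalived/keepalived.conf on the primary node (sketch)
vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the second node
    interface eth0          # placeholder NIC name
    virtual_router_id 51
    priority 100            # use a lower priority on the backup node
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24       # floating VIP that DNS points to
    }
}
```

If the master stops sending VRRP advertisements, the backup claims the VIP, so clients keep reaching the same IP. This is exactly the part that does not translate to most cloud environments, since they generally do not allow claiming a shared IP at layer 2.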

Any documentation, example or idea will be greatly appreciated.


Hello hanarce1 -

With Akash deployments - and in your case the API endpoints - simple load balancing can be achieved via DNS round robin. With this approach you could deploy the API endpoints on different Akash providers, ensuring that the failure of a single provider/endpoint does not result in downtime.
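As a sketch, DNS round robin just means publishing multiple A records for the same name, one per provider. The name, TTL, and addresses below are placeholders (TEST-NET addresses), not real Akash endpoints; a short TTL helps clients move off a failed provider sooner:

```
; Hypothetical zone-file fragment: one name, two A records,
; each pointing at an API deployment on a different Akash provider.
api.example.com.  300  IN  A  203.0.113.10   ; provider A
api.example.com.  300  IN  A  203.0.113.20   ; provider B
```

Note that plain round robin does no health checking - a resolver may still hand out the address of a failed provider until the record is updated or a health-checked DNS service is used.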

If the Nginx load balancer is necessary/required - you would likely want to run that load balancer external to Akash and load balance over redundant Akash deployments of the API endpoint.
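A minimal sketch of what that external Nginx could look like, assuming two redundant API deployments on different providers (the hostnames and ports below are placeholders for whatever URIs your providers assign):

```nginx
# Sketch: external Nginx balancing over two redundant Akash deployments.
upstream akash_api {
    # Placeholder provider endpoints; mark a server unhealthy after
    # 3 failures and retry it after 30s.
    server provider-a.example.com:31234 max_fails=3 fail_timeout=30s;
    server provider-b.example.com:32345 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://akash_api;
        # Fail over to the next upstream on connection errors,
        # timeouts, or gateway errors.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

The `max_fails`/`fail_timeout` and `proxy_next_upstream` directives give you passive failover between the deployments; the external Nginx itself remains a single point of failure unless it is in turn made redundant (e.g. via DNS round robin as above).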

Let me know if this answers the question. Additionally - if you have further questions - I would recommend posting in the deployments channel of our Discord server. Our admins and community experts monitor those channels constantly and respond quickly. Our Discord server is linked below.