In the client-server design we saw earlier, users connect directly to the web server. This becomes problematic if the web server goes offline or is overloaded with user requests. To deal with such problems, we typically use a load balancer.
A load balancer evenly distributes incoming traffic among the web servers defined in a load-balanced set. Clients connect to the public IP of the load balancer, so the web servers are no longer directly reachable by clients. For better security, private IPs are used for communication between servers. A private IP is an IP address reachable only by servers in the same network; it is not reachable over the internet. The load balancer communicates with the web servers through these private IPs.
Similarly, a load balancer can sit between the web servers and the database servers. This also provides a failover mechanism for both tiers (provided we have replication): if one of the servers fails, we can route traffic to another server. We can also add more servers as load increases and distribute traffic among them using algorithms such as round-robin, random, least connections, or fastest response time.
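To make the distribution algorithms above concrete, here is a minimal sketch of three of them in Python. The server pool, private IPs, and connection counter are hypothetical; a real load balancer would also track server health and update connection counts as requests open and close.

```python
import itertools
import random

# Hypothetical pool of backend web servers, addressed by private IP.
SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

# Round-robin: cycle through the pool in a fixed order.
_rr = itertools.cycle(SERVERS)

def round_robin():
    return next(_rr)

# Least connections: pick the server with the fewest active connections.
# In practice this counter would be updated as requests start and finish.
active_connections = {s: 0 for s in SERVERS}

def least_connections():
    return min(SERVERS, key=lambda s: active_connections[s])

# Random: pick any server uniformly at random.
def random_choice():
    return random.choice(SERVERS)
```

Round-robin is simplest and works well when servers are identical and requests are uniform; least connections adapts better when request durations vary widely.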
There are both hardware and software load balancers. Hardware load balancers are highly performant but expensive. Software load balancers such as HAProxy and Nginx run on commodity hardware or in cloud environments, providing a less expensive and more flexible solution.
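As an illustration of a software load balancer, here is a minimal Nginx configuration sketch that proxies incoming requests to a pool of backend web servers over private IPs. The upstream name and IP addresses are hypothetical:

```nginx
# Hypothetical backend pool reached over private IPs.
upstream web_backend {
    least_conn;          # use the least-connections algorithm
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;           # clients connect to the load balancer's public IP
    location / {
        proxy_pass http://web_backend;
    }
}
```

Omitting the `least_conn` directive would make Nginx fall back to its default round-robin behavior.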