Hey there everybody, my name is AJ, and I’m one of the System Administrators here at CARI.net. I’m going to be writing a few blog posts on how to leverage our Cloud Server infrastructure to bolster an existing website or application that resides in our data center and/or another service provider’s data center. The purpose of this article is to show you how load balancing works in general and how you can cheaply distribute your site’s traffic across multiple locations. Please note that this is not a full infrastructure migration (yet), so some of the resiliency components, like failover, are not present here.
Before we get started, I want to take the opportunity to give you an overview of what load balancing is and how it works. When people speak about load balancing, they are talking about distributing the workload of one server across multiple servers. This lowers the resource requirements of any single machine while improving resiliency and fault tolerance.
Two of the primary methods of load balancing are “Layer 4” (transport) and “Layer 7” (application) load balancing. Both can distribute load evenly among all the desired locations, or in a biased fashion based on your needs, but how they do it varies greatly. A Layer 4 load balancer typically acts like a router, taking in traffic on a specific port and routing it directly to a list of recipients. There’s typically no analysis of packets or categorization of traffic; that is left to the recipient server. This method can be incredibly fast, but its configuration options are limited and it offers little insight into, or protection against, the traffic passing through it. A Layer 7 load balancer, on the other hand, uses software to listen for requests on a specific port and then, based on the nature of each request, forwards the traffic to the recipient that matches the criteria. Layer 7 load balancers can handle URL rewrites, header insertions, and traffic analysis, and they are great for offloading SSL work from your actual web servers. (I’ll show a tiny illustrative snippet right after the assumptions list below.)
At this point you likely have questions based on some of the terminology I used, but you should have a theoretical understanding of how load balancing works, and there’s no lack of documentation out there to help with your research. Moving on, I’ll be making the following assumptions:
  • You have a (RedHat-based) server with a website that needs some more power and is running PHP 5.5 (if you’re not running 5.5… why not?). If 5.3 is sufficient, ignore the “remi repo” part.
  • You have purchased either a Linux Cloud Server or a CARIcloud account and have deployed your first CentOS6 VM.
  • You have access to your DNS records to make a couple of adjustments.
  • You have a working familiarity with the command line of a Linux Server, or are not scared of trying.
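To make the Layer 7 idea concrete, here is a minimal, illustrative nginx snippet; the upstream names static_pool and app_pool are hypothetical stand-ins for groups of back-end servers you would define elsewhere:
server {
      listen 80;
      location /images/ {
            proxy_pass http://static_pool; #requests under /images/ go to a pool of static-file servers
      }
      location / {
            proxy_pass http://app_pool; #everything else goes to the application servers
      }
}
A Layer 4 balancer never looks at the URL, so a path-based rule like this is only possible at Layer 7.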
If you don’t have all of those prerequisites, it’s not a huge deal. You can follow along to see what we do and then use this knowledge to design your first deployment in a scalable manner. Without further ado, let’s log into and update your new server.
#yum update -y
Now we’re going to install a couple of repos and then restart the server. The sed one-liner below flips the enabled flag to 1 on lines 5 through 14 of /etc/yum.repos.d/remi.repo; which repo sections those lines cover depends on your remi-release version, so glance at the file first to confirm it matches what you expect.
#rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
#sed -i '5,14s/\(enabled\=\).*/\11/' /etc/yum.repos.d/remi.repo
#yum install epel-release wget rsync fail2ban libnetfilter_conntrack.x86_64
#shutdown -r now
Now, there are a few different pieces of software that can act as a load balancer or reverse proxy. These include Nginx (pronounced “Engine X”), Pound, and Apache (httpd). I would suggest duplicating your existing web server configuration on the new machine. For brevity’s sake, I will be setting up nginx as both the load balancer and the web server. If you wish to use Apache instead, configure it to listen on 127.0.0.1 on a custom port that we will point the nginx traffic to later, for example:
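A minimal illustrative change, assuming the stock CentOS layout (Apache does not allow trailing comments, so the note sits on its own line):
#in /etc/httpd/conf/httpd.conf: bind Apache to localhost on a custom port
Listen 127.0.0.1:8080
Continuing on with the nginx configuration: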
#yum install nginx php-fpm php-cli php-mysqlnd php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc php-magickwand php-magpierss php-mbstring php-mcrypt php-mssql php-shout php-snmp php-soap php-tidy -y
There’s a whole pile of dependencies, but they’re worth it. Set the services to start on boot:
#chkconfig nginx on
#chkconfig php-fpm on
Now we configure the time zone for PHP. I’ll be using America/Los_Angeles since we’re on the West Coast and San Diego isn’t cool enough for its own time distinction for some reason… We’ll also set cgi.fix_pathinfo to 0, which stops PHP from guessing at script paths, a standard nginx/PHP security precaution:
#sed -i 's/^.\(date.timezone \=\).*/\1\ America\/Los\_Angeles/' /etc/php.ini
#sed -i 's/^.\(cgi.fix_pathinfo\) *=.*/\1 = 0/' /etc/php.ini
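To confirm the edits took, you can ask PHP directly; the first command should print America/Los_Angeles and the second should show the setting at 0:
#php -r 'echo date_default_timezone_get();'
#php -i | grep cgi.fix_pathinfo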
For the sake of simplicity, let’s go ahead and create /var/www/html if it doesn’t already exist:
#mkdir -p /var/www/html
Now we’re going to start configuring nginx. For starters, let’s move the default configs so they don’t get in the way:
#cd /etc/nginx
#mv conf.d conf.d.bak
#mkdir conf.d
#chmod 755 conf.d
#mv nginx.conf nginx.bak
#vim nginx.conf
user nginx; 
worker_processes 1; #this should not ever exceed the number of cores/cpu available 
error_log /var/log/nginx-error.log info; 
worker_rlimit_nofile 32768; #this can be increased based on site load; 65536 or higher is recommended for a high-traffic site 
 
events { 
      worker_connections 1024; #this can be increased based on site load, but each connection consumes memory, so scale it with your available RAM 
      use epoll; 
      accept_mutex on; 
      accept_mutex_delay 100ms; 
} 
http { 
      include /etc/nginx/conf.d/*.conf;  #include extra config files 
      log_format compression '$remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent" "$gzip_ratio"'; 
      access_log /var/log/nginx/access.log compression; #write access logs using the "compression" format defined above 
      server { 
            listen 80; 
            server_tokens off; 
            location / {
                  proxy_set_header Host $host; #preserve the original Host header for the back-end
                  proxy_set_header X-Real-IP $remote_addr; #pass the real client address along
                  proxy_pass http://websrv; #the request URI is passed through automatically
            } 
      }
}
Next we will configure which servers we want to use on the back end for the “websrv” upstream. This includes the local web server in addition to any other servers you already have configured as web servers:
#vi /etc/nginx/conf.d/upstream.conf
upstream websrv {
      ip_hash; #routes each client to the same back-end server on every request, for session persistence
      server 127.0.0.1:8080;
      server xxx.xxx.xxx.xxx:80; #this is your existing server
}
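If you want the biased distribution I mentioned earlier instead of an even split, each server line accepts a weight parameter. Here is a quick illustrative variant using the default round-robin method (note that combining weight with ip_hash requires nginx 1.3.1 or newer):
upstream websrv {
      server 127.0.0.1:8080 weight=1;
      server xxx.xxx.xxx.xxx:80 weight=2; #this server receives roughly two of every three requests
}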
Now we will configure the local web server. Based on your needs, this could vary greatly. This is going to be a basic site configuration for multiple sites on one server:
#vi /etc/nginx/conf.d/webserver.conf
server {
      listen 127.0.0.1:8080;
      server_name localhost;
      root /var/www/html; #tune to match your site needs
      location / {
            try_files $uri $uri/ /index.html; #check for a matching file, then a directory, then fall back
      }
      location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
      }
}
Now, unfortunately, out of the box nginx and PHP-FPM don’t really get along that well, so we need to introduce them by appending the SCRIPT_FILENAME parameter to the fastcgi parameter file. Note the single quotes: those dollar signs are nginx variables, and double quotes would let the shell expand them into nothing:
#echo "fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;" >> /etc/nginx/fastcgi_params
Let’s start nginx and php-fpm!
#service nginx start
#service php-fpm start
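With both services running, a quick smoke test confirms nginx is handing PHP requests off to php-fpm on the local back-end port. The info.php name here is just an example, and you should remove the file afterward, since phpinfo() output is handy for attackers:
#echo '<?php phpinfo();' > /var/www/html/info.php
#curl -s http://127.0.0.1:8080/info.php | grep 'PHP Version'
#rm /var/www/html/info.php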
Now that the environment is set up, let’s add a little bit of SSH security. We already installed fail2ban back at the start, so it just needs to be enabled and started:
#chkconfig fail2ban on
#service fail2ban start
This bans clients that repeatedly fail to log in over SSH, blocking them from reaching your server for 10 minutes (600 seconds) by default; you can change the ban time in /etc/fail2ban/jail.conf. fail2ban can monitor many other services as well and prevent a base level of malicious activity.
We can now safely copy the data between this new server and your existing one. You can either kick off a manual rsync each time, or use lsyncd to monitor the directories and synchronize data automatically as it changes.
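If you go the manual route, a one-shot pull from the existing server looks something like this; the paths here are placeholders, so adjust them to match your site layout:
#rsync -avz root@xxx.xxx.xxx.xxx:/var/www/html/ /var/www/html/
To automate it instead, install lsyncd: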
#yum install lsyncd -y
Next, we need to set up SSH keys (if you haven’t already) and copy them between the two servers for passwordless logins.
#ssh-keygen -t rsa -b 4096
Hit enter for the default location, and type a passphrase if you wish (it’s suggested, but not required). Next, you want to copy the key between the servers; this is what authenticates the passwordless logins. On server 1 (substitute the other server’s address and login user):
#ssh-copy-id root@xxx.xxx.xxx.xxx
Type the password and hit enter. You’ll get a notice saying the key has been copied into the default location. Repeat the process on server 2. Then, on the source server, where your primary data is stored, open up /etc/lsyncd.conf and add the following lines:
settings { logfile = "/var/log/lsyncd/lsyncd.log", statusFile = "/var/log/lsyncd/lsyncd.status" }
--fill in source (the local directory to watch), host (the new server's address), and targetdir (the destination path on that server)
sync{default.rsyncssh, source="", host="", targetdir=""}
#mkdir /var/log/lsyncd
#service lsyncd start
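To confirm replication is actually happening, you can watch the log file we configured above (Ctrl-C stops following it):
#tail -f /var/log/lsyncd/lsyncd.log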
Once this is done, the directories and files should begin replicating from the original server to the new one. Any time you make a change on the original server, the change will replicate to the remote server automatically, so long as lsyncd is running. If you’re running a CMS like WordPress, you will need to point the new server’s configuration files at the correct SQL server and set the SQL credentials as necessary.
Once all the data has migrated over, and you’ve verified that your site can connect as necessary, it’s time to point your main website’s DNS at the new Cloud Server. Lowering the TTL ahead of time, from the common default of 43200 seconds (12 hours) down to about 3600 (1 hour), will let the change propagate globally significantly faster.
This concludes the first blog post on this topic. If you have any questions or special requirements, join us in chat or give us a call! We have an incredibly talented support staff that should have no problems helping you with your specific deployment.
Thanks for reading,
AJ Wasem
Senior System Administrator
CARI.net