It's the end of the year and this probably isn't my last post. This topic is something I've had to think about for some time, since the start of my job in Aug 2014. They were using private IPs and I had no clue what that meant, except that the machines could talk to each other without incurring bandwidth charges. The concept was foreign to me because I used to bind MySQL to all addresses in order to connect from my web server, which I know is bad practice and have rectified today, heh.
A private network is similar to a LAN in your home, where you connect multiple machines to a switch and voila, they all get internal IPs. These internal IPs allow the switch to direct traffic to the correct place quickly! There can be a delay going from external IP to external IP, whereas internal-to-internal traffic has minimal overhead. I say there *can* be because some switches are smart enough to know where to route traffic and short-circuit the trip.
What does this look like? Your server has two network interfaces (NICs): one connected to the internet (effectively) and the other connected to a switch for internal traffic. When you send traffic to another internal IP, the netmask associated with your internal IP determines whether a packet with that destination IP gets sent out through the internal NIC or the external one.
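That netmask decision is easy to play with in Python's standard `ipaddress` module. This is just a sketch of the idea; the `10.132.0.5/16` address is a made-up example, not one of my actual droplet IPs:

```python
import ipaddress

# Hypothetical internal NIC: IP 10.132.0.5 with a /16 netmask.
internal_iface = ipaddress.ip_interface("10.132.0.5/16")

def goes_out_internal_nic(dest_ip):
    """Mimic the kernel's routing choice: if the destination falls
    inside the internal interface's subnet, the packet goes out the
    internal NIC; otherwise it takes the default (external) route."""
    return ipaddress.ip_address(dest_ip) in internal_iface.network

print(goes_out_internal_nic("10.132.0.9"))    # another private IP -> True
print(goes_out_internal_nic("93.184.216.34")) # a public IP -> False
```

Same idea the kernel applies, just spelled out: the netmask carves out which destinations count as "local" to that interface.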
It's interesting stuff. I just enabled private networking on DigitalOcean for my two droplets, set up the internal interface, bound MySQL to the internal IP, and pointed my web server at the database's internal IP. This means the overhead previously incurred by sending to an external destination IP is gone, and there's no more external access to MySQL either. External access has always been my thing, but since joining my current job I've been introduced to SSH tunneling, so I can easily create a tunnel to my database server and ride on that :).
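The two pieces of that setup look roughly like this. The config path, the IP `10.132.0.5`, and `db.example.com` are all placeholders for illustration, not my real values:

```shell
# In MySQL's config (e.g. /etc/mysql/my.cnf; exact path varies by
# distro), bind to the private IP only:
#
#   [mysqld]
#   bind-address = 10.132.0.5
#
# For ad-hoc access from my laptop, tunnel over SSH instead of
# exposing MySQL publicly: forward local port 3307 through the
# droplet's public address to MySQL on the private IP.
ssh -L 3307:10.132.0.5:3306 user@db.example.com
# ...then connect locally with: mysql -h 127.0.0.1 -P 3307
```

The tunnel works because the SSH connection terminates on the droplet itself, which can reach the private IP even though the outside world can't.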
I also upgraded to PHP 7.