SeaFuzz - 6 months ago

Git Deployment Behind Load Balancer

I am trying to land on an effective deployment strategy in a bit of a challenging environment. We have several HTTP server instances (on Google Compute Engine) that sit behind a load balancer with a single public IP address. The web servers themselves have no public IP addresses, so using Git receive hooks (push-to-deploy) in this setup is pretty much out, from what I can tell.

In the environment we are about to migrate away from, we did the not-so-recommended

git pull origin master

to do deployments, and we would like to move away from that workflow if possible. But as of now I am not seeing a very good alternative.

Has anyone dealt with this kind of a deployment challenge and what was the solution that worked?


There are many possible solutions to this problem, but here are a couple:

Deploy to all servers using the API

One option is to run a deploy command (or push a set of code) to all servers at once. To do this you'll first use a CLI call to look up the IP addresses of your web servers, then connect to each one over SSH (via your bastion host if you have one), and run the same command on all servers at once.

To achieve this you'll likely need to write a script that puts the steps together and executes them as one.
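A minimal sketch of such a script, assuming instances tagged "web", a bastion reachable as deploy@bastion.example.com, and the app living in /srv/app (all of these names are illustrative assumptions, not from the question):

```shell
#!/usr/bin/env bash
# Hypothetical fan-out deploy: run one command on every web server.
# The tag, bastion address, and repo path below are illustrative.
set -euo pipefail

# Run $deploy_cmd on each host via $remote_shell (e.g. "ssh -J bastion").
deploy_all() {
  local remote_shell=$1 deploy_cmd=$2 host
  shift 2
  for host in "$@"; do
    echo "deploying to ${host}"
    # remote_shell is intentionally unquoted so it may carry its own flags
    $remote_shell "${host}" "${deploy_cmd}"
  done
}

# Usage (requires an authenticated gcloud and SSH access via the bastion):
#   hosts=$(gcloud compute instances list --filter='tags.items=web' \
#             --format='value(networkInterfaces[0].networkIP)')
#   deploy_all 'ssh -J deploy@bastion.example.com' \
#     'cd /srv/app && git fetch origin && git reset --hard origin/master' $hosts
```

Note the fetch-plus-reset in the example command: unlike a bare git pull, it can't stop on a merge prompt on an unattended server.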

Deploy immutably

This is the "modern" approach, where you take a server (outside of your load balancer), deploy your code to it and ensure it's ready, then take a snapshot/image of it and push that image out to your load balancer - replacing your current servers.

This method is "safer" because you aren't relying on, say, a git pull succeeding on every server (network connectivity, merge conflicts, and so on). You know each instance will contain exactly what you baked into the image.

To achieve this kind of thing you can use a tool like Packer to build server images (starting from an image you create from one of your current instances), and automation tools like Chef, Puppet, Terraform, etc. to provision the new instances and attach them to your load balancer.
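On GCE specifically, if your instances live in a managed instance group, the image-and-replace cycle can be driven from gcloud alone. A sketch under assumed names (a staging disk web-staging, a managed instance group web-mig, zone us-central1-a, all hypothetical); the function only prints the commands so you can review them before piping to bash:

```shell
#!/usr/bin/env bash
# Hypothetical immutable-rollout sketch for GCE. The function emits the
# gcloud commands rather than running them, so you can inspect the plan
# first, then execute with: immutable_rollout v42 | bash
# The disk, instance group, and zone names are illustrative assumptions.
set -euo pipefail

immutable_rollout() {
  local version=$1 zone=${2:-us-central1-a}
  cat <<EOF
gcloud compute images create web-${version} --source-disk=web-staging --source-disk-zone=${zone}
gcloud compute instance-templates create web-tpl-${version} --image=web-${version}
gcloud compute instance-groups managed rolling-action start-update web-mig --version=template=web-tpl-${version} --zone=${zone}
EOF
}
```

The rolling update replaces instances behind the load balancer a batch at a time, so the group keeps serving traffic while the new image rolls out.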