Continuing from the Bamboo plan for the AWS build, it is now time to look at the deployment plan. For each environment, I have four sub-plans: Create Stack, Deploy Config, Swap URL and Delete Stack.
Create Stack: As explained in part 1, a stack means the infrastructure. For example, an Nginx stack has these components: an ELB, an Auto Scaling group, an AMI, EC2 instances with a certain Nginx version installed, and Route53 DNS entries.
Here are the detailed tasks: download the artifacts (code, CloudFormation template, baked AMI ID, etc.) from the build plan, upload the CloudFormation template to S3, then create the stack defined in the template.
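For illustration, here is a minimal sketch of those tasks with the AWS CLI; the bucket name, stack naming and exact Bamboo variable names are my assumptions, not the actual task configuration. CAPABILITY_IAM is needed because the template creates an IAM role.

# Upload the template from the build artifacts to S3 (bucket name is illustrative)
aws s3 cp nginx.template s3://jackie-pipeline/templates/nginx.template

# Create the stack; Bamboo exposes plan variables to scripts as bamboo_<name>
aws cloudformation create-stack \
    --stack-name "nginx-${bamboo_Env}-${bamboo_StackID}" \
    --template-url "https://s3-ap-southeast-2.amazonaws.com/jackie-pipeline/templates/nginx.template" \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=Env,ParameterValue="${bamboo_Env}" \
                 ParameterKey=Subnet,ParameterValue="${bamboo_Subnet}" \
                 ParameterKey=Version,ParameterValue="${bamboo_Version}"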
Some snippets of the CloudFormation template:

Mappings are very useful; a value can be retrieved with Fn::FindInMap (see the ELB snippet further down for an example).
"Mappings": { "ELBSecurityGroup": { "dev": { "back": [ "sg-a5c63ea1" ], "front": [ "sg-a5c63ea2" ] }, "uat": { "back": [ "sg-a5c63ea3" ], "front": [ "sg-a5c63ea4" ] }, "prod": { "back": [ "sg-a5c63ea5" ], "front": [ "sg-a5c63ea6" ] } }
An instance profile is better than a hard-coded access key:
"InstanceProfile": { "Properties": { "Path": "/", "Roles": [ { "Ref": "InstanceRole" } ] }, "Type": "AWS::IAM::InstanceProfile" }, "InstanceRole": { "Properties": { "AssumeRolePolicyDocument": { "Statement": [ { "Action": [ "sts:AssumeRole" ], "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] } } ] }, "ManagedPolicyArns": ["arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM"], "Path": "/", "Policies": [ { "PolicyName": "ReadS3andTable", "PolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadS3JackieNginx", "Effect": "Allow", "Action": [ "s3:Get*", "s3:List*" ], "Resource": [ "arn:aws:s3:::jackie-nginx/*", "arn:aws:s3:::jackie-nginx" ] }, { "Sid": "ReadTableJackieNginx", "Effect": "Allow", "Action": [ "dynamodb:BatchGetItem", "dynamodb:DescribeTable", "dynamodb:GetItem", "dynamodb:ListTables", "dynamodb:Query", "dynamodb:Scan" ], "Resource": "arn:aws:dynamodb:ap-southeast-2:XXXXXXXXXXX:table/jackie-nginx" } ] } } ] }, "Type": "AWS::IAM::Role" }
The ELB. Each Ref refers to a parameter, which can be passed in from Bamboo variables:
"ElasticLoadBalancer": { "Properties": { "CrossZone": "true", "HealthCheck": { "HealthyThreshold": { "Ref": "ELBHealthyThreshold" }, "Interval": { "Ref": "ELBInterval" }, "Target": { "Ref": "ELBTarget" }, "Timeout": { "Ref": "ELBTimeout" }, "UnhealthyThreshold": { "Ref": "ELBUnhealthyThreshold" } }, "Listeners": [ { "InstancePort": { "Ref": "ELBInstancePort1" }, "LoadBalancerPort": { "Ref": "ELBLoadBalancerPort1" }, "Protocol": { "Ref": "ELBProtocol1" } }, { "InstancePort": { "Ref": "ELBInstancePort2" }, "LoadBalancerPort": { "Ref": "ELBLoadBalancerPort2" }, "Protocol": { "Ref": "ELBProtocol2" }, "SSLCertificateId": "arn:aws:iam::XXXXXXXXXXXX:server-certificate/nginx.jackiechen.org" } ], "SecurityGroups": { "Fn::FindInMap": [ "ELBSecurityGroup", { "Ref": "Env" }, { "Ref": "Subnet" } ] }, "Scheme": "internet-facing", "Subnets": { "Fn::FindInMap": [ "ELBSubnet", { "Ref": "Env" }, { "Ref": "Subnet" } ] } }, "Type": "AWS::ElasticLoadBalancing::LoadBalancer" }
The Route53 record creates a CNAME for the ELB. The Fn::Join builds the record name as App-Owner-Env-Version.HostedZone, so it ends up as something like nginx-jackie-dev-1.jackiechen.org:
"DNSRecord": { "Properties": { "Comment": { "Fn::Join": [ "", [ { "Ref": "App" }, "-", { "Ref": "Owner" }, "-", { "Ref": "Env" }, "-", { "Ref": "Version" }, ".", { "Ref": "HostedZone" }, "." ] ] }, "HostedZoneName": { "Fn::Join": [ "", [ { "Ref": "HostedZone" }, "." ] ] }, "Name": { "Fn::Join": [ "", [ { "Ref": "App" }, "-", { "Ref": "Owner" }, "-", { "Ref": "Env" }, "-", { "Ref": "Version" }, ".", { "Ref": "HostedZone" }, "." ] ] }, "ResourceRecords": [ { "Fn::GetAtt": [ "ElasticLoadBalancer", "DNSName" ] } ], "TTL": "60", "Type": "CNAME" }, "Type": "AWS::Route53::RecordSet" },
The Auto Scaling group, launch configuration, scaling policy, CloudWatch alarms, etc. should also be included in the template. It is too much to cover here; I will upload the full template to my GitHub later.
Deploy Config: As also mentioned in part 1, config comes in different versions. A version could be a code release or a configuration change, e.g. an Nginx configuration file.
I use S3 to store all versions of the code, and DynamoDB to record which stack uses which version. Here is my Nginx config example:
Each config version has a folder in the S3 bucket.
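Publishing a new version is then just a recursive copy into a new prefix, matching the path the hot update script downloads from (the version string here is made up):

# Push config version 1.0.42 to its own folder in the bucket
aws s3 cp ./conf s3://jackie-nginx/1.0.42/files/conf --recursive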
Use DynamoDB to control which Stack should be on which version.
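The table can be as small as one item per stack; based on the jq query in the hot update script below, an item only needs an env_stack key and a version attribute. A sketch with made-up values:

# Point stack dev-001 at config version 1.0.42 (illustrative key/values)
aws --region ap-southeast-2 dynamodb put-item \
    --table-name jackie-nginx \
    --item '{"env_stack": {"S": "dev-001"}, "version": {"S": "1.0.42"}}'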
The hot deployment script deploys new code/config on the fly. In my Nginx example, I included a hot update script in the baked AMI and use the SSM agent to trigger it from Bamboo. Here are the code snippets:
do_hotupdate() {
    # Look up the target config version for this stack in DynamoDB
    JQ_SELECT=".Items[] | select(.env_stack.S==\"${Env}-${StackID}\") | .version.S"
    LATEST_VERSION=$(aws --output $OUTPUT_TYPE --region $REGION dynamodb scan --table-name jackie-nginx | jq --raw-output "$JQ_SELECT")
    # The currently deployed version is recorded in the Release file
    CURRENT_VERSION=$(cat /opt/openresty/nginx/conf/Release)

    date | tee -a /var/log/release.log
    echo "Current running config version for ${Env}-${StackID} is ${CURRENT_VERSION}" | tee -a /var/log/release.log
    echo "The latest config version for ${Env}-${StackID} is ${LATEST_VERSION}" | tee -a /var/log/release.log

    if [[ ${CURRENT_VERSION} != ${LATEST_VERSION} ]]; then
        # Pull the new config from S3 and reload Nginx gracefully
        echo "Downloading latest configs..." | tee -a /var/log/release.log
        aws s3 cp s3://jackie-nginx/${LATEST_VERSION}/files/conf /opt/openresty/nginx/conf --recursive | tee -a /var/log/release.log
        /etc/init.d/nginx reload |& tee -a /var/log/release.log
    else
        echo "no updates" | tee -a /var/log/release.log
    fi
}
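The function above lives in the Nginx init script, so SSM can reach it via /etc/init.d/nginx hotupdate (see the next snippet). The dispatch would roughly look like the sketch below; the reload command is my assumption based on the OpenResty layout used above.

# Inside /etc/init.d/nginx (sketch; the usual start/stop targets are omitted)
case "$1" in
    reload)
        /opt/openresty/nginx/sbin/nginx -s reload
        ;;
    hotupdate)
        do_hotupdate
        ;;
esac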
invoke_ssm_agent() {
    for ec2_instance_id in ${ec2_instance_ids}; do
        # Ask SSM to run the hot update script on the instance and capture the command ID
        ec2_instance_id_command_id=$(aws --output $OUTPUT_TYPE --region $REGION ssm send-command \
            --instance-ids ${ec2_instance_id} \
            --document-name "AWS-RunShellScript" \
            --comment "nginx config hot update" \
            --parameters commands="/etc/init.d/nginx hotupdate" \
            --output-s3-bucket-name "jackie-pipeline" \
            --output-s3-key-prefix "ssm" | jq -r .Command.CommandId)

        # Poll until the command leaves the Pending state
        ec2_instance_id_command_id_status=$(aws --output $OUTPUT_TYPE --region $REGION ssm list-command-invocations --command-id ${ec2_instance_id_command_id} --details | jq -r '.CommandInvocations[].Status')
        sleep 10
        while [ "${ec2_instance_id_command_id_status}" == "Pending" ]; do
            sleep 2
            ec2_instance_id_command_id_status=$(aws --output $OUTPUT_TYPE --region $REGION ssm list-command-invocations --command-id ${ec2_instance_id_command_id} --details | jq -r '.CommandInvocations[].Status')
        done

        # Fetch the command output that SSM wrote to S3 and show it in the build log
        aws --output $OUTPUT_TYPE --region $REGION s3 cp s3://jackie-pipeline/ssm/${ec2_instance_id_command_id}/${ec2_instance_id}/awsrunShellScript/0.aws:runShellScript/stdout stdout
        echo "---------------------------------------------------------------------------------------------"
        echo "The update status for " ${ec2_instance_id} " is " ${ec2_instance_id_command_id_status}
        echo "---------------------------------------------------------------------------------------------"
        cat stdout

        # Fail the deployment if Nginx logged an emergency error during reload
        grep emerg stdout
        if [ $? -eq 0 ]; then
            echo ERROR
            exit 1
        fi
    done
}
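The script assumes ${ec2_instance_ids} was populated earlier. One way to do that is to query the running instances by the stack-name tag that CloudFormation applies automatically; the stack naming below is hypothetical:

# Collect the instance IDs belonging to this stack (space-separated, text output)
ec2_instance_ids=$(aws --output text --region $REGION ec2 describe-instances \
    --filters "Name=tag:aws:cloudformation:stack-name,Values=nginx-${Env}-${StackID}" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].InstanceId')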