I recently used AWS ECS (Elastic Container Service) with Fargate to deploy a backend REST API. I’ll make a separate post on why I chose this architecture. Now, the way ECS works is that it pulls the latest container image from ECR (Elastic Container Registry), where you have to upload your Dockerized container images.
The problem happens when you have to make changes to your code. Whenever you make a change, you will have to build the Docker container image, tag it, and push it to ECR. If you have worked with Docker images before, you’d know that it takes a fair amount of time.
Also, it feels a little repetitive and tedious to do this each time you update your code, no? It would be so nice if all this happened automatically and all you had to do was push your code to the Git repo like you always do. Well, let’s implement this!
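For reference, here is a rough sketch of that manual build-tag-push cycle. The account ID, region, and repository name below are placeholders I made up for illustration, not values from my setup — substitute your own.

```shell
# Placeholder values - substitute your own AWS account ID, region, and repo name.
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
REPO_NAME=my-api

# ECR repository URIs always follow this pattern:
REPOSITORY_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${REPO_NAME}"
echo "$REPOSITORY_URI"   # 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api

# The repetitive cycle itself (requires Docker and the AWS CLI, shown as comments):
#   docker build -t "$REPO_NAME" .
#   docker tag "$REPO_NAME:latest" "$REPOSITORY_URI:latest"
#   docker push "$REPOSITORY_URI:latest"
```

Three commands every single time you change a line of code — this is exactly the loop we want to automate.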
To summarize, what we want here is a mechanism wherein the following actions happen automatically each time there is a new commit on our repo -

- Build the Docker image from the `Dockerfile` present in the repo
- Tag the built image as `latest`
- Push the image to our ECR repo

Seems straightforward enough. How should we implement this?
AWS CodeBuild is a managed CI service that builds our code and produces output that is ready to be deployed. It integrates with the CodePipeline service, which I have written about before here.
We’ll link CodeBuild to our GitHub repo, so whenever there is a new commit, a CodeBuild task will be triggered.
The next step is deciding what CodeBuild should actually do. To declare this, CodeBuild looks for a `buildspec.yml` file in the root of your repo.
As I’ve covered in the previous post, a CodeBuild build specification includes four phases - install, pre_build, build, and post_build.
In our use case, we shall log in to ECR in the pre_build phase, build and tag the Docker image in the build phase, and finally upload the image to the ECR repo in the post_build phase.
Here is the final `buildspec.yml` file for our use case -
```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      docker: latest
    commands:
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
      - echo Logging in to Amazon ECR....
      - aws --version
      # update the following line with your own region
      - $(aws ecr get-login --no-include-email --region us-east-1)
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      # update the following line with your ECR Repo URI
      - REPOSITORY_URI=000431702369.dkr.ecr.us-east-1.amazonaws.com/gradgoggles
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      # update the following line with the name of your own ECR repository
      - docker build -t gradgoggles-api .
      # update the following line with the URI of your own ECR repository (view the Push Commands in the console)
      - docker tag gradgoggles-api:latest 000431702369.dkr.ecr.us-east-1.amazonaws.com/gradgoggles:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo pushing to repo
      # update the following line with the URI of your own ECR repository
      - docker push 000431702369.dkr.ecr.us-east-1.amazonaws.com/gradgoggles:latest
      - echo Writing image definitions file...
      - printf '{"ImageURI":"%s"}' 000431702369.dkr.ecr.us-east-1.amazonaws.com/gradgoggles:latest > imageDetail.json

artifacts:
  files:
    - imageDetail.json
```
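One detail worth calling out: the pre_build phase derives a short image tag from the commit that triggered the build, falling back to `latest` when no commit hash is available. Here is a quick sketch of that logic with a made-up hash (in a real build, `CODEBUILD_RESOLVED_SOURCE_VERSION` is set by CodeBuild itself):

```shell
# This hash is a made-up placeholder; CodeBuild sets the real value.
CODEBUILD_RESOLVED_SOURCE_VERSION="3f2a9c1b7d0e4a5c6f8091a2b3c4d5e6f7081920"

# Take the first 7 characters of the commit hash as the tag.
COMMIT_HASH=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)
IMAGE_TAG=${COMMIT_HASH:=latest}
echo "$IMAGE_TAG"   # 3f2a9c1

# With no source version (e.g. a manually started build), the := default applies:
COMMIT_HASH=""
IMAGE_TAG=${COMMIT_HASH:=latest}
echo "$IMAGE_TAG"   # latest
```

Note that the buildspec above still tags and pushes `:latest` regardless; `IMAGE_TAG` is simply available if you want per-commit tags instead.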
Here are some issues I faced while implementing and maintaining this setup. I have also included solutions in case you come across any of them.
- Initially, the `docker` runtime was not available. To fix this, I selected the `amazonlinux2-x86_64-standard:3.0` Environment Image. For some reason, it wasn’t working when I selected other images (and this is nowhere mentioned in the AWS docs).
- If the build fails with a `COMMAND_EXECUTION_ERROR` with an exit status of `255`, ensure that the service role CodeBuild is using has permission to talk to ECR. Attach the `AmazonEC2ContainerRegistryPowerUser` policy to the CodeBuild role in IAM to fix the error.
- You may run into `toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading.` This happens when the base image referenced by the `FROM` keyword in the `Dockerfile` is pulled anonymously from Docker Hub. Run `docker login -u username -p pwd` in the `pre_build` step, so as to circumvent the issue.

Thanks for reading and see you in the next one! Stay safe.