Continuous Deployment Automation with HIPAA Compliance on Medstack

One of the foundations of successful software product development is how quickly you can implement ideas and release them to market. A good continuous deployment process goes a long way toward bringing about this agility, while lowering the stress of release days.

At Focalworks, we have worked in diverse contexts – from helping early-stage startups get from an idea to market, to enabling multinational companies to extend, re-architect, and build their software for improved scale and performance by leveraging current technologies, tooling, and best practices. And depending on the context, the right-fit solution can be very different.

In our experience, most startups initially need a very nimble environment and processes to quickly develop, deploy, and validate. And as a technology partner, we know that along with this velocity, code quality and security are extremely important.

In this article, I am going to talk about how we helped one of our favourite client-partners – Trualta Care Network, a startup in the healthcare space – transition from a manual deployment process to completely automated deployments on Medstack.

Let’s turn back the sands of time about five years. (Or skip ahead to the how-to.)

The backstory

We had just started developing a custom Laravel solution for Trualta – a platform to empower caregivers to get better at the art of caregiving. The urgent and immediate goal: put the product in the hands of a few family caregivers and get some early feedback. The plan was to create organic adoption and grow. We ran the entire setup on a single virtual server and managed deployments manually.

The initial reception of the product was really good, and it soon picked up a lot of interest among caregiver support groups and organisations. Given the very specific needs articulated by these organisations, we made an early decision to create separate codebases for some of these early clients and manage separate deployments.

And that was that for about a year.

Then, demand for new deployments started surging, and turnaround time and the potential for manual oversights became risks that needed to be mitigated. To automate code updates and the multiple deployments, we wrote Bash scripts that ran the required Linux shell commands.
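A minimal sketch of what such a script can look like is shown below. The client names, paths, and the exact Laravel update sequence are illustrative assumptions, not the real scripts:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an early per-client deployment script.
# CLIENTS, APP_ROOT, and the update commands below are illustrative.
set -euo pipefail

CLIENTS=("client-a" "client-b")   # hypothetical: one codebase/deployment per client
APP_ROOT="${APP_ROOT:-/var/www}"  # hypothetical install root on the single VM

deploy_client() {
  local client="$1"
  echo "Updating ${client}..."
  (
    cd "${APP_ROOT}/${client}"
    git pull origin master                           # fetch the latest code
    composer install --no-dev --optimize-autoloader  # update PHP dependencies
    php artisan migrate --force                      # apply DB migrations
    php artisan cache:clear                          # flush application caches
  )
}

# Deploy only the clients named on the command line, e.g. ./deploy.sh client-a
for client in "$@"; do
  deploy_client "$client"
done
```

Even a simple loop like this removes most of the copy-paste risk of running the same commands by hand for every client.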

Over the next couple of years, the product rapidly evolved. And with interesting feature requests coming from the various clients, we made a constant push to innovate and improve.

At this point, we were spending a lot of time pushing the code updates for each client deployment, which was not acceptable. Also, with prospective clients such as insurance companies and government agencies, it became necessary to ensure that we moved towards being compliant with standards such as HIPAA. With this objective, we transitioned our hosting to Medstack. Medstack provides a platform that delivers built-in privacy and security protocols tailor-made to healthcare industry expectations, including encryption, certificate and key management, backups, monitoring and logging. They are a great team to work with and it was really nice to explore a new platform and what they had to offer.

As part of our continuing journey towards the highest possible data privacy and protection, we wanted to implement the best possible access control and deployment automation processes. We migrated the entire application codebase to a multi-tenant architecture, which I will cover in a separate article. And we leveraged Medstack’s Docker-based deployment, along with its strong API support, to automate the deployment process in such a way that the development team has no view of user data.

I’ll now walk you through the current continuous deployment process, established through our re-engineering initiative, to deploy incremental code changes.

The How-To

We start with a Docker image. As big fans of Gitlab, we decided to use Gitlab pipelines and runners to build our Docker images. Creating a tag on Gitlab triggers a pipeline, which first runs a code scan for secret detection and then starts the image build. Once that is done, we get a notification on Zulip, the chat application we use for internal communication. By the way, Zulip is a great open-source alternative to Slack, and you can host it on your own infrastructure.

Here is a visual walkthrough of the deployment process, with some screenshots:

First, we create a release tag on Gitlab. This can be a beta release, in which case the deployment goes to the staging server for review, or a production release, which runs the pipeline that deploys the code to the production server.

As you can see, we follow semantic versioning (X.Y.Z): we increment the Y part when a new feature is added, while bug fixes or patches increment the Z part.
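For illustration, here is how that convention plays out in a throwaway repository. The version numbers and the `-beta` suffix are hypothetical examples; in practice, we create the tags through Gitlab itself:

```shell
# Illustrative only: hypothetical version numbers in a scratch repo.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "release"

git tag 2.4.0-beta   # beta tag: pipeline deploys to the staging server
git tag 2.4.0        # production tag: pipeline deploys to production
git tag 2.4.1        # Z bump: a bug fix or patch
git tag 2.5.0        # Y bump: a new feature
git tag --list
```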

The tag creation triggers the pipeline. Right now, we have two stages in our pipeline, which you can see in the screenshot below.
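A condensed sketch of what a two-stage pipeline like this can look like in `.gitlab-ci.yml` is shown below. The job name, image versions, and registry variables are illustrative, not our exact configuration:

```yaml
stages:
  - test    # secret detection runs here
  - build   # Docker image build

# Gitlab's built-in secret detection job (runs in the test stage)
include:
  - template: Security/Secret-Detection.gitlab-ci.yml

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  rules:
    - if: $CI_COMMIT_TAG   # run only when a release tag is created
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
```

Tagging the image with `$CI_COMMIT_TAG` keeps the Docker image version in lockstep with the release tag, which is what lets a later script verify that each instance runs the right build.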

We have configured Gitlab to send a notification to Zulip once the pipeline completes. A typical success notification looks like this:

Once this is done, the developer waits for the right time to push the new Docker image using the Medstack API. We maintain a shell script that makes the necessary cURL call to the API and then does a routine check of each client’s instance to ensure the containers are running the Docker image that was just created. Below is a flow diagram of the deployment process.
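In spirit, that script looks something like the sketch below. The endpoint paths, the payload shape, and the per-instance version check are hypothetical placeholders; the real calls follow the Medstack API documentation:

```shell
#!/usr/bin/env bash
# Sketch of the deploy-and-verify script. The API paths, payload shape,
# and version endpoint below are hypothetical placeholders.
set -euo pipefail

push_image() {
  local service_id="$1" image_tag="$2"
  # Hypothetical endpoint that points a service at the new image.
  curl --fail --silent --show-error \
    -X PUT "${MEDSTACK_API_URL}/services/${service_id}" \
    -H "Authorization: Bearer ${MEDSTACK_API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"image\": \"registry.example.com/app:${image_tag}\"}"
}

check_instance() {
  local instance_url="$1" expected_tag="$2"
  # Hypothetical version endpoint exposed by each client instance.
  local running
  running="$(curl --fail --silent "${instance_url}/api/version")"
  if [ "$running" != "$expected_tag" ]; then
    echo "WARN: ${instance_url} is on ${running}, expected ${expected_tag}"
    return 1
  fi
  echo "OK: ${instance_url} is on ${expected_tag}"
}
```

The key property is the second function: after pushing, the script loops over every client instance and compares the running version against the tag that was just built, so a partially applied rollout is caught immediately.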

The deployment process is automated, and on a day when everything works in your favour, it feels great. But things do go wrong – that is bound to happen when humans are involved; rather, that’s what makes us human. When things go wrong, it is important to know quickly both when and what went wrong, so we can react. If the build process fails, a Zulip notification informs us of the failure; figuring out the cause is then a manual process. For example, our pipelines recently started to fail, and the logs showed that the disk was full. So we wrote a script to remove unused Docker images and keep disk usage to a minimum. Failures like this are hard to predict in advance, which is why the notification matters: it tells us something went wrong so we can react as soon as possible.
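A sketch of such a cleanup script is below; whether it runs from cron or manually is an implementation detail, and the 80% threshold is an assumed example rather than our actual setting:

```shell
#!/usr/bin/env bash
# Sketch of a disk-cleanup job for the build host.
# The threshold is illustrative; tune it to your disk size and build volume.
set -euo pipefail

cleanup_images() {
  local threshold="${1:-80}"   # percent of root-disk usage that triggers a prune
  local usage
  # Current usage of the root filesystem, as an integer percentage.
  usage="$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')"
  echo "root filesystem at ${usage}%"
  if [ "$usage" -ge "$threshold" ]; then
    # Remove all unused images (not just dangling ones) to reclaim space.
    docker image prune --all --force
  fi
}
```

`docker image prune --all --force` removes every image not referenced by a container, which is exactly the space that accumulates when each release builds a fresh tag.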

So, creating a build and updating a server can now be done simply by creating a tag on Gitlab. There is no need to give developers server-level access, and we also control who can deploy code, because not everyone has permission to create a tag. With this level of automation, deploying code to staging or production is so simple that developers can push releases literally on a daily basis, without worrying about breaking something. And the turnaround time has reduced significantly.

Through the implementation of a robust, automated continuous deployment process, we have created an environment where the entire team, including the developers, feels empowered to innovate and remove points of friction, driving up the overall efficiency of the software development process for the business owner.

Photo by Alesia Kazantceva on Unsplash

Written by: Amitav Roy
Edited by: Tanmoy Palit
