I recently had the privilege of attending DockerCon, where several brilliant minds demonstrated new approaches to containerizing applications. There were several themes at this conference, but for me the strongest was MTA (Modernizing Traditional Applications) – an approach to making existing legacy apps more secure, more efficient, and portable to hybrid cloud infrastructure. Here are some points outlining what I’ve found most valuable in Docker MTA.
Trial & Error: How I Discovered Docker
I’ve been in software for a very long time. I’ve done things the right way and watched them fail, and done things the wrong way and watched them succeed. I vividly remember the days when continuous integration didn’t exist, deployments often occurred in the middle of the night, configuration drift wasn’t yet a recognized failure mode, and immutable designs were just a pipe dream. I discovered Docker in 2015 when I wanted to set up a secure Nginx proxy in front of one of my Java applications. I knew how to accomplish this task via manual server configuration, but I’ve always been a fan of automating mundane tasks like this one.
My automation started with generalizing configurations, shifted to shell scripts, and then graduated to full-blown applications that stubbed out code and configs and built VM images. However, I’d been hearing good things about DevOps tools such as Ansible, Docker, Puppet, and Chef, so I thought I would see what all the fuss was about.
I started my quest for tooling to help with this particular task and stumbled upon Jason Wilder’s nginx-proxy image and Yves Blusseau’s Let’s Encrypt companion image. I was able to follow some hacked-together tutorials, and within just a few minutes, I had my application behind an encrypted Nginx reverse proxy.
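For anyone who wants to try the same thing, here’s a minimal sketch of that setup using the publicly available jwilder/nginx-proxy and jrcs/letsencrypt-nginx-proxy-companion images. The domain, email address, and application image below are placeholders, not values from my actual setup:

```sh
# Reverse proxy that watches the Docker socket and generates Nginx config
# for any container that declares a VIRTUAL_HOST
docker run -d --name nginx-proxy \
  -p 80:80 -p 443:443 \
  -v /etc/nginx/certs \
  -v /etc/nginx/vhost.d \
  -v /usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Companion container that obtains and renews Let's Encrypt certificates
# for any container that declares a LETSENCRYPT_HOST
docker run -d --name letsencrypt-companion \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion

# The application itself -- the environment variables are all it takes
# for the proxy to route traffic and the companion to encrypt it
docker run -d \
  -e VIRTUAL_HOST=app.example.com \
  -e LETSENCRYPT_HOST=app.example.com \
  -e LETSENCRYPT_EMAIL=admin@example.com \
  my-java-app:latest   # placeholder image name
```

Three `docker run` commands, and certificate issuance and renewal are handled automatically from then on.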
Hooked by Automation
That was it – I was hooked, and I jumped into Docker with both feet. I went back and started writing deployment scripts and Dockerfiles to leverage this new technology in each of my projects.
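To give a sense of how little it takes, a Dockerfile for a typical Java app of that sort can be as small as this (a minimal sketch; the base image, jar name, and port are placeholders):

```dockerfile
# Start from an official JRE base image
FROM openjdk:8-jre-alpine

# Copy the built artifact into the image
COPY target/my-app.jar /opt/app/my-app.jar

# Port the application listens on (placeholder)
EXPOSE 8080

# Run the application
CMD ["java", "-jar", "/opt/app/my-app.jar"]
```

A `docker build -t my-app .` followed by a `docker run` in the deployment script yields a repeatable artifact from any project.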
It’s humorous to me that what I immediately valued was the time saved by automating a simple task; I didn’t initially realize the larger benefit this technology gave me. Its OS-agnostic capabilities and immutable development and deployment configurations would eventually change the way I approached software development and deployment altogether. As I continued to use Docker, I, like many other companies and teams, found that I could consolidate resources and spend less time tracking down environment-specific bugs.
As time progressed, I was able to build and deploy software that behaved exactly the same in production as it did in development (all while running on my laptop). So, if you’re following me on this, I think I know what you’re asking yourself right now.
Removing Errors, Extending Functionality and Other Benefits
…Yes, I was able to kill off development servers, build and run the entire infrastructure locally, and completely remove configuration and environment-specific errors when deploying to production.
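Docker Compose is one way to get there. A hypothetical two-service stack that runs identically on a laptop or a production host might look like this (service names, images, and ports are placeholders):

```yaml
# docker-compose.yml -- the same file drives development and production
version: "2"
services:
  app:
    build: .            # built from the Dockerfile in this repo
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db      # resolves to the db service on the Compose network
  db:
    image: postgres:9.6
    volumes:
      - db-data:/var/lib/postgresql/data   # data survives container restarts
volumes:
  db-data:
```

A single `docker-compose up` brings up the whole stack, wherever it runs.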
The second benefit was being able to extend my existing applications with new microservices. By making use of an existing application’s API, I was able to add new functionality without changing a single line of code in the old repositories.
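As a concrete (hypothetical) sketch of that pattern: the legacy app keeps running untouched, and the new functionality ships as a separate container that talks to it over HTTP. The names and URL below are placeholders:

```sh
# A user-defined network so containers can reach each other by name
docker network create app-net

# The legacy application, containerized as-is -- zero code changes
docker run -d --name legacy-app --network app-net legacy-app:latest

# A new microservice that adds functionality by consuming the legacy app's API
docker run -d --name reporting-service --network app-net \
  -e LEGACY_API_URL=http://legacy-app:8080/api \
  reporting-service:latest
```

The old codebase never changes; the new service evolves and deploys on its own schedule.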
If you want to learn more about the efficacy of this approach, IBM’s Jason McGee gave a really simple demonstration during the general session on October 18.
At DevelopmentNow, we strive to be as effective and efficient as possible.
To do so, we leverage tools such as Docker to ensure best practices are followed on every project. Our team is experienced at providing solutions using a variety of platforms, tools, and languages to best fit our partners’ goals. Contact us today to learn more.