Migrating from Docker Compose to Skaffold


Over the last weekend I had occasion to unshelve a Ruby-on-Rails project that I had worked on in 2015. It was a great glimpse into the state of the art of the time. I had chosen some forward-leaning but hopefully future-proof tech for the infra side of things. We’d been deploying to Heroku for test and prod environments but leveraged Docker for our development environments.

One of the last few commits to the repo added support for Docker Compose which allowed us to stand up the full stack with a simple command:

  • Ruby-on-Rails single container web app
  • Postgres database
  • Redis for caching

The configuration file (docker-compose.yml) was only about 25 lines and looked like this:

web:
  build: .
  command: ./bin/rails server -p 3000 -b 0.0.0.0
  environment:
    REDISCLOUD_URL: redis://redis
    DATABASE_URL: postgres://postgres@db/development
  volumes:
    - .:/myapp
  links:
    - db
    - redis
  ports:
    - "3000:3000"
db:
  image: postgres:9.4.1
  ports:
    - "5432:5432"
redis:
  image: redis
  ports:
    - "6379:6379"

By running the docker-compose up command, you could get a reproducible version of the app on your local laptop. I was wary that after 6 years none of this would work; we all know how much software rots when it isn’t looked after. Much to my surprise, the whole stack came up in short order because I had done some work to pin the versions of my Ruby environment, PostgreSQL, and dependencies (Gemfile).

Docker has been a huge leap forward for the ability to nail down a point in time version of a stack like this.

Transition to Continuous Development with Skaffold

With this setup, we had configured Rails to do hot code reloading so that when we changed the business logic on our dev branches it would automatically update in the running app. This was awesome for quickly iterating on and testing any changes you were making, but when you wanted to update the image or add a dependency you had to stop the running app and rebuild the Docker image.

Once I had gotten myself back to the best-of-breed dev setup of 2015, I decided to see what it would take to get a representative environment on the latest and greatest dev tools of today. Obviously I am bit biased, so I turned my attention to figuring out how to port the Docker Compose setup to Skaffold and Minikube.

Many folks are familiar with Minikube, which lets you quickly and efficiently stand up a Kubernetes cluster as either a container running in Docker or a VM running on your machine.

Skaffold is a tool that lets you get a hot-code reloading feel for your apps running on Kubernetes. Skaffold watches your filesystem and as things change it does the right thing to make sure your app is updated as needed. For example, changes to your Dockerfile or any of your app code will cause a rebuild of your container image and it will be redeployed to your minikube cluster.
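As a rough sketch (the image name and manifest path here are placeholders of my own, not from any real project), the skaffold.yaml that drives this loop simply pairs the images Skaffold should build with the Kubernetes manifests it should deploy:

```yaml
apiVersion: skaffold/v2beta12
kind: Config
build:
  artifacts:
  - image: my-app            # illustrative image name; built from ./Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/deployment.yaml    # illustrative manifest path
```

When any input to the build (source files, the Dockerfile) changes, Skaffold rebuilds the artifact and re-applies the manifests that reference it.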

Below is a diagram showing the before (Docker Compose) and after (Minikube and Skaffold):

The discerning eye will look at these two diagrams and notice that the Kubernetes YAML files are new and that we have a new config file to tell Skaffold how to build and deploy our app.

To create my initial Skaffold configuration all I needed to do was run the skaffold init command which tries to detect the Dockerfiles and Kubernetes YAMLs in my app folder and then lets me pair them up to create my Skaffold YAML file. Since I didn’t yet have Kubernetes YAML, I passed in my Docker Compose file so that Skaffold would also provide me an initial set of Kubernetes manifests.

skaffold init --compose-file docker-compose.yml


In this section I’ll walk you through the same process with a readily available application. In this case, we’ll be using Taiga which is an open source “project management tool for multi-functional agile teams”. I found Taiga by searching on GitHub for docker-compose.yml files to test my procedure with.

If you have a Google account and want to run the tutorial interactively in a free sandbox environment, use the Open in Cloud Shell option.
  1. Install Skaffold, Minikube and Kompose. Kompose is used by Skaffold to convert the docker-compose.yml into Kubernetes manifests.
  2. Start minikube.
minikube start

3. Clone the docker-taiga repository which contains the code necessary to get Taiga up and running with Docker Compose.

git clone https://github.com/docker-taiga/taiga
cd taiga

4. The Docker Compose file in docker-taiga doesn’t set ports for all of the services, which is required for proper discoverability when things are converted to Kubernetes. Apply the following patch to ensure each service exposes its ports properly:

cat > compose.diff <<EOF
diff --git a/docker-compose.yml b/docker-compose.yml
index e09d717..94920c8 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -12,0 +13,2 @@ services:
+    ports:
+      - 80:8000
@@ -22,0 +25,2 @@ services:
+    ports:
+      - 80:80
@@ -33,0 +38,2 @@ services:
+    ports:
+      - 80:80
@@ -46,0 +53,2 @@ services:
+    ports:
+      - 5432:5432
@@ -55,0 +64,2 @@ services:
+    ports:
+    - 5672
diff --git a/variables.env b/variables.env
index 4e48c17..8de5b09 100644
--- a/variables.env
+++ b/variables.env
@@ -1 +1 @@
@@ -3 +3 @@ TAIGA_SCHEME=http
EOF

patch -p1 < compose.diff

5. Next, we’ll bring in the source code for one of Taiga’s components that we want to develop on, in this case the backend. We’ll also set our development branch to the 6.0.1 tag that the Compose file was written for.

git clone https://github.com/taigaio/taiga-back -b 6.0.1

6. Now that we have our source code, Dockerfile, and Compose file, we’ll run skaffold init to generate the skaffold.yaml and Kubernetes manifests.

skaffold init --compose-file docker-compose.yml

Skaffold will ask which Dockerfiles map to the images in the Kubernetes manifests.

The default answers are correct for all questions except the last. Make sure to answer yes (y) when it asks if you want to write out the file.

? Choose the builder to build image dockertaiga/back Docker (taiga-back/docker/Dockerfile)
? Choose the builder to build image dockertaiga/events None (image not built from these sources)
? Choose the builder to build image dockertaiga/front None (image not built from these sources)
? Choose the builder to build image dockertaiga/proxy None (image not built from these sources)
? Choose the builder to build image dockertaiga/rabbit None (image not built from these sources)
? Choose the builder to build image postgres None (image not built from these sources)
apiVersion: skaffold/v2beta12
kind: Config
metadata:
  name: taiga
build:
  artifacts:
  - image: dockertaiga/back
    context: taiga-back/docker
    docker:
      dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
    - kubernetes/back-claim0-persistentvolumeclaim.yaml
    - kubernetes/back-claim1-persistentvolumeclaim.yaml
    - kubernetes/back-deployment.yaml
    - kubernetes/db-claim0-persistentvolumeclaim.yaml
    - kubernetes/db-deployment.yaml
    - kubernetes/default-networkpolicy.yaml
    - kubernetes/events-deployment.yaml
    - kubernetes/front-claim0-persistentvolumeclaim.yaml
    - kubernetes/front-deployment.yaml
    - kubernetes/proxy-claim0-persistentvolumeclaim.yaml
    - kubernetes/proxy-deployment.yaml
    - kubernetes/proxy-service.yaml
    - kubernetes/rabbit-deployment.yaml
    - kubernetes/variables-env-configmap.yaml

? Do you want to write this configuration to skaffold.yaml? Yes

We’ll also fix an issue with the Docker build context that Skaffold interpreted from the Compose file. The taiga-back repo keeps its Dockerfile in a sub-folder rather than at the top level and expects the build context to be the repository root. While we’re at it, we’ll fix an environment variable name in the generated backend deployment manifest.

sed -i 's/context:.*/context: taiga-back/' skaffold.yaml
sed -i 's/dockerfile:.*/dockerfile: docker\/Dockerfile/' skaffold.yaml
sed -i 's/name: TAIGA_SECRET/name: TAIGA_SECRET_KEY/' kubernetes/back-deployment.yaml

7. Now we’re ready to run Skaffold’s dev loop to continuously re-build and re-deploy our app as we make changes to the source code. Skaffold will also display the logs of the app and even port-forward important ports to your machine.

skaffold dev --port-forward

You should start to see the backend running the database migrations necessary to start the app, and you should be able to reach the web app via http://localhost:4056

8. To initialize the first user, run the following command in a new terminal and then log in to Taiga with the username admin and password 123123.

kubectl exec deployment/back -- python manage.py loaddata initial_user

Congrats! You’ve now transitioned your development tooling from Docker Compose to Skaffold and Minikube.


Porting my brain from Python to Go

Over the past few months, I have made it a goal to get down and dirty with Go. I have been getting more and more involved with Kubernetes lately and feel like it’s time that I pick up its native language. For the last 6 years, I’ve had Python as my go-to language, with Ruby sneaking in when the time is right. I really enjoy Python and feel extremely productive in it. In order to get to some level of comfort with Go, I knew that I would have to take a multi-faceted approach to my learning.

Obviously, the first thing I did was just get it installed on my laptop. Being a Mac and Homebrew user, I simply ran: brew install golang. So now I had the toolchain installed and could compile/run things on my laptop. Unfortunately ‘Hello World’ was not going to get me to properly port my existing programming skills into using Go as a primary/native language. I knew enough about Go to know that I needed to understand how and why it was built. Additionally, I had been in interpreted-language land my whole life (Perl->Python->Ruby), so this was another leap beyond just changing dialects. At this point I took a step back from just poking at source code to properly study up on the language.

I’ve had a Safari Books Online subscription for quite some time and have leveraged it heavily when learning new technologies. This case was no different. I picked up Programming in Go: Creating Applications for the 21st Century. I found this book to be at the right level for someone with existing programming experience who is new to Go. I also found that there was enough context on how to do things the Go way that I wouldn’t just be writing a bunch of Python in a different language. After a few flights browsing the concepts, I started to read it front to back. At the same time, I found a resource specific to my task at hand, Go for Python Programmers. This was a good way to see the code I was accustomed to writing and how I might write it differently in Go. As I continued reading and studying up, I made sure to pay particular attention when looking at Go code in the wild. The ask of myself was really to understand the code and the idioms, not just glance at it as pseudocode.

There were a few things that I needed some more clarification on after reading through my study materials. I was still confused about how packaging worked in practice. For clarification, my buddy and Go expert Evan Brown pointed me to the Go Build System repository. The thing that clicked for me here was the structure of having a repository of code that is related to each other, splitting out libraries into directories, then using the cmd directory in order to make the binaries that tied things together. This repo also has a great README that shows how they have organized their code. Thanks Go Build Team!

The next thing that I needed to hash out was how exactly I would apply my object-oriented penchant in Go. For this I decided to turn to Go by Example, which had a wealth of simple example code for many of the concepts I was grappling with. Things started to click for me after looking at structs, methods, and interfaces again through a slightly different lens.
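To make that concrete, here is a minimal sketch of how those three concepts fit together (the type and method names are my own, not taken from Go by Example):

```go
package main

import "fmt"

// Shape is an interface: any type with an Area method
// satisfies it implicitly; there is no "implements" keyword.
type Shape interface {
	Area() float64
}

// Rect is a plain struct; Go has no class declarations.
type Rect struct {
	W, H float64
}

// Area is a method with a value receiver, which is what
// makes Rect satisfy the Shape interface.
func (r Rect) Area() float64 {
	return r.W * r.H
}

func main() {
	var s Shape = Rect{W: 3, H: 4}
	fmt.Println(s.Area()) // prints 12
}
```

Coming from Python, the surprise is that the coupling runs from the concrete type to the interface purely by method set, so interfaces can be defined after the fact for types you don’t own.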

Sweet! So now I understood (more or less) how the thing worked but I hadn’t built anything with it. The next phase was figuring out how the rubber met the road and having a concrete task to accomplish with my new tool.

I didn’t have any projects off the top of my head that I could start attacking, but I remembered that a few moons ago I had signed up for StarFighters, a platform that promised to provide a capture-the-flag style game that you could code against. I looked back through their site and noticed that their first game, StockFighter, had been released. StockFighter provides a REST API that players can code against in order to manipulate a faux stock market. I didn’t know anything about the stock market but figured this would be as good a task as any to get started. I played through the first few levels by creating a few one-off binaries. Then on the harder levels I started to break out my code, creating libraries, workers, and all kinds of other pieces of software to help me complete the tasks that StockFighter was throwing at me. One huge help in getting me comfortable with creating this larger piece of software was that I had been using PyCharm with the Go plugin. This made code navigation, refactoring, testing, and execution familiar.

Shit had gotten real. I was building a thing and feeling more comfortable with each level I played.

After my foray with StockFighter, I felt like I could use a different challenge. It turns out that if you start asking around whether people need some software built at Google, there will be plenty of people who want to take you up on the offer. The homie Preston was working on an IoT demonstration architecture and needed a widget that could ingest messages from PubSub and then store the data in BigTable. As he explained the project, I told him that it should take me no more than 4 hours to complete the task. I hadn’t used the Go SDKs at the time, so I figured that would eat up most of my time. I sat down that afternoon and started the timer. This was the litmus test. I knew it. My brain knew it. My fingers knew it.
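The shape of that widget, with in-memory stand-ins for the ingest and storage sides (all names here are illustrative; this is not the real Google Cloud Go SDK):

```go
package main

import (
	"fmt"
	"sync"
)

// Message stands in for a PubSub payload.
type Message struct {
	Key, Value string
}

// Store abstracts the BigTable write behind an interface
// so the pipeline can be exercised entirely in memory.
type Store interface {
	Put(key, value string)
}

type memStore struct {
	mu   sync.Mutex
	rows map[string]string
}

func (m *memStore) Put(key, value string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.rows[key] = value
}

// run drains messages from the channel and persists each one,
// returning once the channel is closed.
func run(msgs <-chan Message, s Store) {
	for msg := range msgs {
		s.Put(msg.Key, msg.Value)
	}
}

func main() {
	msgs := make(chan Message, 2)
	store := &memStore{rows: map[string]string{}}
	msgs <- Message{Key: "sensor-1", Value: "72F"}
	msgs <- Message{Key: "sensor-2", Value: "68F"}
	close(msgs)
	run(msgs, store)
	fmt.Println(len(store.rows)) // prints 2
}
```

Swapping the in-memory pieces for the real subscription receiver and BigTable client is then a matter of satisfying the same small interface.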

I made it happen in just under 5 hours which to me was a damn good effort as I knew that generally estimates are off by 2x. After doing a little cross-compiling magic, I was able to ship Preston a set of binaries for Mac, Linux and Windows that achieved his task.

I’m certainly not done with my journey but I’m happy to have had small successes that make me feel at home with Go.



Building a container service with Mesos and Eucalyptus

Over the past few months, I’ve been digging into what it means to work with a distributed container service. Inspired by Werner Vogels’ latest post about ECS, I decided to show an architecture for deploying a container service in Eucalyptus. As part of my investigations into containers, I have looked at the following platforms that provide the ability to manage container-based services:

  • Deis
  • Flynn
  • Mesos

Each of these provides you with a symmetrical (all components run on all hosts) and scalable (hosts can be added after initial deployment) system for hosting your containerized workloads. They also include mechanisms for service discovery and load balancing. Deis and Flynn are both what I would call a “lightweight PaaS”, akin to a private Heroku. Mesos, however, is a more flexible and open-ended platform, which comes as a blessing and a curse. I was able to deploy many more applications in Mesos, but it took me far longer to get a working platform up and running.
Deis and Flynn are both “batteries included” type systems that, once deployed, allow you to immediately push your code or container image into the system and have it run your application. Deis and Flynn also install all of their dependencies for you through automated installers. Mesos, on the other hand, requires you to deploy its prerequisites on your own in order to get going, then requires you to install frameworks on top of it to make it able to schedule and run your applications.

I wanted to make a Mesos implementation that felt as easy to make useful as Deis and Flynn. I have been working with chef-provisioning to deploy clustered applications for a while now, so I figured I would use my previous techniques to automate the process of deploying a functional and working N-node Mesos/Marathon cluster. Over the last month, I have also been able to play with Mesosphere’s DCOS, so I was able to get a better idea of what it takes to really make Mesos useful to end users. The “batteries included” version of Mesos is architected as follows:
Each of the machines in our Mesos cluster will run all of these components, giving us a nice symmetrical architecture for deployment. Mesos and many of its dependencies rely on a working Zookeeper as a distributed key value store. All of the state for the cluster is stored here. Luckily, for this piece of the deployment puzzle I was able to leverage the Chef community’s Exhibitor cookbook which got my ZK cluster up in a snap. Once Zookeeper was deployed, I was able to get my Mesos masters and slaves connected together and was able to see my CPU, memory and disk resources available within the Mesos cluster.
Mesos itself does not handle creating applications as services, so we need to deploy a service management layer. In my case, I chose Marathon, as it is intended to manage long-running services like the ones I was most interested in deploying (Elasticsearch, Logstash, Kibana, Chronos). Marathon runs outside of Mesos and acts as the bootstrapper for the rest of the services that we would like to use, our distributed init system.
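For a sense of what deploying into Marathon looks like, here is a sketch of an application definition for a Dockerized service (the field values are illustrative, not from my actual cluster):

```json
{
  "id": "kibana",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "kibana:4",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 5601, "hostPort": 0 }]
    }
  }
}
```

POSTing a definition like this to Marathon’s REST API is all it takes to have Mesos schedule the container somewhere in the cluster and keep it running.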
Once applications are deployed into Marathon, it is necessary to have a mechanism to discover where other services are running. Although it is possible to pin particular services to particular nodes through the Marathon application definition, I would prefer not to have to think about IP addressing in order to connect applications. The preferred method of service discovery in the Mesos ecosystem is to use Mesos DNS and host it as a service in Marathon across all of your nodes. Each slave node can then use itself as a DNS resolver, wherein queries for services get handled internally and all others are recursed to an upstream DNS server.
Now that the architecture of the container service is laid out for you, you can get to deploying your stack by heading over to the README. This deployment procedure will not only deploy Mesos+Marathon but will also deploy a full ELK stack into the cluster to demonstrate connecting various services together in order to provide a higher-order service.

Introducing HuevOS+RancherOS


Today is an exciting day in Santa Barbara. We are very pleased to introduce our latest innovation to the world of DevOps. 

HuevOS – the Docker-based open-source operating system for tomorrow’s IT and Dev/Ops professional.  HuevOS 1.0 (SunnySide) is the open-source/free-range/gluten-free solution that forms the perfect complement to RancherOS.  In addition, we’re delighted to begin development on our proprietary blend of Services and Language Software as a Service (SaLSaaS) which, when overlayed atop a HuevOS+RancherOS stack, provides a complete and delicious solution around which your whole day can be centered.

Try HuevOS+RancherOS today and let us know which SaLSaaS we should work on first to ensure your hunger for DevOps is quenched thoroughly.  

To get your first taste visit the following repository which includes our Chef Recipe and a Vagrantfile to get you HuevOS in short order:


If you already have your RancherOS host up, it’s easy to add HuevOS to the mix via the Docker Registry:

docker pull viglesiasce/huevos; docker run viglesiasce/huevos

A huge thank you to all involved in getting us to this point and being able to ship a 1.0 version of the HuevOS+RancherOS platform.

Happy clucking!!



Installing Apache Hadoop on Eucalyptus using Hortonworks Data Platform stack and Apache Ambari

Get your #hortonworks #hadoop running on #eucalyptus

shaon's blog

This post demonstrates Hadoop deployment on Eucalyptus using Apache Ambari and the Hortonworks Data Platform stack. Bits and pieces:

  1. Eucalyptus 4.0.0
  2. Hortonworks Data Platform (HDP) stack
  3. Apache Ambari
  4. 4 instance-store/EBS-backed instances
    1. 2 vcpus, 1024MB memory, 20GB disk space
    2. CentOS 6.5 base image

For this demo we tried to use minimal resources. Our Eucalyptus deployment topology looks like the following:

  • 1x Cloud Controller + Walrus + Storage Controller + Cluster Controller
  • 1x Node Controller + Object Storage Gateway

To meet our instance requirements, we changed the instance type according to our needs.

Preparation

Run an instance of m1.xlarge or any other instance type that meets the above requirements. When the instance is running, copy the keypair that was used to run this instance to .ssh/id_rsa; we will be using this same keypair for all the instances.

Run three more instances of the same type with the same keypair and image, and copy their private IPs for later use. Add security…

View original post 200 more words


Pursuit of Quality at Eucalyptus

Overview of the extensive Eucalyptus quality toolchain.

shaon's blog

At Eucalyptus, if there is one thing we care about, it is the quality of the product. Our goal is to deliver AWS-compatible software that just works. To ensure the highest level of quality, we try to follow the optimum strategy possible in both Development and QA.

We adopted the Agile software development model a while back and we love it. As part of our strategy, there are several types of projects we open in Jira. After getting the green signal from Product Management, we create an Epic, and then, after architectural discussion, Story tickets are created with Sub-tasks for developers to work on. We also have Jira tickets for New Features, which are basically similar to Stories but comparatively smaller tasks that generally have no Sub-tasks. At this point one scrum master from each team works closely with the developers to keep track of the…

View original post 629 more words


Hobos and Vagrants and Quality

Flattering blog post by the homie gregdek. Go get your micro-qa on!

Greg DeKoenigsberg Speaks

I don’t know how many people have actually met Vic Iglesias, our Quality Hobo.  Here’s what he looks like in his natural habitat:

It’s really super important not to make Quality Hobo angry. Let me assure you from personal experience: no one wants that.

Here are some things that make Quality Hobo angry (and that you should, therefore, avoid):

  • Stealing Quality Hobo’s cigarettes, electronic or otherwise.
  • Remarking upon Quality Hobo’s resemblance to a certain celebrity.
  • Wasting Quality Hobo’s time with questions about which test cases are most recent.
  • Wasting Quality Hobo’s time by writing tests that don’t integrate into Eutester.
  • Wasting Quality Hobo’s time with questions on how to build your QA environment.
  • Wasting Quality Hobo’s time in any way at all.

Here’s what I’m saying: it was only a matter of time before our Hobo got mixed up with a Vagrant.

Vagrant is incredible, and Mitchell Hashimoto is incredible for creating it.  It’s the…

View original post 391 more words


Choose Your Own Adventure: Which Eucalyptus UI is Right for You?

UX Doctor

Open source projects can often lead to an embarrassment of riches that can really be confusing. This is especially true when it comes to tools you can use to interact with your Eucalyptus cloud.

Options include CLIs, SDKs, and different graphical user interfaces. Each has advantages and disadvantages, strengths and weaknesses, and is targeted to a specific kind of person.

Depending on what you want to do, your level of cloud knowledge, and the kind of user experience you desire, there is a Eucalyptus UI that will fit you perfectly. Let’s take a quick look at each and then you can Choose Your Own Adventure to help you find your ideal fit.


Eucalyptus User Console Dashboard

The Eucalyptus User Console is an open source project with development primarily done internally by Eucalyptus employees. We wanted to give our customers a graphical user interface for self-service deployment of virtual machines on Eucalyptus…

View original post 747 more words


Run Appscale on Eucalyptus


shaon's blog

Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). – Wikipedia

According to Wikipedia, there are currently a few popular service models:

1. Infrastructure as a service (IaaS)
2. Platform as a service (PaaS)
3. Software as a service (SaaS)

So, I have a Eucalyptus cloud, which is great; it serves as an AWS-like IaaS platform. But now I want PaaS. And right here Appscale comes into play, with full compatibility with Google App Engine (GAE) applications. In this post, we will install the popular open source PaaS framework Appscale on Eucalyptus, the AWS-compatible open source IaaS platform.

0. Introduction
1. Resize Lucid image
2. Install Appscale from source
3. Install Appscale Tool
4. Bundle Appscale image
5. Run Appscale
6. Run an application on Appscale

Eucalyptus Cloud platform is open source software for building…

View original post 541 more words