If you’ve read our previous blog post, you’ll remember that we’ve begun work on using Kubernetes to provide a more scalable and robust Open edX service to our customers. While Kubernetes provides a rich set of functionality out of the box for orchestrating and managing containerized applications, we decided that Rancher was also worth a look.
Like Kubernetes, Rancher provides a framework for deploying containerized applications using Docker. Rancher has many features similar to Kubernetes, including service discovery, load balancing, and automatic scaling of containers. Best of all, it’s super simple to get started with Rancher. Rancher itself runs as a set of containers, so you just need to pull and run the rancher/server and rancher/agent images.
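As a rough sketch of what that bootstrap looks like (image tags, ports, and the agent registration URL will vary with your Rancher version and setup):

```shell
# Start the Rancher management server on any host with Docker installed;
# the web UI comes up on port 8080.
sudo docker run -d --restart=always -p 8080:8080 rancher/server

# Registering a host is done with rancher/agent, but the exact command --
# including a one-time registration token -- is generated for you by the
# Rancher UI (Infrastructure > Hosts > Add Host), so the line below is
# only illustrative:
#
# sudo docker run -d --privileged \
#   -v /var/run/docker.sock:/var/run/docker.sock \
#   rancher/agent http://<server-ip>:8080/v1/scripts/<registration-token>
```

The `<server-ip>` and `<registration-token>` placeholders are stand-ins for values your own Rancher server produces.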
Rancher comes with a slick web UI that allows you to configure and deploy an application. The Applications tab shows the components that make up your application, including any services and load balancers that you have created. From here, you can click on a specific container to see a view of the container’s details and resource usage. You can also view logs or execute a shell directly from the web UI, without having to SSH into the host machine.
To deploy an application with Rancher, you’ll create a number of services and link them together to allow them to communicate with one another. A service is responsible for managing a set of identical containers. Say, for example, that your application has a discussion forum that you’ve made into a Docker image. You could create a Rancher service for your forum that would allow it to scale independently of other components in your application. Using the Rancher web UI, you can create a new service and specify the number of copies, port mappings, links to other services, and advanced settings such as health checks.
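Everything configurable in the UI can also be expressed in Rancher’s compose-file format, which pairs a standard `docker-compose.yml` with a `rancher-compose.yml` for Rancher-specific settings. A hypothetical forum service (the image name, port, and health-check values below are illustrative, not our actual configuration) might look like:

```yaml
# docker-compose.yml -- the service, its port mapping, and its links
forum:
  image: example/edx-forum:latest   # hypothetical image name
  ports:
    - "4567:4567"
  links:
    - elasticsearch
    - mongo

# rancher-compose.yml -- Rancher-specific settings for the same service
forum:
  scale: 3                # number of identical containers to run
  health_check:
    port: 4567
    interval: 2000        # milliseconds between checks
    response_timeout: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
```

Keeping the two files side by side means the same definition works with plain Docker Compose, with Rancher just layering on scale and health-check behavior.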
Rancher also supports GitHub integration, making it straightforward to manage users and access to your Rancher projects. You can add either individual users or entire teams and restrict permissions to Owner, Member, or Readonly. Users are associated with a single Rancher environment, so you can follow the principle of least privilege and give access to only those who need it.
Open edX on Rancher
Open edX is complicated. Because it’s a rich, scalable web application, Open edX has many different pieces that all have to communicate and work together. Last year, we began separating these pieces into Docker images. Ultimately, we think that running Open edX as a set of containers will allow for easier scaling, greater robustness, and more efficient use of resources.
Although Docker is great for packaging containerized applications, it doesn’t provide out-of-the-box functionality for orchestrating containers or facilitating communication between services. (We’ve tried Compose, Swarm, and Machine, but they’re not quite mature or feature-rich enough to use in production.) For these reasons, we’ve been exploring other open source solutions, including Kubernetes and Rancher, to deploy our containerized version of Open edX.
After we’d done the work to create Docker images for each major component of Open edX, it was pretty straightforward to deploy with Rancher. We simply had to create a service for each component, link the services that need to communicate, and configure a few settings. Below, you can see a view in Rancher that shows each service and how they are linked.
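In compose terms, that service graph might look roughly like the sketch below. The service and image names are illustrative stand-ins; the real deployment has more components and configuration than shown:

```yaml
# Illustrative docker-compose.yml sketch of the linked Open edX services
mysql:
  image: mysql:5.6
mongo:
  image: mongo:3.0
memcached:
  image: memcached:latest
lms:
  image: example/edx-lms:latest   # hypothetical image name
  links:
    - mysql
    - mongo
    - memcached
cms:
  image: example/edx-cms:latest   # hypothetical image name
  links:
    - mysql
    - mongo
    - memcached
```

Each link gives the consuming service a resolvable hostname for its dependency, which is what lets the containers find one another without any manual network wiring.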
I’ve only spent a couple of days playing around with Rancher, but my initial impression is that Rancher is slick, easy to get started with, and has many of the advanced features required to run containerized applications in production, such as scaling, load balancing, and health checks. That said, Rancher is still officially in beta, while Kubernetes is production-ready and backed by over a decade of Google’s experience managing containerized applications. Kubernetes also has a few features missing from Rancher, such as secret management. We intend to continue our work with Kubernetes, but Rancher could provide a compelling alternative for some use cases.