Running Windows Containers in 2023

By Felix Leven, Senior Systems Engineer, Microsoft & Citrix
A lot has changed since Windows containers were first introduced, so I'd like to give you an overview of using them in 2023.
A quote from the Mirantis FAQ:
"Docker Inc. was acquired by Mirantis in 2019 and Microsoft has announced that on April 30, 2023, it will transfer support for Mirantis Container Runtime (formerly Docker Engine - Enterprise) to Mirantis."

And a quote from Microsoft:
"At the end of September 2022, Microsoft will no longer maintain the DockerMsftProvider API."
 
In this article I will focus only on running services on Windows Server, not on the desktop version. I'd like to encourage everyone to seriously start implementing services on Docker, even if you are a Windows shop or just don't have any Linux experience yet. Don't worry: you can run everything you need on Windows.
 
As of today, we have three choices when it comes to running containers on Windows Server (see https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1):
 
  • Moby Project / Docker Community Edition (CE): free, but you might miss some features.
 
  • Mirantis Container Runtime (formerly Docker Engine - Enterprise): has all features available on Windows, and you can get support if you need it.
 
  • containerd and nerdctl: I consider this the more advanced option, and it is not (yet) supported by GUI tools such as Portainer. It becomes more interesting once you start moving to Kubernetes.

Whatever your choice might be, the setup and management process is well documented, including convenient scripts.
 
Quick Start 

Scripts to install Docker
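
For example, at the time of writing, Microsoft's quick-start guide points to a convenience script for installing Docker CE on Windows Server. A minimal sketch, assuming the script location from the guide linked above (verify the URL in the documentation before running anything):

    # Download and run Microsoft's Docker CE install script (elevated PowerShell)
    Invoke-WebRequest -UseBasicParsing `
        -Uri "https://raw.githubusercontent.com/microsoft/Windows-Containers/Main/helpful_tools/Install-DockerCE/install-docker-ce.ps1" `
        -OutFile install-docker-ce.ps1
    .\install-docker-ce.ps1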
 
After your first Docker host is up and running, you should pay attention to the daemon.json file. It is not as feature-rich as on Linux, but there are some options I consider very important, and I wish someone had told me about them before I started my journey.
 
Some options are missing from the Microsoft documentation, so let's check Docker's:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file
 
"experimental", if set to true, it will expose metrics that can be collected by prometheus for monitoring. You could also get metrics using windows_exporter.
"data-root", you can move docker to another drive, else it defaults to c:\ProgramData\docker
"shutdown-timeout", lets you expand the time granted to shutdown all your containers before the processes are killed. Be warned, some containers need longer then the default 15 sec. to gracefully shutdown and a timeout of 30 - 60 might be a saver bet.
"group",  by default, only members of the local "Administrators" group can access the Docker Engine through the named pipe, add more security groups here.
 
My next recommendation is to start using a reverse proxy as soon as possible and to build your stack with Docker Compose v2.
 
A reverse proxy saves you a lot of time by centralizing the entry point to your services, which tightens security and makes tasks like certificate management much easier. Users can access your services through a clean, dedicated URL, without adding port numbers to distinguish them.
To implement a proxy, you should also make use of Docker Compose. The new Docker Compose v2 is highly recommended over the earlier version: it is a complete rewrite and fixes many issues the old one had.
 
Also, quoting Docker:
"From the end of June 2023 Compose V1 won't be supported anymore and will be removed from all Docker Desktop versions."
 
By using Compose you can start and stop all your containers as a combined stack, and Compose automatically sets up a default container network. This lets you run a reverse proxy like Caddy as the only container exposed to the public, while container-to-container traffic stays inside the default network that Compose created.
For example, a monitoring front end like Grafana should fetch the data you want to visualize over the internal container network, requesting metrics from a database like InfluxDB or Prometheus.
At first there might be exceptions to this rule, such as SSH connections to your version-control repository container: you would need additional plug-ins compiled into your Caddy server for it to also forward SSH traffic.
Of course, all of this applies to a single-node setup and gets considerably more complex once you start using Docker Swarm, for example, to scale your workload horizontally across multiple nodes. Until then, you will have a solid infrastructure for familiarizing yourself with Docker; a sketch of such a stack follows below.
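
To make this concrete, here is a minimal, hypothetical docker-compose.yml for such a single-node stack. The image names are placeholders for your own Windows builds; only Caddy publishes ports, while Grafana and Prometheus stay reachable solely on the default network Compose creates:

    services:
      caddy:
        image: my-caddy:latest          # placeholder for a self-built Windows Caddy image
        ports:
          - "80:80"
          - "443:443"
      grafana:
        image: my-grafana:latest        # no published ports, only reachable through the proxy
      prometheus:
        image: my-prometheus:latest     # reachable internally as http://prometheus:9090

With Compose v2, the whole stack then starts and stops as one unit via docker compose up -d and docker compose down.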
 
One more example:
In all your configuration files you might refer to services like this:
 
Without a reverse proxy:
https://prometheus.mylab.internal:9090 -> external access to a container
 
Using a reverse proxy:
http://prometheus:9090 -> internal access to a container (in this case also skipping SSL entirely)
 
You will have to change a lot of configuration files if you decide to implement a reverse proxy at a later point in time.
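
With Caddy, for instance, the mapping from the clean external URL to the internal service name is only a few lines in its Caddyfile; the host and service names below match the hypothetical example above (3000 is Grafana's default port):

    prometheus.mylab.internal {
        reverse_proxy prometheus:9090
    }

    grafana.mylab.internal {
        reverse_proxy grafana:3000
    }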
 
To wrap things up, I'd like to look at images and volumes, and at how approaching them on Windows might differ from the more established Linux Docker environments.
 

Images:

We have to talk about images, and the truth is that after 4-5 years of using Docker exclusively on Windows, I use only two official images aside from the official OS base images: the Jenkins inbound agent (though I put another layer on top of it) and Portainer Business, which is a good example of using the Windows Nano Server image. Images for Gitea, Grafana, Prometheus, Nexus, Jenkins and all the different exporters I build on my own.
 
Why is that? Ready-to-ship container images are one of Docker's most important selling points, aren't they?
Well, I wanted to understand how a Docker image is built, which binaries are downloaded and installed, and so on, so I started creating my own images. This way I am free to choose the right base image, Server Core or Nano Server, and the OS version.
The binaries you want to run in your container need to be downloaded, extracted and moved, and a configuration might be added (or a volume containing the configuration file). Everyone building official images for you has a different approach to these steps, but I prefer to establish my own consistent way of writing my Dockerfiles.
Often, container maintainers don't know much about the Windows ecosystem, or they simply don't have the resources to build and test for another platform (Windows is still not free, as in freedom), and that's why you have to build an image yourself from time to time.
Security might be another reason to create your own images: if a container is based on IIS, for example, be sure to bake all the well-known security best practices into the image, force a high TLS version, and disable everything you don't need. No exceptions here!
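
As an illustration, a self-built Windows image typically follows the pattern below. This is only a sketch: the download URL, tool name and paths are placeholders, and you would choose Server Core or Nano Server and pin the OS version to match your hosts:

    # escape=`
    FROM mcr.microsoft.com/windows/servercore:ltsc2022
    SHELL ["powershell", "-Command"]

    # Download, extract and clean up the binaries (URL and paths are placeholders)
    RUN Invoke-WebRequest -Uri 'https://example.com/tool-1.2.3-win-amd64.zip' -OutFile 'C:\tool.zip'; `
        Expand-Archive -Path 'C:\tool.zip' -DestinationPath 'C:\tool'; `
        Remove-Item 'C:\tool.zip'

    # Add your own configuration, or mount it as a volume at runtime instead
    COPY config.yml C:/tool/config.yml

    ENTRYPOINT ["C:\\tool\\tool.exe", "--config", "C:\\tool\\config.yml"]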
 

Volumes:

There are many different mount types; the ones I use are:
  1. Anonymous volumes: I consider these temp drives with a random name that can be deleted once your container stops.
  2. Named volumes: carry a dedicated name and can be used to store persistent data.
  3. Bind mounts: you mount an existing host folder into your running container to read and write data.
  4. Named pipes: can be used by a container that needs to talk to the Docker Engine on the host, for example a container running Compose to set up other containers.
 
Options 1 and 2 should be the right choice, but I have to admit that I still recommend bind mounts to beginners. It was much easier for me to quickly peek into the mounted folders and see what is going on inside a volume. There were plenty of other advantages too: I could check the security settings of files, edit a file, or share the bind-mounted folder with colleagues. I still use bind mounts a lot, even though this is not considered best practice. A few examples follow below.
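
Here are some hypothetical docker run examples to illustrate the difference (image, folder and volume names are placeholders):

    # 1. Anonymous volume: temporary, random name, removed together with --rm
    docker run --rm -v C:\cache my-tool:latest

    # 2. Named volume: persistent data, managed by Docker
    docker run -d -v grafana-data:C:\grafana\data my-grafana:latest

    # 3. Bind mount: map an existing host folder into the container
    docker run -d -v C:\docker\grafana:C:\grafana\data my-grafana:latest

    # 4. Named pipe: give a container (e.g. Portainer) access to the Docker Engine
    docker run -d -v \\.\pipe\docker_engine:\\.\pipe\docker_engine portainer/portainer-ee:latest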
 
I hope I could encourage more Windows administrators to use containers on Windows; you can benefit a lot from moving services from VMs to containers. If you'd like to dig a little deeper into the topic, I highly recommend the following experts:
 

Elton Stoneman:

When it all began on Windows, it was tremendously helpful to have someone with the right enthusiasm at a time when many people were asking why you would run Docker on Windows at all. I advise you to read Elton's books and blog and to visit his GitHub repository; he did a very good job of explaining Docker on Windows to everyone!
 
Elton's Blog and visit Elton on GitHub
 

Stefan Scherer:

I also need to mention the GitHub repo of Stefan Scherer, a retired Docker Captain, because you can learn a lot from all the different Docker builds he has collected over the years.
 
Stefan's Blog and visit Stefan on GitHub