These are some more notes about using .Net Core with Docker containers in a microservice architecture.
A Docker container is just a process, and it has a much smaller footprint than a virtual machine. A VM includes the application, its libraries and binaries, and a full operating system. Containers, on the other hand, include the application and all of its libraries/binaries but only a slimmed-down version of the OS. One of the key advantages of running applications in Docker is that it answers that common developer complaint – ‘it runs on my machine but not in X’.
Installation
For Mac and Windows 10+ we use Docker CE (or the Enterprise edition). For Windows 7/8 we use Docker Toolbox. The difference is that the Docker Toolbox client relies on virtualization software. Note that this is only for development systems such as Mac or Windows. In a production environment Docker should run directly on the OS and not inside this virtual environment.
On older Windows 7/8 we do this by leveraging Oracle’s VirtualBox, which is what Docker Toolbox installs with. In Windows 10+ we have Hyper-V built into the OS, so Docker runs off of that. On the server side, note that Docker requires Windows Server 2016 or higher.
Credentials for Shared Drives
It is important to note that when working with Docker CE for Windows, the Docker client must be given credentials to set up Shared Drives. This is configured through the client settings as shown below. Ensure that the credentials given are valid and able to access the selected drive. I found that the docker commands do not show good error messages if these credentials are set incorrectly. More information about this can be found here:
https://blogs.msdn.microsoft.com/stevelasker/2016/06/14/configuring-docker-for-windows-volumes/
Architecture and How It Works
Docker relies on images – either ones we create ourselves or pre-made ones available through registries such as Docker Hub. Docker takes a base image and layers the container application on top of it. The following sections describe this architecture and methodology further.
Layered File System
Docker uses a layered file system architecture. At the base, we have an OS layer (e.g. Ubuntu from Docker Hub). This layer is read-only and cannot be modified, meaning nothing can be written to this file system layer. Instead, a thin container layer sits above it and file system writes are done at that layer.
Note that the Ubuntu base layer is actually composed of multiple layers (each represented by a hash). These layers could include frameworks pre-installed with the OS, such as the .NET SDK or runtime. The set of these layers combine to create the base image. (The dotnet core base image includes a Linux OS and the Kestrel web server.)
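To see the individual layers that make up an image, we can run the ‘docker history’ command against it. For example, against the node image:

docker history node:latest    // lists each layer, its size, and the instruction that created it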
Containers and Volumes
Volumes are special directories in a container (aka data volumes). These can be shared among containers. Updates to an image won’t affect a volume, and volumes are persisted even when the container is removed. The Docker Host (the OS that the container is running on top of) manages these volumes. Each volume is mounted by the Docker Host.
The following example is a command for starting a node image with a volume “-v” at /var/www. This path is relative to the inside of the container. To see where this maps on the external host, we can run the ‘inspect’ command. That gives us a “Mounts” section with the “Source” property indicating where the mounted volume is mapped to on the host. Note that once a volume is mapped, we have two-way read and write access to it. For example, we could drop source code into this area so that the container reads off of it to run the application, and in reverse, the container could write to this area for log files and such.
docker run -p 8080:3000 -v /var/www node

docker inspect mycontainer
==>
"Mounts": [
    "Name": "...",
    "Source": "/mnt/.../var/lib/docker/volumes/...",
    "Destination": "/var/www",
    "Driver": "local",
    "RW": true
]
In the example below, we are specifying where on the host the mounted volume should be mapped to. The $(pwd) is the host location, the current working directory. When doing the “inspect” command, note that it is using an alias of “/src” for the source path. The “-w” indicates the ‘working directory’, i.e. where to run a command. Without designating the working directory, the “npm start” command would be executed in the wrong directory and therefore throw an error. The final parameter is the command to run.
docker run -p 8080:3000 -v $(pwd):/var/www -w "/var/www" node npm start

docker inspect mycontainer
==>
"Mounts": [
    "Name": "...",
    "Source": "/src",
    "Destination": "/var/www",
    "Driver": "local",
    "RW": true
]

// Note that in PowerShell the command should use ${pwd} instead of $(pwd)
Below is another example where we use volume mounting and run an ASP.NET MVC application. The application uses a .NET Core SDK base image from Microsoft. We run the docker command with the “-it” option for interactive mode and use a bash shell to interact. Inside the container we are able to run the dotnet commands.
C:\Projects\docker> docker run -it -p 8080:80 -v ${pwd}:/app -w "/app" microsoft/dotnet:2.1-sdk /bin/bash

root@a00327ed64c0:/app# dotnet restore
  Restoring packages for /app/docker.csproj...
  Generating MSBuild file /app/obj/docker.csproj.nuget.g.props.
  Generating MSBuild file /app/obj/docker.csproj.nuget.g.targets.
  Restore completed in 1.36 sec for /app/docker.csproj.

root@a00327ed64c0:/app# dotnet build
Microsoft (R) Build Engine version 15.7.179.6572 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

  Restore completed in 58.1 ms for /app/docker.csproj.
  docker -> /app/bin/Debug/netcoreapp2.1/docker.dll
  docker -> /app/bin/Debug/netcoreapp2.1/docker.Views.dll

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:08.23

root@a00327ed64c0:/app# dotnet run
Using launch settings from /app/Properties/launchSettings.json...
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using '/root/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[58]
      Creating key {ba0df553-56f6-4f83-986b-981e5f1a286e} with creation date 2018-10-09 01:48:54Z, activation date 2018-10-09 01:48:54Z, and expiration date 2019-01-07 01:48:54Z.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {ba0df553-56f6-4f83-986b-981e5f1a286e} may be persisted to storage in unencrypted form.
info: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[39]
      Writing data to file '/root/.aspnet/DataProtection-Keys/key-ba0df553-56f6-4f83-986b-981e5f1a286e.xml'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to https://localhost:5001 on the IPv6 loopback interface: 'Cannot assign requested address'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
Hosting environment: Development
Content root path: /app
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Volumes can also be managed by Docker itself by running the command without a host path in the -v mapping. For example:
docker run -p 8080:3000 -v /var/www node
In this case Docker figures out where to mount the volume and manages it itself. To remove the volume, we can add the ‘-v’ option when removing the container. The command below removes the container and the volume:
docker rm -v [containerid]
Dockerfile
When developing an application we want to bake the application together with the base image into a final custom image. This image can then be published to a registry. An easy way to manage this is with a Dockerfile. The Dockerfile gets read by the docker client to determine how the image is to be created. Based on the final image it creates, we can generate a container. As the Dockerfile is processed it actually generates multiple intermediate images. Dockerfiles are text files without an extension.
The basic construct of a Dockerfile contains the following:
- FROM – defines the base image
- MAINTAINER – author information
- RUN – commands to execute
- COPY – copy source code into a container (instead of mapping volumes)
- ENTRYPOINT – defines what kicks off the application in the container
- WORKDIR – area the application would be running from
- EXPOSE – ports to expose
- ENV – environment variables
- VOLUME – other volumes we could map (e.g. for logs)
Docker RUN vs CMD vs ENTRYPOINT
When working with Dockerfiles we often use the RUN, CMD, and ENTRYPOINT commands. The differences are as follows. The RUN command executes the given commands in a new layer (it creates a new intermediary layer). It is often used to install required packages and libraries, as well as the main application that is to be run inside the container. The CMD command sets a default command, which is executed when the container is run without command parameters. When the image is run with a command parameter, the CMD command is ignored. For example:
CMD echo "Hello!"
The above is ignored if we run the image with:
docker run -it <sampleImage> /bin/bash
The ENTRYPOINT command defines what is to be executed when the image runs. It is similar to the CMD command, however, it is always executed regardless of parameters passed in during the ‘docker run’. For example, the following will print “Hello John” when executed.
ENTRYPOINT ["/bin/echo", "Hello"]
CMD ["World"]

docker run -it <sampleImage> John

>> Hello John
Each of the RUN, CMD and ENTRYPOINT commands can be executed in shell-form or exec-form. See section below for details on these different forms.
Docker Shell-Form vs Exec-Form commands
When writing out Docker commands in the Dockerfile you will notice two different form types. Example:
RUN dotnet restore vs RUN ["dotnet", "restore"]
The first command is a ‘shell form’ command – meaning it will be executed in a shell within the container. The second command is in ‘exec form’ and takes a JSON array as an argument (the array must use double quotes around the values, not single quotes). The ‘exec form’ does not invoke a command shell. The advantage of using the ‘exec form’ is that it avoids issues with shell command parameters and shell signal processing. Conversely, the ‘shell form’ is useful when chaining shell commands (using pipes or &&) or when relying on environment variable expansion or other shell features.
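As a quick sketch of the difference, consider how environment variables behave in each form:

# shell form - runs under /bin/sh -c, so $NODE_ENV is expanded by the shell
RUN echo "Running in $NODE_ENV mode"

# exec form - no shell is invoked, so $NODE_ENV is passed along literally
RUN ["echo", "Running in $NODE_ENV mode"]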
Dockerfile Examples
Below is a sample Dockerfile that defines a node image.
FROM node:latest
LABEL name="John"
ENV NODE_ENV=production
ENV PORT=3000
COPY . /var/www
WORKDIR /var/www
RUN npm install
EXPOSE $PORT
ENTRYPOINT [ "npm", "start" ]
To create an image from the Dockerfile, we can run the following command:
[/c/Projects/]$ docker build -f Dockerfile -t myproject/node .
Sending build context to Docker daemon  6.966MB
Step 1/10 : FROM node:latest
 ---> 8672b25e842c
Step 2/10 : LABEL name="John"
 ---> Running in 0b0df04a54ae
Removing intermediate container 0b0df04a54ae
 ---> 3817f3dbedc8
Step 3/10 : ENV NODE_ENV=production
 ---> Running in be42bab9b6f2
Removing intermediate container be42bab9b6f2
 ---> 5731e0f1522c
Step 4/10 : ENV PORT=3000
 ---> Running in e63ac18b0e23
Removing intermediate container e63ac18b0e23
 ---> 5b882a8fe990
Step 5/10 : COPY . /var/www
 ---> c5166d6b4c63
Step 6/10 : WORKDIR /var/www
 ---> Running in 7e1000e7d17a
Removing intermediate container 7e1000e7d17a
 ---> 78148bf3b87b
Step 7/10 : VOLUME [ "/var/www", "logs" ]    // ignore - this step was later removed from the Dockerfile
 ---> Running in 45c2fc18b4ea
Removing intermediate container 45c2fc18b4ea
 ---> 2adf12fa99be
Step 8/10 : RUN npm install
 ---> Running in 8db891c32264
audited 170 packages in 1.884s
found 0 vulnerabilities
Removing intermediate container 8db891c32264
 ---> 372927d336b3
Step 9/10 : EXPOSE $PORT
 ---> Running in ef05d0e15e28
Removing intermediate container ef05d0e15e28
 ---> 605aa5a62611
Step 10/10 : ENTRYPOINT [ "npm", "start" ]
 ---> Running in cb830cda7a7c
Removing intermediate container cb830cda7a7c
 ---> b2f44f3c730c
Successfully built b2f44f3c730c
Successfully tagged myproject/node:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Note above that there are several intermediate containers. Each instruction in the Dockerfile generates these intermediate containers. They do not show up in the docker image list, but they are cached by the docker client, so future builds will pull from the cache, making the build process faster. Run the ‘docker images’ command to see the result.
[/c/Projects/]$ docker images
REPOSITORY       TAG      IMAGE ID       CREATED      SIZE
myproject/node   latest   b2f44f3c730c   3 days ago   680MB
Once the image is created, we can run it in a container with the following command. We also do a ‘docker ps’ to see it in the running state.
[/c/Projects/]$ docker run -d -p 8080:3000 myproject/node
e10e70f3d99d699679d75993e0e214db19b0a983015e1d7aa60b47a2f50361e1

[/c/Projects/]$ docker ps -a
CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS         PORTS                    NAMES
e10e70f3d99d   myproject/node   "npm start"   10 seconds ago   Up 7 seconds   0.0.0.0:8080->3000/tcp   angry_hermann
Note that the container above was run with a ‘-d’ for ‘detached’ mode. If we wanted to execute something in that container we could do so with the ‘exec’ command. This only works for containers that are running.
[/c/Projects/]$ docker exec e10e node myscript.js
Finally we can clean everything up by stopping the container and removing it as well as the image we created.
[/c/Projects/]$ docker stop e10e

[/c/Projects/]$ docker rm e10e
e10e

[/c/Projects/]$ docker rmi myproject/node
Untagged: myproject/node:latest
Deleted: sha256:b2f44f3c730c8eb638b963e67e540f407808c10eef550442ef25c07b211e72fa
Deleted: sha256:605aa5a62611536eed51d667547d3740f0234ff9ded1f7a32d24fcd31ea7c4c2
Deleted: sha256:372927d336b3232de40f44f16a5ab968a6e8f93c9704018e60c815fc2d6b8160
Deleted: sha256:6e39b06bafd3c7e27f5a66d7561220b143d4e6e82dfd41e0d4b9f041cb503f8c
Deleted: sha256:2adf12fa99be8f2e0231a5bbc9651703443f8d3ec523f4110fea13d9900af6e0
Deleted: sha256:78148bf3b87b1e2aed520795db37e77aa1003f5bf1a1f3780223e33f7fc0e7cb
Deleted: sha256:c5166d6b4c632a08d3541a00fd5a9033099b2d32ed7093c9c1be48fb21f2553f
Deleted: sha256:def663cb9678c432836a4d09360a361d70a43c06356acdd4b28e26337910135c
Deleted: sha256:5b882a8fe990306113fad6b52e83274dabe89900e296a5b1be50d376cfa113b5
Deleted: sha256:5731e0f1522cd4b9bc0b98915fbaed908e7d6ea76e51275a783002e976c7112b
Deleted: sha256:3817f3dbedc82cf40d28ac3dc88a7e2073cc9714cccc7d4323a8630f20e40195
Note that when we remove the image it also removes all the intermediate layers. To remove all the containers, here is a shortcut command. This basically loops through each item returned by “ps -a” and does a force remove:
docker rm -f $(docker ps -a -q)
We can also push images up to a registry such as Docker Hub. Use the ‘docker push’ command to do this.
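For example, a minimal sketch of publishing the image built above (the account name and tag are placeholders):

docker tag myproject/node myaccount/node:1.0   // tag the image with your registry account
docker login                                   // authenticate against Docker Hub
docker push myaccount/node:1.0                 // upload the image to the registry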
When trying to remove a long list of images, we can use the following example. In this example I have several images that were created with names starting with ‘eshop*’. By running a simple awk and xargs, I’m able to extract the image IDs and feed them into the ‘docker rmi’ command.
[lee@macbook:/c/project/]$ docker images -a | grep eshop
eshop/payment.api                latest   3c41567e0d22   3 months ago   258MB
eshop/ordering.signalrhub        latest   b7ce42a48f40   3 months ago   259MB
eshop/mobileshoppingagg          latest   a7db9551abd5   3 months ago   260MB
eshop/webshoppingagg             latest   5941dfb59d0d   3 months ago   258MB
eshop/ocelotapigw                latest   35eb8a55a9ca   3 months ago   275MB
eshop/identity.api               dev      8fac9640161c   4 months ago   255MB
eshop/ordering.api               dev      8fac9640161c   4 months ago   255MB
eshop/ordering.signalrhub        dev      8fac9640161c   4 months ago   255MB
eshop/payment.api                dev      8fac9640161c   4 months ago   255MB
eshop/webmvc                     dev      8fac9640161c   4 months ago   255MB
eshop/webstatus                  dev      8fac9640161c   4 months ago   255MB
eshop/locations.api              dev      8fac9640161c   4 months ago   255MB
eshop/mobileshoppingagg          dev      8fac9640161c   4 months ago   255MB
eshop/ordering.backgroundtasks   dev      8fac9640161c   4 months ago   255MB
eshop/marketing.api              dev      8fac9640161c   4 months ago   255MB
eshop/ocelotapigw                dev      8fac9640161c   4 months ago   255MB
eshop/webshoppingagg             dev      8fac9640161c   4 months ago   255MB
eshop/webspa                     dev      8fac9640161c   4 months ago   255MB
eshop/basket.api                 dev      8fac9640161c   4 months ago   255MB
eshop/catalog.api                dev      8fac9640161c   4 months ago   255MB

[lee@macbook:/c/project/]$ docker images -a | grep eshop | awk '{print $3}'
3c41567e0d22
b7ce42a48f40
a7db9551abd5
5941dfb59d0d
35eb8a55a9ca
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c
8fac9640161c

[lee@macbook:/c/project/]$ docker images -a | grep eshop | awk '{print $3}' | xargs docker rmi
Untagged: eshop/payment.api:latest
Deleted: sha256:3c41567e0d22b9b1361900668688ed1efa71623768ba594cac0ade36645a4450
Deleted: sha256:a0656a2b6fb5e2669aa77b5f64ce204cecbe958d56d3c3f7b6a8fe622e6dc889
Deleted: sha256:72dd021efd8d164f923958119ea7bb2e5fb25c6fb0f22eac8600f94b1578c85f
Untagged: eshop/ordering.signalrhub:latest
Deleted: sha256:b7ce42a48f40f4d0233cfc2f46611fe3318683604d4f456af95cf564afa94cf8
Deleted: sha256:efb983af1768c9b779b2d3b7df00044bcbbb213052a1c807113de0ae7241e93c
Deleted: sha256:ae475a8a4dc5d984ea4b336fe4bbcc17d250734997397402ac2cac746b79d0fc
Untagged: eshop/mobileshoppingagg:latest
Deleted: sha256:a7db9551abd5181a84d45c611efcda0fc15fca360411acb215854cf409ba7ad5
Deleted: sha256:46c7d601d3f3967f7c6a0bd233e47e14df2dcb3b6e6d329ce2cc6e55f6b117a3
Deleted: sha256:0cd178b94b9386689d0b45fbeaf14706973f5562015204d558a09002344af703
Untagged: eshop/webshoppingagg:latest
Deleted: sha256:5941dfb59d0d19ff4c25d948e97026ce381513b1631f49d7b63ef60aefbcb702
Deleted: sha256:80cb6dfec229045e6f7d770c8a6e0924090c52c0d65e34cd47acfb250477db8d
Deleted: sha256:78610205c63898f5c88434f0562b3413f22a8f0dd6653cb545326c10f905dbec
Untagged: eshop/ocelotapigw:latest
Deleted: sha256:35eb8a55a9ca1b226cd2adb3a81b053d145020622e644e564b8a75ab5a8dba42
Deleted: sha256:cf7064b9bc18473a04e4f1a35bdf4f0909f121bc115f7f9a87ffc24af053c37b
Deleted: sha256:8d2f108d94dfcc5c6ce336a4a22aeaff3586f2134bd937c71dee20d5b1508952

// Removing all images that don't have tags (shown as <none> in the docker images list)
[lee@macbook:/c/project/]$ docker rmi -f $(docker images -f "dangling=true" -q)
Multi-stage build with Dockerfile
Within a single Dockerfile we can configure a multi-stage build process. This allows us to generate docker images with a drastically smaller footprint. Typically, the docker images used for building, such as the aspnetcore SDK image, are very large (2GB+). It is unnecessary to deploy these large images into production; instead, we only need to deploy the runtime image and use the SDK images for local development. This is where the multi-stage build comes into play. A multi-stage build initially builds the application using the larger SDK image. Once it is built, it copies the output into a runtime area and creates the final image from the runtime image. This way the final image is much smaller.
The following is a template Dockerfile from Visual Studio:
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
RUN dotnet restore

# copy everything else and build app
COPY aspnetapp/. ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app/aspnetapp/out ./
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
The example above also shows how we can use docker image caching to increase the performance of our docker builds. Note that during the build process we copy the project files and run ‘dotnet restore’. If there are no changes to the project files (no library references added or removed), then on subsequent docker builds the ‘dotnet restore’ step will take the cached intermediary image from the prior run. The same is true for the ‘dotnet publish’ step, which takes into consideration the COPY command before it that copies the source code files. If the source code files have not changed, there is no need to rebuild that layer.
Docker Container Communication
There are two ways that docker containers can communicate with each other.
- Legacy Linking – easy to setup
- Bridge Network – newer option that uses a custom bridge network; only containers in that network can communicate with each other
Legacy Linking
This is an older option that has recently been replaced with Bridge Networks. However this can still be useful in development environments. There are three steps to this process.
1. Run a container with a name as shown below
docker run -d --name mynamedcontainer node
2. Link a running container by name. The example below is also using an alias name for the container it is linking to.
docker run -d -p 5000:5000 --link mynamedcontainer:myaliasname mysql
3. Repeat for additional containers. The container that is referenced/linked must be created first.
Bridge Network or Container Network
Bridge networks are basically isolated networks where the containers in that network can access each other. It is a strategy for grouping multiple containers. This is done by first creating the custom bridge network and then running a container with a reference to that network. When the running container references that network, it also sets its name so that other containers can reference it. In the example below, a mongodb container is created in the ‘mynameofnetwork’ bridge network and runs with the name ‘mongodb’.
docker network create --driver bridge mynameofnetwork
docker run -d --net=mynameofnetwork --name mongodb mongo
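Any other container started in the same network can then reach the mongodb container by its name, since Docker provides DNS resolution for container names on user-defined networks. For example (the application image name is a placeholder):

docker run -d --net=mynameofnetwork --name nodeapp myproject/node
// inside nodeapp, the database is reachable at mongodb://mongodb:27017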
To view what networks are currently defined on the host, we use the ‘network’ command.
[/c/Projects]$ docker network ls
NETWORK ID     NAME                                        DRIVER   SCOPE
111994ef6faa   bridge                                      bridge   local
ae51308ae70d   dockercompose11947621487798887032_default   bridge   local
b2000de46bb4   host                                        host     local
e84423404a16   none                                        null     local
Docker Compose
Docker Compose provides a way of getting multiple containers running together and can also be used to manage automation and the container lifecycle. We use a docker-compose.yml file, which declares the services, networks, and volumes that make up the application.
Some of the key properties of a docker-compose file are as follows:
- version (older versions of docker-compose didn’t require this, but today it is required)
- services – the services (containers) this application provides; each can be built locally or pulled as an image
- build context
- environment variables
- image
- networks
- ports
- volumes
Example of this structure:
version: '2'

services:
  node:
    build:
      context: .
      dockerfile: node.dockerfile
    networks:
      - nodeapp-network

  mongodb:
    image: mongo
    networks:
      - nodeapp-network

networks:
  nodeapp-network:
    driver: bridge
Note that there are no opening or closing brackets in the docker-compose.yml file; the indentation, however, is important.
To build and run the docker compose setup we can use the following commands. We can append a service name at the end of these commands to run specific services defined in the docker-compose.yml file. Without the appended names, it runs the complete docker-compose (all services).
docker-compose build                // build the images
docker-compose up                   // build (if needed) and then run the containers, all in one command
docker-compose up --no-deps node    // only node is brought up, disregarding other dependencies
This automatically starts up the containers and links them up. Some more useful commands:
docker-compose up --no-deps node
docker-compose start
docker-compose stop                       // stops the containers
docker-compose down                       // stops and removes the containers
docker-compose down --rmi all --volumes   // also remove all images and volumes
docker-compose logs
docker-compose ps
docker-compose rm
Example using Docker-Compose
To bring all this together I’m playing around with this sample application. It is composed of an NGINX proxy container and a Redis cache container followed by several nodejs and mongodb containers.
https://github.com/DanWahlin/CodeWithDanDockerServices
Once the project is downloaded, we set the environment variables as instructed in the README file. Then we follow these steps:
export APP_ENV=development
export DOCKER_ACCT=solidfish
npm install

docker-compose build
// The build sequence should go -> mongodb, redis, node, node2, node3, nginx
// (configured through the "depends_on" property in docker-compose)

docker-compose up
// the 6 containers will spin up

docker ps -a
CONTAINER ID   IMAGE                        COMMAND                  CREATED         STATUS         PORTS                                      NAMES
32b49fb5e8c3   solidfish/nginx              "nginx -g 'daemon of…"   4 minutes ago   Up 4 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx
815ab1c26578   solidfish/node-codewithdan   "pm2 start server.js…"   4 minutes ago   Up 4 minutes   0.0.0.0:32773->8080/tcp                    node-codewithdan
ccc6e546547f   solidfish/node-codewithdan   "pm2 start server.js…"   4 minutes ago   Up 4 minutes   0.0.0.0:32772->8080/tcp                    node-codewithdan-3
73656edd5022   solidfish/node-codewithdan   "pm2 start server.js…"   4 minutes ago   Up 4 minutes   0.0.0.0:32771->8080/tcp                    node-codewithdan-2
84ce6183bfdd   solidfish/mongo              "/mongo_scripts/run.…"   4 minutes ago   Up 4 minutes   0.0.0.0:27017->27017/tcp                   mongo
65aec6ddc100   solidfish/redis              "redis-server /etc/r…"   4 minutes ago   Up 4 minutes   0.0.0.0:32770->6379/tcp                    redis

docker exec node-codewithdan node dbSeeder.js
// running the seeder script inside the first node container
Trying to connect to mongo/codeWithDan MongoDB database
Initializing Data
Seed data loaded!
(node:30) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
db connection open

docker-compose down
// all containers spun down
Docker System
The docker system command can be used to monitor the docker host system. For example, the command below shows disk usage for images, containers, and volumes.
lee@macbook:~/$ docker system df -v

Images space usage:

REPOSITORY                   TAG           IMAGE ID       CREATED        SIZE      SHARED SIZE   UNIQUE SIZE   CONTAINERS
phpmyadmin/phpmyadmin        latest        c6ba363e7c9b   2 weeks ago    166MB     0B            166MB         1
wordpress                    latest        6e880d17852f   3 weeks ago    419.7MB   55.3MB        364.4MB       1
mysql                        5.7           141eda20897f   3 weeks ago    372.2MB   55.3MB        316.9MB       1
volumemounting_console       latest        48d1f66c1bdf   3 months ago   180.4MB   180.2MB       199.8kB       0
console                      latest        cd3bbf494622   3 months ago   1.902GB   1.901GB       551.9kB       0
dotnet21sdk.vsdbg            latest        04a7cec377e3   3 months ago   1.901GB   1.901GB       0B            0
docker.dotnet.debug          latest        58abc9f37069   3 months ago   1.901GB   1.728GB       173.3MB       0
solidfish/nginx              latest        135170118325   3 months ago   18.96MB   17.74MB       1.215MB       0
solidfish/node-codewithdan   latest        1ef71785ebd9   3 months ago   98.56MB   70.32MB       28.24MB       0
solidfish/redis              latest        2e4334513174   3 months ago   83.4MB    83.4MB        20B           0
solidfish/mongo              latest        3dd2df37e47b   3 months ago   423.8MB   381.2MB       42.6MB        0
microsoft/dotnet             2.1-runtime   0b74f72810f3   3 months ago   180.2MB   180.2MB       0B            0
redis                        latest        f1897cdc2c6b   4 months ago   83.4MB    83.4MB        0B            0
node                         alpine        7ca2f9cb5536   4 months ago   70.32MB   70.32MB       0B            0
node                         latest        462743bd5c7f   4 months ago   674.4MB   0B            674.4MB       0
microsoft/dotnet             2.1-sdk       efa6f1f55357   4 months ago   1.728GB   1.728GB       0B            0
mongo                        latest        052ca8f03af8   4 months ago   381.2MB   381.2MB       0B            0
nginx                        alpine        aae476eee77d   4 months ago   17.74MB   17.74MB       0B            0
microsoft/aspnetcore-build   latest        06a6525397c2   6 months ago   2.018GB   0B            2.018GB       0

Containers space usage:

CONTAINER ID   IMAGE                   COMMAND                  LOCAL VOLUMES   SIZE     CREATED         STATUS         NAMES
34a8af267b09   wordpress:latest        "docker-entrypoint.s…"   1               2B       4 seconds ago   Up 2 seconds   myapp_wordpress_1
2ce6668d043f   phpmyadmin/phpmyadmin   "/run.sh supervisord…"   0               33.7MB   4 seconds ago   Up 2 seconds   myapp_phpmyadmin_1
626a2820d9a6   mysql:5.7               "docker-entrypoint.s…"   1               4B       5 seconds ago   Up 3 seconds   myapp_db_1

Local Volumes space usage:

VOLUME NAME                                                        LINKS   SIZE
9605a2c21a3c2edc5a87e7d803dfc42782a4caa7c655d3a0f30353cb5dd121b8   0       38.66MB
myapp_dbdata                                                       1       221.6MB
88c0f9ad8c0636e7c758c9b040b20065d63f036cd40dea38b7a2af5b07f9d399   0       38.66MB
0e094566d18184b8eaf3ffcc4cae8cd8f3a1c67e5ef3901676ad31ba9276ed13   1       38.66MB
60508d9f5349df48635d69a60eccf25e4cb26ff76e0ddeba0660e0676e0c532d   0       38.66MB

Build cache usage: 0B
The following command shows only the volumes that exist on the docker host.
lee@macbook:~$ docker volume ls
DRIVER   VOLUME NAME
local    0e094566d18184b8eaf3ffcc4cae8cd8f3a1c67e5ef3901676ad31ba9276ed13
local    88c0f9ad8c0636e7c758c9b040b20065d63f036cd40dea38b7a2af5b07f9d399
local    9605a2c21a3c2edc5a87e7d803dfc42782a4caa7c655d3a0f30353cb5dd121b8
local    60508d9f5349df48635d69a60eccf25e4cb26ff76e0ddeba0660e0676e0c532d
local    myapp_dbdata
Docker Commit
The commit command creates an image based on a container. It can be useful for committing a container’s file changes or settings into a new image. This allows you to debug a container by running an interactive shell, or to export a working dataset to another server. Generally, though, it is better to use Dockerfiles to manage your images in a documented and maintainable way.
The commit operation will not include any data contained in volumes mounted inside the container.
By default, the container being committed and its processes will be paused while the image is committed. This reduces the likelihood of encountering data corruption during the process of creating the commit. If this behavior is undesired, set the --pause option to false.
The --change option will apply Dockerfile instructions to the image that is created.
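For example, a minimal sketch of both options (the container and image names are placeholders):

docker commit --change "ENV DEBUG=true" mycontainer myaccount/myimage:debug
// creates a new image from the container's current state, applying the ENV instruction

docker commit --pause=false mycontainer myaccount/myimage:snapshot
// commits without pausing the running container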
Docker Tag
An image name is made up of slash-separated name components, optionally prefixed by a registry hostname. The hostname must comply with standard DNS rules, but may not contain underscores. If a hostname is present, it may optionally be followed by a port number in the format :8080. If not present, the command uses Docker’s public registry located at registry-1.docker.io by default. Name components may contain lowercase letters, digits and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.
A tag name must be valid ASCII and may contain lowercase and uppercase letters, digits, underscores, periods and dashes. A tag name may not start with a period or a dash and may contain a maximum of 128 characters.
You can group your images together using names and tags, and then upload them to a registry to share them via repositories.
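Putting those naming rules together, a sketch of tagging an image for a private registry (the hostname and name components are placeholders):

docker tag myimage:latest registry.example.com:5000/myteam/myimage:1.0.0
// registry hostname with port, slash-separated name components, and a tag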
Docker Cloud
When managing many containers, and if considering hosting them with cloud providers like AWS or Azure, Docker Cloud is a tool we can use to link to and manage those environments.
Docker Commands
List of common commands
Command | Description |
docker attach | Attach local standard input, output, and error streams to a running container |
docker build | Build an image from a Dockerfile |
docker checkpoint | Manage checkpoints |
docker commit | Create a new image from a container’s changes |
docker config | Manage Docker configs |
docker container | Manage containers |
docker cp | Copy files/folders between a container and the local filesystem |
docker create | Create a new container |
docker deploy | Deploy a new stack or update an existing stack |
docker diff | Inspect changes to files or directories on a container’s filesystem |
docker events | Get real time events from the server |
docker exec | Run a command in a running container |
docker export | Export a container’s filesystem as a tar archive |
docker history | Show the history of an image |
docker image | Manage images |
docker images | List images |
docker import | Import the contents from a tarball to create a filesystem image |
docker info | Display system-wide information |
docker inspect | Return low-level information on Docker objects |
docker kill | Kill one or more running containers |
docker load | Load an image from a tar archive or STDIN |
docker login | Log in to a Docker registry |
docker logout | Log out from a Docker registry |
docker logs | Fetch the logs of a container |
docker manifest | Manage Docker image manifests and manifest lists |
docker network | Manage networks |
docker node | Manage Swarm nodes |
docker pause | Pause all processes within one or more containers |
docker plugin | Manage plugins |
docker port | List port mappings or a specific mapping for the container |
docker ps | List containers |
docker pull | Pull an image or a repository from a registry |
docker push | Push an image or a repository to a registry |
docker rename | Rename a container |
docker restart | Restart one or more containers |
docker rm | Remove one or more containers |
docker rmi | Remove one or more images |
docker run | Run a command in a new container |
docker save | Save one or more images to a tar archive (streamed to STDOUT by default) |
docker search | Search the Docker Hub for images |
docker secret | Manage Docker secrets |
docker service | Manage services |
docker stack | Manage Docker stacks |
docker start | Start one or more stopped containers |
docker stats | Display a live stream of container(s) resource usage statistics |
docker stop | Stop one or more running containers |
docker swarm | Manage Swarm |
docker system | Manage Docker |
docker tag | Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE |
docker top | Display the running processes of a container |
docker trust | Manage trust on Docker images |
docker unpause | Unpause all processes within one or more containers |
docker update | Update configuration of one or more containers |
docker version | Show the Docker version information |
docker volume | Manage volumes |
docker wait | Block until one or more containers stop, then print their exit codes |
https://docs.docker.com/engine/reference/commandline/docker/#child-commands
Docker Compose Commands
List of common commands for docker-compose can be seen with the ‘-h’ help option:
build Build or rebuild services
bundle Generate a Docker bundle from the Compose file
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
https://docs.docker.com/compose/reference/overview/#command-options-overview-and-help
Docker and .Net
You should use .NET Core, with Linux or Windows Containers, for your containerized Docker server application when:
- You have cross-platform needs. For example, you want to use both Linux and Windows Containers.
- Your application architecture is based on microservices.
- You need to start containers fast and want a small footprint per container to achieve better density or more containers per hardware unit in order to lower your costs.
You should use .NET Framework (traditional) for your containerized Docker server application when:
- Your application currently uses .NET Framework and has strong dependencies on Windows.
- You need to use Windows APIs that are not supported by .NET Core.
- You need to use third-party .NET libraries or NuGet packages that are not available for .NET Core.
Containers are commonly used in conjunction with a microservices architecture, although they can also be used to containerize web apps or services that follow any architectural pattern. A microservice is meant to be as small as possible: to be light when spinning up, to have a small footprint, to have a small Bounded Context (check DDD, Domain-Driven Design), to represent a small area of concerns, and to be able to start and stop fast. For those requirements, you will want to use small and fast-to-instantiate container images like the .NET Core container image.
You might want to use Docker containers just to simplify deployment, even if you are not creating microservices. For example, perhaps you want to improve your DevOps workflow with Docker—containers can give you better isolated test environments and can also eliminate deployment issues caused by missing dependencies when you move to a production environment. In cases like these, even if you are deploying a monolithic application, it makes sense to use Docker and Windows Containers for your current .NET Framework applications.
Since .NET Core 2.1, all the .NET Core images, including those for ASP.NET Core, are available on Docker Hub in the .NET Core image repo:
https://hub.docker.com/r/microsoft/dotnet/
Note that we can use different dotnet images for development/build vs production. For example – microsoft/dotnet:2.1-sdk vs microsoft/dotnet:2.1-aspnetcore-runtime. The runtime image is smaller and optimized for production environments.
A single repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo covering multiple platforms (that is, Linux and Windows). For example, the microsoft/aspnetcore repository available in the Docker Hub registry provides support for Linux and Windows Nano Server using the same repo name. You can target an explicit platform by specifying a tag, as in the following cases:
- microsoft/aspnetcore:2.0.0-jessie = .NET Core 2.0 runtime-only on Linux
- microsoft/dotnet:2.0.0-nanoserver = .NET Core 2.0 runtime-only on Windows Nano Server
Note that as of this post date, .NET Core 2.0 is a Current release (not LTS) and has an expected end-of-life at the end of 2018. .NET Core 2.1 is expected to transition into LTS at the end of 2018, thereafter being supported by Microsoft for three years.
When using Visual Studio 2017, much of the Docker setup is automated for you. The general workflow for creating docker-based apps in Visual Studio is as follows.
- Set up the Visual Studio project – Visual Studio 2017 supports Docker out-of-the-box. Visual Studio simply automates the docker command execution through the IDE. The development environment must also have Docker CE installed for Visual Studio to run the docker commands.
- Set up the Dockerfile – each Visual Studio project or service must have its own Dockerfile. This can be done through the IDE by right-clicking on the project/solution -> “Add” -> “Docker Support”. This adds a Dockerfile at the project level and a docker-compose file at the solution level. New projects can be created with these Docker files by checking “Enable Docker Support” on project creation.
- For each of the services/containers we need to define the base image. When using Visual Studio, this is automatically done for you when creating the Dockerfile. It is built based on the project settings (.NET version, etc). On first execution the docker base image will be downloaded from Docker Hub.
- Set up the docker-compose.yml file. This file is in the root of the main solution and is used to configure the related services that are to be deployed for the whole application. The docker-compose.yml file specifies not only what containers are being used, but how they are individually configured.
- Build and Run the docker application. This can be done through the Docker CLI or Visual Studio. When running through Visual Studio we can support debugging through the IDE.
- Test the docker application.
Sample Project using Visual Studio 2017
In Visual Studio 2017 the Docker support is baked in and can be selected when creating new solutions. Docker support can also be added to existing solutions. The following is an example of a pre-baked docker file that is automatically generated for Web API type solutions.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY WidgetsCoreApi/WidgetsCoreApi.csproj WidgetsCoreApi/
RUN dotnet restore WidgetsCoreApi/WidgetsCoreApi.csproj
COPY . .
WORKDIR /src/WidgetsCoreApi
RUN dotnet build WidgetsCoreApi.csproj -c Release -o /app

FROM build AS publish
RUN dotnet publish WidgetsCoreApi.csproj -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WidgetsCoreApi.dll"]
The above Dockerfile is for the “WidgetsCoreApi” project. Also note that as of dotnet version 2.1, there is a new 2.1-alpine image available. This is a slimmed-down Linux image that is capable of running dotnet version 2.1.
When doing this in Visual Studio 2017, the IDE will also generate a docker-compose file for the whole solution. This file grows as new projects get added to the solution. An example is shown below.
version: '3.4'

services:
  widgetscoreapi:
    image: ${DOCKER_REGISTRY}widgetscoreapi
    build:
      context: .
      dockerfile: WidgetsCoreApi/Dockerfile
By default, Compose reads two files: a docker-compose.yml and an optional docker-compose.override.yml file. When you are using Visual Studio and enabling Docker support, Visual Studio also creates an additional docker-compose.ci.build.yml file for you to use from your CI/CD pipelines, like in VSTS.
By convention, the docker-compose.yml file contains your base configuration and other static settings. That means that the service configuration should not change depending on the deployment environment you are targeting. When targeting different environments, you should use multiple compose files; this lets you create multiple configuration variants depending on the environment.
You can have additional configuration, but the important point is that in the base docker-compose.yml file, you just want to set the information that is common across environments. Then in the docker-compose.override.yml or similar files for production or staging, you should place configuration that is specific for each environment. Usually, the docker-compose.override.yml is used for your development environment.
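As a sketch, an override file for development might look like the following (the service name matches the compose example above; the settings themselves are illustrative):

# docker-compose.override.yml
version: '3.4'

services:
  widgetscoreapi:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "8080:80"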
To use multiple override files, or an override file with a different name, you can use the -f option with the docker-compose command and specify the files. Compose merges files in the order they are specified on the command line. The following example shows how to deploy with override files.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
It is convenient, especially in production environments, to be able to get configuration information from environment variables. You reference an environment variable in your docker-compose files using the syntax ${MY_VAR}. The following line from a docker-compose.prod.yml file shows how to reference the value of an environment variable.
IdentityUrl=http://${ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP}:5105
The variable values are defined in an .env file. The docker-compose files support declaring default environment variables in the .env file; these act as default values, which can be overridden by the values you have defined in each of your environments (host OS or environment variables from your cluster). You place this .env file in the folder where the docker-compose command is executed from.
# .env file
ESHOP_EXTERNAL_DNS_NAME_OR_IP=localhost
ESHOP_PROD_EXTERNAL_DNS_NAME_OR_IP=10.121.122.92
Docker-compose expects each line in an .env file to be in the format <variable>=<value>.
Further Topics
Versioning
ASP.NET Core supports API versioning with minimal setup (via the Microsoft.AspNetCore.Mvc.Versioning package). We can use the following snippet:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddApiVersioning();
}
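Controllers can then declare which versions they support. A sketch (the controller and route are illustrative):

[ApiVersion("1.0")]
[Route("api/v{version:apiVersion}/[controller]")]
public class WidgetsController : Controller
{
    [HttpGet]
    public IActionResult Get() => Ok(new[] { "widget1", "widget2" });
}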
Database Connection Strings
Connection strings can be stored in each microservice inside a settings.json file (or other config file) – or better yet – in the docker-compose.yml file, letting the deployment tools configure them. In production environments, we should be using Docker Swarm secrets management to have these connection strings set as environment variables for the individual containers.
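As a sketch, a connection string passed to a service through docker-compose (the service name and values are illustrative):

services:
  ordering-api:
    environment:
      - ConnectionString=Server=sqldata;Database=OrderingDb;User Id=sa;Password=${DB_PASSWORD}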
Storing Secrets in Environment Variables and .NET Core Secret Manager
Secrets such as connection strings should be stored as environment variables on the machine being deployed to. The ASP.NET Core Secret Manager tool provides another method of keeping secrets out of source code. To use the Secret Manager tool, include a tools reference (DotNetCliToolReference) to the Microsoft.Extensions.SecretManager.Tools package in your project file. Once that dependency is present and has been restored, the dotnet user-secrets command can be used to set the value of secrets from the command line. These secrets will be stored in a JSON file in the user’s profile directory (details vary by OS), away from source code.
<PropertyGroup>
  <UserSecretsId>UniqueIdentifyingString</UserSecretsId>
</PropertyGroup>
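With the UserSecretsId in place, secrets can be set and listed from the command line. For example (the key and value here are illustrative):

dotnet user-secrets set "ConnectionStrings:DefaultConnection" "Server=localhost;Database=Widgets"
dotnet user-secrets list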
Unit Testing
With Docker Compose you can create and destroy an isolated test environment very easily with a few commands from your command prompt or scripts, like the following:
docker-compose up -d
./run_unit_tests
docker-compose down
To implement multiple tests for multiple containers, we can define them in the docker-compose.yml file that is used to deploy the application (or similar ones, like docker-compose.ci.build.yml) at the solution level, expanding the entry point to use dotnet test. By using another compose file for integration tests that includes your microservices and databases in containers, you can make sure that the related data is always reset to its original state before running the tests. The common test types are:
- Unit tests. These ensure that individual components of the application work as expected. Assertions test the component API.
- Integration tests. These ensure that component interactions work as expected against external artifacts like databases. Assertions can test component API, UI, or the side effects of actions like database I/O, logging, etc.
- Functional tests for each microservice. These ensure that the application works as expected from the user’s perspective.
- Service tests. These ensure that end-to-end service use cases, including testing multiple services at the same time, are tested. For this type of testing, you need to prepare the environment first. In this case, it means starting the services (for example, by using docker-compose up).
Continuous Deployment
Another benefit of Docker is that you can build your application from a preconfigured container, so you do not need to create a build machine or VM to build your application. You can use or test that build container by running it on your development machine. But what is even more interesting is that you can use the same build container from your CI (Continuous Integration) pipeline. This is based on the aspnetcore-build image, which can compile and build your whole application from within that container instead of from your PC. This can be done with the following command:
docker-compose -f docker-compose.ci.build.yml up
The difference between the docker-compose build and docker-compose up commands is that docker-compose up both builds the images and starts the containers.
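As a sketch, such a docker-compose.ci.build.yml file might look like the following (the service name, mount paths, and publish output folder are illustrative):

version: '3.4'

services:
  ci-build:
    image: microsoft/aspnetcore-build:latest
    volumes:
      - .:/src
    working_dir: /src
    command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o ./obj/Docker/publish"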
Hosted Services / Background Services
Background tasks and scheduled jobs are something you might eventually need to implement in a microservice-based application or in any kind of application. The difference when using a microservices architecture is that you can implement a single microservice process/container for hosting these background tasks, so you can scale it up/down as needed, or you can even make sure that it runs a single instance of that microservice process/container.
From a generic point of view, in .NET Core we call these types of tasks Hosted Services, because they are services/logic that you host within your host/application/microservice. Note that in this case, a hosted service simply means a class with the background task logic.
Note that in .NET Core 1.x and 2.0 we used IWebHost for these types of services, but those were for MVC or WebAPI based services. In .NET Core 2.1 the IHost was introduced for console-based apps. A WebHost (base class implementing IWebHost) in ASP.NET Core 2.0 is the infrastructure artifact you use to provide HTTP server features to your process, such as when you are implementing an MVC web app or Web API service. It provides all the new infrastructure goodness in ASP.NET Core, enabling you to use dependency injection, insert middlewares into the HTTP pipeline, etc., and precisely to use these IHostedServices for background tasks. A Host (base class implementing IHost), however, is new in .NET Core 2.1. Basically, a Host allows you to have similar infrastructure to what you get with WebHost (dependency injection, hosted services, etc.), but with a simpler and lighter process as the host, with nothing related to MVC, Web API, or HTTP server features.
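A minimal sketch of a hosted service, using the BackgroundService base class that ships with .NET Core 2.1 (the class name and the work interval are illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class CacheWarmupService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // loop until the host signals shutdown
        while (!stoppingToken.IsCancellationRequested)
        {
            // background task logic goes here
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}

It is registered in ConfigureServices with services.AddHostedService<CacheWarmupService>();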
API Gateway using Ocelot
Ocelot is a simple and lightweight API Gateway that you can deploy anywhere along with your microservices/containers. Note that we could have a single API Gateway for all microservices, or split the gateways across specific groups of microservices. For example, we could have a gateway for mobile users and a separate one for web users. For medium- and large-size applications, using a custom-built API Gateway product is usually a good approach, but not as a single monolithic aggregator or unique central custom API Gateway, unless that API Gateway allows multiple independent configuration areas for the multiple development teams with autonomous microservices.
The main functionality of an Ocelot API Gateway is to take incoming http requests and forward them on to a downstream service, currently as another http request. For instance, let’s focus on one of the ReRoutes in the configuration.json:
{
  "DownstreamPathTemplate": "/api/{version}/{everything}",
  "DownstreamScheme": "http",
  "DownstreamHostAndPorts": [
    {
      "Host": "basket.api",
      "Port": 80
    }
  ],
  "UpstreamPathTemplate": "/api/{version}/b/{everything}",
  "UpstreamHttpMethod": [ "POST", "PUT", "GET" ],
  "AuthenticationOptions": {
    "AuthenticationProviderKey": "IdentityApiKey",
    "AllowedScopes": []
  }
}
A single Docker container image with the Ocelot API Gateway can be used with a different configuration.json file for each set of services it is forwarding to. In microservice scenarios, authentication is typically handled centrally. If you are using an API Gateway, the gateway is a good place to authenticate. After authentication, ASP.NET Core Web APIs need to authorize access. This process allows a service to make APIs available to some authenticated users, but not to all. Authorization can be done based on users’ roles or based on custom policy, which might include inspecting claims or other heuristics.
public class AccountController : Controller
{
    public ActionResult Login()
    {
    }

    [Authorize]
    public ActionResult Logout()
    {
    }
}
Kubernetes Ingress and Ocelot API Gateway
As a definition, an Ingress is a collection of rules that allow inbound connections to reach the cluster services. An ingress is usually configured to provide services with externally-reachable URLs, load balancing, SSL termination, and more. Users request ingress by POSTing the Ingress resource to the API server. Clients still call the same base URL, but the requests are routed to multiple API Gateways or BFFs.
Note that the API Gateways are front-ends, or facades, only for the services or Web APIs, not for the web applications, and they might additionally hide certain internal microservices. The ingress, however, just redirects HTTP requests and does not try to hide anything.
Monitoring with Kubernetes
To monitor the availability of your microservices, orchestrators like Docker Swarm, Kubernetes, and Service Fabric periodically perform health checks by sending requests to test the microservices.
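At the Docker level, a health check can also be declared directly in the Dockerfile so that the engine or orchestrator probes the container periodically. A sketch (the /hc endpoint is illustrative and assumes curl is available in the image):

HEALTHCHECK --interval=30s --timeout=3s --retries=3 CMD curl -f http://localhost/hc || exit 1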
References
eShop OnContainers
https://github.com/dotnet-architecture/eShopOnContainers
.NET Containerized Applications
https://www.microsoft.com/net/download/thank-you/microservices-architecture-ebook
.Net Core 2.1 and Docker
https://github.com/dotnet/dotnet-docker/blob/master/samples/README.md
https://github.com/dotnet/dotnet-docker/blob/master/samples/aspnetapp/aspnetcore-docker-https.md
Keeping up with latest .NET Images
https://blogs.msdn.microsoft.com/dotnet/2018/06/18/staying-up-to-date-with-net-container-images/
.Net Microservices
https://blogs.msdn.microsoft.com/dotnet/2017/08/02/microservices-and-docker-containers-architecture-patterns-and-development-guidance/
Docker Hub .Net Core 2.1
https://hub.docker.com/r/microsoft/dotnet/
.Net Core for Multi-Container Projects
https://www.skylinetechnologies.com/Blog/Skyline-Blog/February_2018/how-to-use-dot-net-core-cli-create-multi-project
API Versioning by Scott Hanselman
http://www.hanselman.com/blog/ASPNETCoreRESTfulWebAPIVersioningMadeEasy.aspx
Docker Run vs Cmd vs Entrypoint
http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/