In this blog post we’ll explore Docker and some of its commands.  We’ll pay particular attention to RUN, CMD and ENTRYPOINT, and explore how some of them appear to offer similar functionality yet have subtle differences.

Whether you’re a CTO, technical lead or senior developer tasked with looking after a team, we’re sure you’ll find some valuable nuggets of information.  After you’ve finished reading this article, you’ll have an awareness of what Docker is and the benefits of implementing it, and you’ll be better informed about the commands we’ve just mentioned.

To give you an overview of the structure of this blog post, here are the main topics we’ll cover:


In a nutshell, Docker removes the endless environmental configuration conflicts that can occur when collaborating with co-workers and helps to reduce those “it works on my machine” conversations.

It can be used to run and maintain applications, side by side, in isolated containers.  Software development teams can use it to complement their existing agile and DevOps delivery practices and as a platform, Docker can run securely on Linux and Windows environments.

From a software developer’s perspective, there are many benefits to implementing Docker in your development lifecycle, so let’s explore some of these.

Benefits of Docker for Developers

Docker saves developers time by automating the repetitive tasks often associated with provisioning new development environments, allowing developers to focus on the task at hand, i.e. writing code.


Let’s explore these in more detail.

Pre-configured environment
If you’re a developer reading this, you’ll no doubt be familiar with the numerous configuration tasks associated with onboarding a new developer.  Docker saves you time by allowing you to define your operating system and any required software libraries in a Dockerfile.  Each time your environment is loaded, Docker ensures you have the relevant software and libraries needed by your application.

If you need to update the environment, all you need to do is update the container configuration and redeploy it.  This can save you time and money and, naturally, these configurations can also be shared with other developers and teams.
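As a sketch of what such a pre-configured environment might look like, here is a minimal Dockerfile – note that the base image tag, the package and the file names are illustrative assumptions, not a prescribed setup:

```dockerfile
# Illustrative Dockerfile: pin an OS image and declare what the app needs.
# Image tag, package and file names are hypothetical examples.
FROM ubuntu:22.04

# Install runtime dependencies in a single layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends ruby && \
    rm -rf /var/lib/apt/lists/*

# Copy the application into the image
WORKDIR /app
COPY . /app

# Default command when the container starts
CMD ["ruby", "app.rb"]
```

Checking a file like this into source control means every developer who runs docker build against it gets the same environment.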

Source Controlled

As a developer, you will be checking code in/out of your source control repository all day.  You do the same with your dockerfile.  One benefit of this is that your development environment is aligned with your code.

It works on my machine!
Most developers have experienced this at some point in their careers: you’re working with another developer and need to integrate their class library or component with your codebase.  You install it and it doesn’t work.  You let the developer know and the response is – “it works on my machine!”.
What ensues is the double if not triple checking of environmental paths and settings to ensure they match.  All of this takes time.
Docker reduces if not removes this headache as all required libraries are encapsulated within containers that can be shared.

A little like VMs
A Docker container behaves in a similar fashion to a virtual machine but runs a streamlined version of the operating system.  Whilst the container still uses the host machine’s resources, you can restrict resources such as CPU, RAM and HDD space.  Unlike virtual machines, however, a Docker container will just use the resources it needs, thereby allowing you to manage several containers on one host.

Continuous Integration (CI)
Continuous integration (or continuous delivery) on platforms such as Visual Studio Team Services allows you to trigger a build based on check-ins or run suites of automated tests.  In the Visual Studio Marketplace, you can download a Docker extension that allows you to factor Docker images into your existing agile DevOps CI workflow.

Microsoft .NET, Visual Studio Code, and Docker Extension
The Docker extension makes it easy to build your applications with Visual Studio and Visual Studio Code.  Just some of the benefits include:

  • Automatic Dockerfile and docker-compose.yml file generation
  • Syntax highlighting and hover tips for docker-compose.yml and Dockerfile files
  • IntelliSense (completions) for Dockerfile and docker-compose.yml files
  • Linting (errors and warnings) for Dockerfile files
  • Command Palette (F1) integration for the most common Docker commands (e.g. Build, Push)
  • Explorer integration for managing Images and Containers
  • Deploy images to the cloud by running the Azure CLI in a container

From a security point of view, Docker ensures that applications that are running on containers are segregated and isolated from each other.  No Docker container can “bleed” into processes running inside another container. From an architectural point of view, each container gets its own set of resources ranging from processing to network stacks.

Take me to the cloud
Most of the major cloud computing providers such as Amazon, Google and Microsoft have added support for Docker.  For example, Docker containers can be run on an Amazon EC2 instance or a Google Compute Engine instance.

The days of monolithic applications are coming to an end and developers are adopting service-oriented architecture and loosely coupled services that can be implemented and deployed independently of each other.  These building blocks can be assembled to form entire applications.  One could say we’ve entered the API economy from a software development perspective.

With Docker, you can select the best tool or stack for each service, the platform lets you isolate any potential conflicts and mitigates any risk of entering DLL hell.  Containers can be easily shared, deployed, updated independently and scaled instantly – regardless of the other services that make up your application.

The end to end security features help you implement and run services that execute on a “least privilege” model.  What this means is that services only get access to the resources they need – just at the right time.

In a nutshell
When an application is “dockerized”, all this complexity is encapsulated into containers that are easy to build, share and run.  No more time spent installing 3rd party software and data access components or explaining onboarding procedures to new starters.

Dockerfiles can make your development process simpler, any dependencies are pulled as packaged Docker Images and anyone with Docker and an editor installed can build and debug the application quickly.  What all of this means is that you’re able to reduce the time it takes to ship your software products thereby saving you time and money.

Now that we’ve covered some of the benefits for software developers, it’s time to discuss some of the key components before moving on to exploring the commands RUN, CMD and ENTRYPOINT.

What is a Docker Container?

Containers are a way to bundle software in a format that can be run in an isolated manner on a shared operating system.
Containers shouldn’t be confused with VMs, however; containers don’t contain an operating system installation.  They only contain the libraries and settings required to make the software application run as expected – regardless of where the application is deployed.

They are available for both Linux and Windows-based apps and “containerized” software will always run the same, regardless of the environment.

Containers isolate software from its surroundings, for example, differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.


Are Containers just like Virtual Machines?

You could be forgiven for thinking that Containers are just like Virtual machines, consider the following diagrams which illustrate the differences between a Container and Virtual Machine:


The observant amongst you will have noticed that a Docker container doesn’t contain the operating system.
The following descriptions, from the official Docker site, contain a further explanation of the differences:

Virtual Machine
Virtual machines run guest operating systems—note the OS layer in each box. This is resource intensive, and the resulting disk image and application state is an entanglement of OS settings, system-installed dependencies, OS security patches, and other easy-to-lose, hard-to-replicate ephemera.

Container
Containers can share a single kernel, and the only information that needs to be in a container image is the executable and its package dependencies, which never need to be installed on the host system. These processes run like native processes, and you can manage them individually by running commands like docker ps—just like you would run ps on Linux to see active processes. Finally, because they contain all their dependencies, there is no configuration entanglement; a containerized app “runs anywhere.”

What is a Docker Layer?

A Docker Layer (or image layer) is a modification to a Docker image – an intermediate image.  Each instruction you invoke (COPY, RUN, FROM etc.) in your Dockerfile builds on the previous image, thereby creating a new layer.

For example, consider the following:
FROM ubuntu             # This has its own number of layers, say "X"
MAINTAINER FOO          # This is one layer
RUN mkdir /tmp/foo      # This is one layer
RUN apt-get install vim # This is one layer

This would create a final image where the total number of layers will be X+3.  Another way to think of this is as if you were staging changes when using Git.

What are Docker Images?

An Image is the “template” which contains the application and any associated binaries or libraries that are needed to build a Container.  The image is built from a series of Dockerfile instructions.  Or, see the following description from the official site:

An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.

Every package that gets added to the Image is added as a new layer, and the Container is basically the running instance of the Image.

What is a Dockerfile?

We mentioned the Dockerfile in that last section, and you might be wondering what that is.  The simplest explanation is taken from the official Docker documentation site:

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

Here is an example of the structure of a Dockerfile:

# Comment
INSTRUCTION arguments

Instructions are not case-sensitive. However, convention (like SQL statements) is for them to be UPPERCASE to distinguish them from arguments more easily.  Here is an example of a simplistic Dockerfile:

# Comment
RUN echo 'we are running some # of cool things'

The Commands

Now that you have a basic understanding of what Docker is about and what some of the key components and concepts are, it’s time to discuss RUN, CMD and ENTRYPOINT.

If you were to condense these commands into one sentence for each, you could say the following about each of them:

RUN
With this, you can execute commands.  You need to specify an image to derive the container from.  This command is mainly used for installing new packages.


CMD sets the default command (and/or parameters) whilst also allowing you to overwrite commands, pass in or bypass default parameters via the command prompt when Docker runs.

ENTRYPOINT
This command ultimately allows you to identify which executable should be run when a container is started from your image.


You can find more detailed information regarding parameters on the Docker documentation site here, but for now, let’s dive straight into the commands and explore how you can invoke them and what the differences are.

Shell and Exec

We should point out first that commands can be specified in two forms:

  • shell
  • exec

It’s important to understand these forms before we get into the detail of each command.

Shell form

<your instruction> <your command>

Here are some examples of this:

RUN apk --update add ruby

CMD echo "Hello world"

ENTRYPOINT echo "Hello world"

When a command is executed in its shell form, under the hood, the following is called:

/bin/sh -c <command>

and regular shell processing occurs.  Consider the following Dockerfile:

ENV name Connor

ENTRYPOINT echo "Hello, $name"

If the Docker Container is invoked using the following syntax:

docker run -it <image>

The following output will be generated:

Hello, Connor

You can see the variable “name” (referenced in the ENTRYPOINT echo statement) has simply been replaced with the value “Connor”.
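You can reproduce this shell-form processing outside Docker with any POSIX shell – a quick sketch (the variable name matches the Dockerfile example above):

```shell
# Docker's shell form effectively runs: /bin/sh -c '<command>'
# An exported variable is visible to the child shell, which expands it.
export name=Connor
/bin/sh -c 'echo "Hello, $name"'
# -> Hello, Connor
```

The single quotes stop the outer shell from expanding $name itself; the substitution happens inside the child shell, exactly as it does inside the container.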

Exec Form
The syntax for this is:

<instruction> ["executable", "param1", "param2", …]

And here are some examples:

RUN ["apk", "add", "ruby"]
CMD ["/bin/echo", "Hello world"]
ENTRYPOINT ["/bin/echo", "Hello world"]

When a command is invoked using the exec form, the executable is called directly – shell processing does not occur.
Consider the following example in an imaginary Dockerfile:

ENV name Connor
ENTRYPOINT ["/bin/echo", "Hello, $name"]

When the container is invoked using the following:

docker run -it <image>

This time, the output is different.  You can see the variable/parameter name hasn’t been substituted.
Hello, $name
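To summarise the difference, here are both forms side by side in one hypothetical Dockerfile, with comments indicating the output you would see at run time:

```dockerfile
ENV name Connor

# Shell form: wrapped in /bin/sh -c, so $name is expanded -> "Hello, Connor"
# ENTRYPOINT echo "Hello, $name"

# Exec form: /bin/echo is called directly, no shell -> "Hello, $name" (literal)
ENTRYPOINT ["/bin/echo", "Hello, $name"]
```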

Now that we’ve explained important concepts and technologies, we can move onto the commands themselves.


RUN

If you’re new to Docker, this is probably one of the first commands you need to learn.  In a Dockerfile, RUN executes commands while an image is being built. It’s also used to install new packages (if you’re a .NET developer, you can think of this as installing a NuGet package) on top of the operating system distribution.
When invoking RUN, your command gets executed and a new layer gets created.  You can invoke the RUN command using either the shell or exec form (we discussed both techniques earlier).
One example that demonstrates how the RUN command can be used is to install multiple packages onto your Docker image:

RUN apk update && \
    apk add ruby \
            ruby-json \
            ruby-nokogiri
In the above code snippet, the following commands are all executed within the context of a single RUN command:

  • apk update
  • apk add

This ensures that the latest packages will be installed.  Otherwise, if apk update and apk add were invoked via discrete RUN commands, Docker could reuse the cached layer created by apk update – and that layer may be too old, giving you stale packages.  Something to be mindful of.
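A sketch of the two approaches side by side (package names as in the snippet above):

```dockerfile
# Preferred: update and install share one RUN, so one cached layer
RUN apk update && \
    apk add ruby ruby-json ruby-nokogiri

# Risky alternative: on a rebuild, Docker may reuse the cached
# "apk update" layer, leaving the package index stale
# RUN apk update
# RUN apk add ruby ruby-json ruby-nokogiri
```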

Next up is CMD and ENTRYPOINT.  On the surface of it, both commands look like they offer the same functionality, i.e. executing commands at container run-time, but upon closer inspection, you’ll see they perform completely different tasks, so let’s explore them!


CMD

CMD allows you to configure a default command with default parameters that are invoked at runtime, and any command invoked via CMD is done so via sh -c.

What does this mean?
Consider the following Dockerfile, in which we tell the container to send two ping requests to an IP address:

FROM busyserver
CMD ["ping", "-c", "2", ""]

The following command builds the image:

docker build -t cmd-example .

When you run the container, two pings are sent to the IP address:

[admin@server tmp]# docker run --restart=always -i -t cmd-example

PING ( 56 data bytes
64 bytes from seq=0 ttl=54 time=6.163 ms
64 bytes from seq=1 ttl=54 time=6.044 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 6.044/6.103/6.163 ms

If you’re curious, you can also see sh -c invoking the ping command by looking at the output of docker ps:

CONTAINER ID        IMAGE                           COMMAND
06ae12e66ac9        cmd-example                     "/bin/sh -c '\"ping\" \""

Next up is ENTRYPOINT.  We’ll stick with the same ping example for continuity and to illustrate the differences from CMD.



ENTRYPOINT

As before, in our Dockerfile, we instruct a ping to an IP address to execute at runtime.  We then pass in parameters via CMD – in this example, parameters that will instruct the container to run the ping command twice.

FROM busyserver
ENTRYPOINT ["/bin/ping"]
CMD ["", "-c", "2"]

Again, we build the image:

docker build -t entrypoint-example .

Then we run it – and this is the key difference between CMD and ENTRYPOINT.  If you run the container as normal using the following command:

[admin@server tmp]# docker run --restart=always -i -t entrypoint-example

You’ll get the expected results:

PING ( 56 data bytes
64 bytes from seq=0 ttl=56 time=4.671 ms
64 bytes from seq=1 ttl=56 time=4.465 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 4.294/4.425/4.671 ms

This is where things differ: by using ENTRYPOINT, you can pass in alternative arguments, thereby overriding the arguments set via CMD.

[admin@server tmp]# docker run --restart=always -i -t entrypoint-example -c 2

Which returns:

PING ( 56 data bytes
64 bytes from seq=0 ttl=54 time=6.239 ms
64 bytes from seq=1 ttl=54 time=5.874 ms
--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.874/6.056/6.239 ms

This time, when you look at the running container via docker ps, you’ll see that sh -c is no longer being used – instead, the command is the one defined by ENTRYPOINT.

CONTAINER ID        IMAGE                           COMMAND

06ae12e66ac9        entrypoint-example              “/bin/ping”
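A common pattern follows from this: use ENTRYPOINT to fix the executable and CMD to supply default, overridable arguments.  A minimal sketch, assuming the busybox base image and using localhost as an illustrative placeholder target:

```dockerfile
FROM busybox
# The executable is fixed...
ENTRYPOINT ["/bin/ping"]
# ...while these defaults can be replaced at `docker run` time
CMD ["-c", "2", "localhost"]
```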

So, there you have it: RUN, CMD and ENTRYPOINT – how to invoke them, their purpose, and the differences between CMD and ENTRYPOINT.

The Future for Docker

It’s hard to say what the future holds for Docker as it’s a relatively young software project, and other tools such as Vagrant offer rival approaches to reproducible environments.  That said, competition can be healthy and often results in better products and services for the community.


In this blog post, we’ve run through Docker, we’ve discussed Containers, Images, Dockerfiles and the benefits of using Docker for developers.

We’ve also discussed some of the commands, their similarities, and detailed some of their differences.

We hope you’ve enjoyed reading this article and that it’s given you an insight as to how Docker could benefit your existing projects or processes.

If you want to explore Docker or learn more, you can find more information at the following URLs:

Main Docker Site

Running a .NET Core app in a Docker container

.NET and Docker

Docker and Linux Containers

Best Practices for writing Dockerfiles

Jamie Maguire

Software Architect | Consultant | Developer | Tech Author
