Hands-on introduction

On GitHub: npalm / docker-introduction

Created by Niek Palm (2016)

  • Containers
  • Docker
  • Hands-on
  • Docker tools

Containers

The challenge

Results in an NxN compatibility nightmare

Cargo transportation Pre 1960 ...

Also an NxN Matrix

Solution: Containers

Docker is a Container System for Code ...

Why containers matter

  • Content
  • Hardware agnostic
  • Content isolation
  • Automation
  • Efficient
  • Separation of concerns

VM vs Containers

Docker

Docker architecture


The Life of a Container

Conception

BUILD an Image from a Dockerfile

Birth

RUN (create+start) a container

Reproduction

COMMIT (persist) a container to a new image
RUN a new container from an image

Sleep

STOP or KILL a running container

Wake

START a stopped container

Death

RM (delete) a stopped container

Extinction

RMI a container image (delete image)

The Life of a Container by example.

 docker pull mongo:latest     # pull the mongo image from the registry
 docker inspect mongo:latest  # list information of the container
 docker run -p 27017:27017 \
        --name my-mongo -d \
        mongo:latest          # create and start a mongo container
 docker inspect my-mongo      # inspect the running container info
 docker logs -f my-mongo      # tail the log of the container
 docker stop my-mongo         # stop the container
 docker rm -v my-mongo        # remove the container
 docker rmi mongo:latest      # remove the image from the local repo

Docker stages

Information about a container.

command   description
ps        shows running containers
logs      fetch the logs of a container
inspect   return low-level information on a container or image
events    get events from a container
port      list port mappings for the container
top       display the running processes of a container
stats     display a live stream of container(s) resource usage statistics

Dockerfile

Each Dockerfile is a script, composed of various commands and arguments listed successively to automatically perform actions on a base image in order to create a new one. Such scripts are used for organizing things and greatly help with deployments by simplifying the process from start to finish.

Dockerfile by example

FROM dockerfile/java:oracle-java8
MAINTAINER Niek Palm <dev.npalm@gmail.com>

RUN apt-get install git -y

ADD service.jar /service.jar

EXPOSE 8080

CMD ["java","-jar","/service.jar"]

Hands on Labs

Prerequisites

Some hints

  • You will find the slides here: http://npalm.github.io/docker-introduction/
  • Navigation through the slides is easy: just use your arrow keys. Left and right go to the previous or next section, up and down to the previous or next page. The space bar always gives you the next page.
  • All code blocks are meant to be executed by yourself unless mentioned otherwise.
  • Have fun, take your time, and feel free to ask questions.

Tmux cheatsheet - Just handy

Tmux lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal.

Ctrl-b c      : creates a new window
Ctrl-b n      : go to next window
Ctrl-b p      : go to previous window
Ctrl-b "      : split window top/bottom
Ctrl-b %      : split window left/right
Ctrl-b Alt-1  : rearrange windows in columns
Ctrl-b Alt-2  : rearrange windows in rows
Ctrl-b arrows : navigate to other windows
Ctrl-b d      : detach session
tmux attach   : reattach to session

Hands on Labs

Prerequisites - AWS

Docker

  • For the workshop we use an AWS instance.
  • Alternatively you can run the hands-on labs on your local machine
    • Linux : Install docker, docker-compose, git and your favourite editor
    • Mac / Windows : Install docker toolbox (includes git and virtual box)
    • VM : See the section Prerequisites - Vagrant

Linux/Mac Users: How to login to an AWS instance

  • Windows users: skip this slide
  • Download a key file (.pem) to access the AWS instances. 'Save link as' here to download the key and save it locally.
  • Open a terminal
  • Your key must not be publicly viewable for SSH to work.
chmod 400 <path>/HANDSON.pem
  • Connect via ssh; add ServerAliveInterval=120 as an option to prevent ssh timeouts.
 ssh -o ServerAliveInterval=120 -i <path>/HANDSON.pem ubuntu@<ip-aws-instance>

Windows Users: How to login to an AWS instance

  • Download a ppk file (.ppk) needed to access the AWS instances. Click here to download the key and save it locally.
  • Download putty
  • Start putty and enter the host ip
  • Select 'SSH -> Auth' on the left hand side, browse and select the downloaded private key file.
  • Select 'Data' on the left hand side and under 'Auto-login username' enter the user ubuntu.
  • To prevent ssh timeout, select 'Connection' and under 'Sending of null packets to keep session active' set 'Seconds between keepalives (0 to turn off)' to 120.
  • Go back to the top and connect to your instance.

Hands on Labs

Prerequisites - Vagrant

Environment Introduction

  • We use a VirtualBox VM to setup a common environment.
  • We use Vagrant to automate the setup of the VM
  • The VirtualBox appliance contains
    • Ubuntu 14.04 Desktop
    • Tools: Docker, Docker Compose and Git (among others)
    • User: vagrant, Password: vagrant

Installation

  • Download and install VirtualBox and Vagrant
  • Once VirtualBox and Vagrant are installed, open a terminal
    • Create a new directory.
      mkdir docker-introduction
      cd docker-introduction
      
    • Initialize vagrant; the box is downloaded when not available locally.
      vagrant init npalm/ubuntu-1404-dev-desktop
      
    • Start the virtual environment for the workshop.
      vagrant up --provider virtualbox
      
  • We now have an Ubuntu Linux VM running with all the packages needed for the workshop. The default user is vagrant with password vagrant.

Vagrant basics

  • Below are some useful vagrant commands.
vagrant up               # Starts the VM
vagrant halt             # Stops the VM
vagrant destroy          # Removes the VM
vagrant box list         # Shows all the local vagrant boxes
vagrant box remove <id>  # Removes a box

Hands on Lab 1

Hello world

Some notes

  • All steps assume you have a command line open.
    • Almost all steps formatted as code can be executed.
      • \ is a line break and can be copied and pasted.
      • < ... > indicates you should replace the value
  • Since the VM includes a local registry cache, some steps are slightly different for a VM. These steps are prefixed with [VM].

Docker basics

  • Below are some of the basic docker commands
docker help

# Partial output
Commands:
    images    List images
    logs      Fetch the logs of a container
    ps        List containers
    pull      Pull an image or a repository from a Docker registry
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    start     Start a stopped container
    stop      Stop a running container

Run 'docker help' for all commands.
Run 'docker COMMAND --help' for more information on a command.

Pull an image

  • First we pull a base image. We use ubuntu 14.04 latest as base. See Ubuntu repo on docker registry
  • [VM]: The VM already contains the images.
    docker pull ubuntu
    
  • Now we have the image of ubuntu in our local repository, verify with the command:
    docker images
    
  • [VM]: You will see many images since all images are pre-fetched.

Start a docker container

  • Time for hello world.
  • With the next command you start an ubuntu container and execute the command to echo some famous string.
    docker run ubuntu echo "hello world"
    
  • Running the command above creates, starts and exits the ubuntu container.
  • Observe the output with commands below, remember you can get help by executing docker help or docker help ps
    docker ps
    docker ps -a
    
  • Remove the container
    docker rm <id or name>
    

Start a docker container

  • Start a container as a daemon which prints the string "Hello world" every second.
    docker run -d --name mycontainer ubuntu /bin/sh -c \
     "while true; do echo Hello world; sleep 1; done"
    
  • Inspect the logging
    docker logs -f mycontainer
    
  • Hit ctrl-c to exit the logging
  • Stop the container
    docker ps
    docker stop mycontainer
    docker ps
    docker ps -a
    

Update a docker image

  • Changes made in a container are persisted only in that container. The moment the container is destroyed, the changes are lost too.
  • Committing the changes made in a container to an image persists the changes.
# Start our container
docker start mycontainer

# Exec the bash shell, this command gives access to our container
docker exec -i -t mycontainer /bin/bash

## You should see now something like:
> root@<id>:/# _

Update a docker image

  • Next we are going to
    • update the repositories
    • install the game cowsay
    • create a symbolic link
    • clean up
  • The next command is a composite of all these actions.
apt-get update && apt-get install cowsay -y && \
  ln /usr/games/cowsay /usr/bin/cowsay && rm -rf /var/lib/apt/lists/*

Update a docker image

  • Test the game is working.
    cowsay "Hello world"
    
  • Next we exit the container.
    exit
    
  • Our installed game is now available in the docker container with the name mycontainer, but not in the image that was used to create the container.
  • You can now execute the cowsay command in the same way as running the bash shell.
    docker exec -i -t mycontainer cowsay "Hello <name>"
    

Update a docker image

  • Commit your changes in the container to a (new) image.
    docker commit mycontainer <yourname>/ubuntu
    
  • Inspect your changes.
    docker diff mycontainer           # shows the added files
    docker history ubuntu             # shows the image history
    docker history <yourname>/ubuntu  # shows the image history
    
  • Remove the container.
    docker stop mycontainer \
         | xargs docker rm          # remove the container
    

Update a docker image

  • Now create a new container based on the newly created image and run the game.
    docker run --rm <yourname>/ubuntu cowsay "Hello world"
    
  • The next command shows that the game is not available in the ubuntu image.
    docker run --rm ubuntu cowsay "Hello world"
    
  • You can push your changes to the docker registry, for which you need to create your own repository. But remember: it is bad practice to push manually built images into a repository.

Hands On Lab 2

Building an image

Docker building your own webserver

  • Below are some of the docker commands
docker help

# Partial output

Commands:
    build     Build an image from a Dockerfile
    commit    Create a new image from a container's changes
    info      Display system-wide information
    inspect   Return low-level information on a container or image
    login     Register or log in to a Docker registry server
    logout    Log out from a Docker registry server
    port      Lookup the public-facing port that is NAT-ed to
              PRIVATE_PORT
    push      Push an image or a repository to a Docker registry server
    tag       Tag an image into a repository
    top       Lookup the running processes of a container

Building our own webserver

  • In this hands-on lab we will build our own webserver image that hosts some static pages.
  • Steps
    • Create some static content
    • Create a Dockerfile
    • Build a docker image
    • Run the image

Create some static content

  • Create an empty dir.
    mkdir lab2-web
    cd lab2-web
    
  • Add some static content, for example create a file index.html with some content.
<!DOCTYPE html>
<html><head>
<meta charset="UTF-8">
<title>Hello world</title></head>

<body>Hello world</body>

</html>

The Dockerfile

With a Dockerfile you specify how an image is built, which files are added, and which command should be executed when the container is started.

Build docker image

Create a file named Dockerfile and add the following content

FROM nginx
MAINTAINER <your name> <your.mail@domain.ext>

COPY index.html /usr/share/nginx/html/

Build docker image

  • Building an image based on a parent image creates only the layers that differ from the parent.
    • For building the image, you should specify a repository and tag.
    • Specify "lab2/webapp" as repository and leave the tag empty.
      docker build --tag lab2/webapp .
      
  • Check the result
    docker images
    

Run the image

  • First we have a look at the description of the image. Here you will see two ports are exposed: 80 and 443. We will use port 80 and map it to 8888. Start the container as a daemon.
    docker run -d --name myapp -p 8888:80 lab2/webapp
    
  • Test with a browser or curl. You have to point your browser to the host of the docker engine.
    • AWS: use your AWS instance ip address.
    • Mac or Windows: use the ip address of docker-machine.
    • Linux native: localhost
  • Clean up
    docker stop myapp | xargs docker rm
    

Mapping ports

  • When automating deployments it does not work if you have to decide at design time which port to claim on the host.
  • You can let docker decide which port to claim by leaving the host side of the mapping empty: docker run -d -p 80 .... The result of this command is the container id. With the command docker port <id> you can find the mapped port.
  • The next command combines all previous actions.
    docker port $(docker run -d --name myapp -p 80 lab2/webapp) | \
      cut -d\> -f2 | \
      xargs curl
    
  • Clean up
    docker stop myapp | xargs docker rm
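
The pipeline above depends on parsing the output of docker port. A standalone sketch of just that parsing step, using a hard-coded sample line instead of a live container (the host port 32768 is an invented value; Docker assigns it at run time):

```shell
# Hypothetical output of `docker port <id>` for a container started
# with -p 80 (the host port is assigned by Docker, value invented):
sample="80/tcp -> 0.0.0.0:32768"

# Keep everything after the '->' arrow: the host address and port.
addr=$(echo "$sample" | cut -d\> -f2)

# xargs trims the leading whitespace before handing the address to
# the next command (curl in the slide); here we just echo it.
echo "$addr" | xargs echo   # prints 0.0.0.0:32768
```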
    

Automated build

This section is optional

  • If there are less than 20 minutes to go, read the next slide and continue with lab 3.

Automated build

  • In lab 1 we built a docker image manually. In the first part of the second lab we automated our build using a Dockerfile. The next step is to automate the process as a whole.
  • Docker Hub provides automated builds. The next steps will guide you through setting up the build.
    • Create a git repo on GitHub or BitBucket; create an account if you don't have one.
    • Create a dockerhub account if you do not have it yet.
    • Create an automated build on dockerhub.
    • Push the source code to your git repo.

Automated build (GIT)

  • Create a GitHub or BitBucket account (or use an existing one). We use a public git repo to host our code.
  • Create a new repository lab2-web.

Automated build (DockerHub)

The next step is to automate the build.

  • Go to Docker Hub (https://hub.docker.com/) and create an account.
  • Connect your GitHub (or BitBucket) to your Docker Hub account.
  • Create an automated build (top menu).
  • Use as name for the docker hub repo: lab2-web and choose create.
  • Next go to build settings:
    • Set Name to master, should be the default.
    • Set Docker Tag Name to latest, should be the default.
  • Click save changes.

Automated build

  • Commit and push your sources to the created git repo to trigger a build.
# Ensure you are in the directory lab2-web
echo '# lab2-web' >> README.md
git init
git add --all
git commit -m "Some comment"
git remote add origin <your git url>
git push -u origin master

Automated build

  • Observe the output on the build page on Docker Hub. Once the build is done, create a container based on your newly built image.
  • You could also trigger a build with a HTTP post. See the instructions on the build settings page.
docker run -d --name myapp -p 8888:80 \
    <docker-hub-account>/lab2-web

Same same but different

docker run -d -p 8888:80 --name myapp -v \
  <dir-to-webapp>:/usr/share/nginx/html nginx

Hands On Lab 3

Networking

Docker networking topology

  • none: no networking
  • bridge: each container has its own network stack (the default)
  • joined: containers share a single network stack
  • host: the container uses the host's network stack

Docker networking topology by Example

# Run a container with no network
docker run --rm --net none busybox:latest ifconfig

# Run a container in a bridged network
docker run --rm --net bridge busybox:latest ifconfig

# or (bridge is the default)
docker run --rm busybox:latest ifconfig

# joined
docker run --name joined1 -d --net none busybox:latest \
  nc -l 127.0.0.1:3333
docker run --rm -it --net container:joined1 busybox:latest netstat -al

# host
docker run --rm --net host busybox:latest ifconfig

Container linking

  • Docker has a linking system that allows you to link multiple containers together and send connection information from one to another.
  • When containers are linked, information about a source container can be sent to a recipient container.
  • To establish links, Docker relies on the names of your containers.
  • First we create a container for our database.
    # EXAMPLE ONLY
    docker run -d --name postgres <image> <command>
    
  • Secondly we link our database to our web container
    # EXAMPLE ONLY
    docker run -d --link postgres:db --name web <image> <command>
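
Behind the scenes, --link injects the source container's connection details into the recipient as environment variables named after the alias (db above, uppercased). A plain-shell illustration of that naming convention, with invented addresses and no Docker involved:

```shell
# For --link postgres:db Docker would set variables like these inside
# the web container (address and port here are invented values):
DB_PORT="tcp://172.17.0.2:5432"
DB_PORT_5432_TCP_ADDR="172.17.0.2"
DB_PORT_5432_TCP_PORT="5432"

# The application reads them instead of hard-coding an address:
echo "database at ${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}"
# prints: database at 172.17.0.2:5432
```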
    

Building a cluster

Next we build a simple cluster containing:

  • One node acting as proxy, nginx is used as proxy.
  • Three nodes acting as web server, nginx is used as web server.
  • One node acting as data store, redis is used as key value store.

Building a cluster - getting sources

Clone the following git repo.

cd
git clone https://github.com/npalm/simple-docker-cluster.git
cd simple-docker-cluster

Building a cluster - data store

  • For the data store we use redis, a fast in memory key store.
  • We can build a redis image ourselves or use the official one. An example is available in the redis directory.
# Search for the official redis image in the docker registry
docker search redis

# download the image
docker pull redis

# inspect the image and look for the volumes listed
docker inspect redis
  • Using docker inspect you should be able to find a section that describes the volumes. This is where the container is supposed to store its persistent data.

Building a cluster - data store

  • The volume for the redis container, /data, is the place where the container stores its data. If we do not specify the volume explicitly, docker will create one for us: a docker-managed volume.
# start a redis container
docker run -d --name redis redis

# find the volume name and list the volumes.
docker inspect --format='{{range .Mounts}}{{.Name}}{{end}}' redis
docker volume ls

# remove the redis container, -v will remove the volume as well.
docker rm -v -f redis
  • For our redis container we mount a volume explicitly. We mount a directory from the host directly into the container.
# start the data store
mkdir .data && \
docker run -d --name redis -v $(pwd)/.data:/data redis

Building a cluster - web layer

  • We use node.js as web server.
  • Have a look at the Dockerfile in the web directory. We use the official node image as base image.
# Build the image
docker build -t lab3/web web

# Start the container; exposing the port is optional
docker run -d -p 8080:8080 --link redis:redis --name web1 lab3/web

# Test
curl http://localhost:8080

# Inspect logging
docker logs -f web1

# Add two more nodes
docker run -d --link redis:redis --name web2 lab3/web
docker run -d --link redis:redis --name web3 lab3/web
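
The Dockerfile in the web directory itself is not reproduced on these slides. A minimal sketch of what a node.js web image like this could look like (file names, paths, and the install step are assumptions, not the repo's actual file):

```dockerfile
# Sketch only - based on the official node image, as the slide states
FROM node

# Copy the application sources into the image (paths are assumptions)
COPY . /app
WORKDIR /app

# Install the dependencies declared in package.json
RUN npm install

# The app listens on 8080, matching the -p 8080:8080 mapping above
EXPOSE 8080
CMD ["node", "server.js"]
```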

Building a cluster - proxy layer

  • We use nginx as proxy server.
  • Have a look at the nginx configuration file in the proxy directory.
  • We build the proxy server on top of the official docker image. Have a look at the Dockerfile.
# Build the image
docker build -t lab3/proxy proxy

# Start the container.
docker run -d -p 80:80 --name proxy \
  --link web1:web1 --link web2:web2 --link web3:web3 lab3/proxy

# Test your cluster and inspect the logging
# Open a new terminal
docker logs -f proxy

# Back to terminal one and fire some requests (or use the browser)
for i in {0..99}; do curl http://localhost; echo ""; done
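
The nginx configuration file in the proxy directory is likewise not shown here. A rough sketch of a round-robin setup over the three linked web containers (a guess at the shape, not the repo's actual file; the host names web1..web3 resolve via the --link entries):

```nginx
# Pool of backend servers; web1..web3 are the linked container names
upstream webapp {
    server web1:8080;
    server web2:8080;
    server web3:8080;
}

server {
    listen 80;
    location / {
        # Forward every request to the upstream pool (round robin)
        proxy_pass http://webapp;
    }
}
```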

Clean up

  • docker ps shows you the containers
  • docker rm removes containers
# -v removes implicitly mounted volumes
# -f forces removal of running containers
# -q shows ids only
# -a shows all containers (also stopped ones)
docker rm -v -f $(docker ps -q -a)

Same same but different

# have a look at the compose file
cat docker-compose.yml

# build the containers
docker-compose build

# start the containers
docker-compose up

# execute in a new terminal window
for i in {0..99}; do curl http://localhost; echo ""; done

Docker tools

Docker toolbox

  • The Docker Toolbox is an installer to quickly and easily install and setup a Docker environment on your computer.
  • Available for both Windows and Mac, the Toolbox installs Docker Client, Machine, Compose (Mac only) and Kitematic.

Docker toolbox

Docker machine

Docker Machine makes it really easy to create Docker hosts on your computer, on cloud providers, and inside your data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

Docker compose

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, and then you spin your application up in a single command which does everything that needs to be done to get it running.

Docker compose

gitlabdb:
  image: postgres
  environment:
    - POSTGRES_USER=gitlab
    - POSTGRES_PASSWORD=password
gitlab:
  image: sameersbn/gitlab
  links:
    - gitlabdb:postgresql
  environment:
    - DB_USER=gitlab
    - DB_PASS=password
  ports:
    - "10080:80"
  volumes:
    - ./data/gitlab/data:/home/git/data

Docker swarm

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host.

What about Microsoft?

Container solution on Azure

THANKS

docker run -p 80:80 npalm/docker-introduction
