

On GitHub: amorena/ose-enablement

OpenShift Enablement

Understanding the Product in a way to sell it

Created by Jürgen Hoffmann / @jbossguy

and Cyrus Moazami-Vahid / cmoazami@redhat.com

Agenda

OpenShift v3 Changes

  • What is new in OpenShift v3
  • How is container technology changing IT
  • How is OpenShift different from Kubernetes alone
  • What is the main advantage of OpenShift v3

Pitching OpenShift v3

  • OpenShift as Cloud Application Transformation
  • OpenShift scenarios
  • Application Cloud Readiness
  • Infrastructure Cloud Readiness
  • Role of Services

OpenShift v3 Demo

OpenShift v3

An evolution that will change everything

What is Docker

  • Designed to deliver applications faster
  • Separates Application from Infrastructure
  • Provides Tools to put Applications into Containers
  • Image Registry to host and distribute images
    • Locally or in the Cloud
Docker is an open platform for developing, shipping, and running applications. Docker is designed to deliver your applications faster. With Docker you can separate your applications from your infrastructure AND treat your infrastructure like a managed application. Docker helps you ship code faster, test faster, deploy faster, and shorten the cycle between writing code and running code.

Docker does this by combining a lightweight container virtualization platform with workflows and tooling that help you manage and deploy your applications. At its core, Docker provides a way to run almost any application securely isolated in a container. The isolation and security allow you to run many containers simultaneously on your host. The lightweight nature of containers, which run without the extra load of a hypervisor, means you can get more out of your hardware.

Surrounding the container virtualization are tooling and a platform which can help you in several ways:

  • getting your applications (and supporting components) into Docker containers
  • distributing and shipping those containers to your teams for further development and testing
  • deploying those applications to your production environment, whether it be in a local data center or the Cloud
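
As a quick, hands-on illustration of that isolation, here is a minimal sketch: it starts an interactive shell inside a container. The centos image is just an example; Docker pulls it automatically if it is not already on the host.

# Start a container from the centos image and open a shell inside it;
# the processes, filesystem, and network you see are isolated from the host
$ sudo docker run -i -t centos /bin/bash

# Typing exit inside the container returns you to the host shell, leaving the host untouched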

What is Docker's Use Case

  • Faster delivery of your applications
  • Deploying and scaling more easily
  • Achieving higher density and running more workloads
Docker is perfect for helping you with the development lifecycle. Docker allows your developers to develop on local containers that contain your applications and services. It can then integrate into a continuous integration and deployment workflow. For example, your developers write code locally and share their development stack via Docker with their colleagues. When they are ready, they push their code and the stack they are developing onto a test environment and execute any required tests. From the testing environment, you can then push the Docker images into production and deploy your code.

Docker's container-based platform allows for highly portable workloads. Docker containers can run on a developer's local host, on physical or virtual machines in a data center, or in the Cloud. Docker's portability and lightweight nature also make dynamically managing workloads easy. You can use Docker to quickly scale up or tear down applications and services. Docker's speed means that scaling can be near real time.

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines. This is especially useful in high-density environments: for example, building your own Cloud or Platform-as-a-Service. But it is also useful for small and medium deployments where you want to get more out of the resources you have.
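
To give that dev-to-prod workflow shape, here is a hypothetical end-to-end sketch; the image name, tag, and port are placeholders, and the registry steps assume you are logged in to one:

# Build the application image on the developer's machine
$ sudo docker build -t yourname/myapp:1.0 .

# Push it to a registry so other environments can pull the exact same bits
$ sudo docker push yourname/myapp:1.0

# On a test or production host: pull and run the identical image
$ sudo docker pull yourname/myapp:1.0
$ sudo docker run -d -p 8080:8080 yourname/myapp:1.0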

Major Components

Docker has two major components

  • Docker: the open source container virtualization platform
  • Docker Hub: Docker's Software-as-a-Service platform for sharing and managing Docker images.

Docker Architecture

The Docker daemon: the Docker daemon runs on a host machine. The user does not interact with the daemon directly, but through the Docker client.

The Docker client: the Docker client, in the form of the docker binary, is the primary user interface to Docker. It accepts commands from the user and communicates back and forth with a Docker daemon.
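
To see the client/daemon split in practice, a minimal sketch; the remote address is a placeholder and assumes a daemon listening on TCP there:

# The client reports its own version and the version of the daemon it talks to
$ sudo docker version

# The same client binary can be pointed at a daemon on another host
$ sudo docker -H tcp://remote-host:2375 info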

Inside Docker

To understand Docker's internals, you need to know about three components

  • Docker images
  • Docker registries
  • Docker containers
Docker images: a Docker image is a read-only template. For example, an image could contain an Ubuntu operating system with Apache and your web application installed. Images are used to create Docker containers. Docker provides a simple way to build new images or update existing images, or you can download Docker images that other people have already created. Docker images are the build component of Docker.

Docker registries: Docker registries hold images. These are public or private stores from which you upload or download images. The public Docker registry is called Docker Hub. It provides a huge collection of existing images for your use. These can be images you create yourself or you can use images that others have previously created. Docker registries are the distribution component of Docker.

Docker containers: Docker containers are similar to a directory. A Docker container holds everything that is needed for an application to run. Each container is created from a Docker image. Docker containers can be run, started, stopped, moved, and deleted. Each container is an isolated and secure application platform. Docker containers are the run component of Docker.
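
The run component has a simple lifecycle you can walk through from the client; a minimal sketch, assuming the centos image is on the host (the container name is a placeholder):

# Images available on this host (the templates containers are created from)
$ sudo docker images

# Create and start a container from an image, detached, and give it a name
$ sudo docker run -d --name demo centos sleep 3600

# List running containers, then stop and delete the one we started
$ sudo docker ps
$ sudo docker stop demo
$ sudo docker rm demo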

Putting it all to work

So far we have learned that

  • You can build Docker images that hold your applications
  • You can create Docker containers from those Docker images to run your applications
  • You can share those Docker images via Docker Hub or your own registry

Let's look at how these elements combine together to make Docker work.

Docker Image

  • Docker Images are built from base images
  • Images are extended using a simple and descriptive set of instructions
    • Run a command
    • Add a File or Directory
    • Create an Environment Variable
    • Specify what process to run when launching a container from this image

These instructions are stored in a file called a Dockerfile. Docker reads this Dockerfile when you request a build of an image, executes the instructions, and returns a final image.
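
As a concrete, purely illustrative sketch, the Dockerfile below uses one instruction of each kind listed above; the centos base, the httpd package, the file names, and the image tag are assumptions, and the ADD line expects an index.html in the build context:

$ cat > Dockerfile <<'EOF'
# Start from a base image
FROM centos
# Run a command inside the image while building it
RUN yum install -y httpd
# Add a file from the build context into the image
ADD index.html /var/www/html/index.html
# Create an environment variable
ENV APACHE_LOG_DIR /var/log/httpd
# What process to run when launching a container from this image
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF

# Ask Docker to read the Dockerfile, execute the instructions, and return an image
$ sudo docker build -t my-web-app .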

How does a Docker image work? We've already seen that Docker images are read-only templates from which Docker containers are launched. Each image consists of a series of layers. Docker makes use of union file systems to combine these layers into a single image. Union file systems allow files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.

One of the reasons Docker is so lightweight is because of these layers. When you change a Docker image (for example, update an application to a new version), a new layer gets built. Thus, rather than replacing the whole image or entirely rebuilding, as you may do with a virtual machine, only that layer is added or updated. Now you don't need to distribute a whole new image, just the update, making distributing Docker images faster and simpler.

Every image starts from a base image, for example rhel6, a base RHEL 6 image, or fedora, a base Fedora image. You can also use images of your own as the basis for a new image; for example, if you have a base Apache image you could use this as the base of all your web application images. These images are usually pulled or downloaded from Docker Hub.
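
One way to see those layers directly is the docker history command; a brief sketch, assuming the centos image is on the host and the my-web-app tag comes from the Dockerfile example above:

# List the layers a local image is built from, newest first
$ sudo docker history centos

# Rebuilding after a change reuses unchanged layers from the build cache
# and only adds or replaces the affected ones
$ sudo docker build -t my-web-app .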

Docker Registry

The Docker registry is the store for your Docker images. Once you build a Docker image you can push it to a public registry such as Docker Hub, or to your own registry running behind your firewall.

$ sudo docker push yourname/newimage

Using the Docker client, you can search for already published images

$ sudo docker search centos
NAME           DESCRIPTION                                     STARS     OFFICIAL   TRUSTED
centos         Official CentOS 6 Image as of 12 April 2014     88
tianon/centos  CentOS 5 and 6, created using rinse instea...   21
...

and then pull them down to your Docker host to build containers from them.

$ sudo docker pull centos
Pulling repository centos
0b443ba03958: Download complete
539c0211cd76: Download complete
511136ea3c5a: Download complete
7064731afe90: Download complete

Status: Downloaded newer image for centos

...and use them as the base (via FROM in a Dockerfile) for building new images of your own:

$ sudo docker build -t my_custom_image .

Kubernetes

Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops.

Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers.

Is Kubernetes, then, a Docker "orchestration" system?

Yes and no.

Declarative vs. Imperative

Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.

Kubernetes is primarily targeted at applications composed of multiple containers, such as elastic, distributed micro-services. It is also designed to facilitate migration of non-containerized application stacks to Kubernetes. It therefore includes abstractions for grouping containers in both loosely coupled and tightly coupled formations, and provides ways for containers to find and communicate with each other in relatively familiar ways.

Kubernetes enables users to ask a cluster to run a set of containers. The system automatically chooses hosts to run those containers on. While Kubernetes's scheduler is currently very simple, we expect it to grow in sophistication over time. Scheduling is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary.
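
A hypothetical sketch of what "declarative" means in practice: you submit the desired state (three replicas of a container) and a replication controller keeps reality converged to it. The names, labels, nginx image, and API version are illustrative and depend on your cluster:

$ cat > webapp-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  # Desired state: always keep three copies of this pod running
  replicas: 3
  selector:
    app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx
        ports:
        - containerPort: 80
EOF

$ kubectl create -f webapp-rc.yaml

If one of the three pods dies or its host fails, the controller notices the drift from the declared state and starts a replacement; that active reconciliation loop is the self-healing behaviour described above.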

Kubernetes Cluster

A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones.

Higher Level Layer? --> OpenShift

Architecture

A running Kubernetes cluster contains node agents (kubelet) and master components (API server, scheduler, etc.), on top of a distributed storage solution (etcd).

Pods

A pod (as in a pod of whales or pea pod) is a relatively tightly coupled group of containers that are scheduled onto the same host. It models an application-specific "virtual host" in a containerized environment. Pods serve as units of scheduling, deployment, and horizontal scaling/replication, share fate, and share some resources, such as storage volumes and IP addresses.
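
To make the shared resources concrete, here is a hypothetical two-container pod: both containers are scheduled onto the same host, share the pod's IP address, and exchange files through a shared volume. Names, images, and the API version are illustrative:

$ cat > web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  # A volume that both containers in the pod can mount
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  # Serves whatever the second container writes into the shared volume
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  # Writes content into the shared volume, then idles
  - name: content
    image: centos
    command: ["/bin/sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF

$ kubectl create -f web-pod.yaml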
