Red Hat Summit 2018: Day 1

Today’s culture shock was breakfast, which included cinnamon sugar donuts - not what I think of as a traditional breakfast food. It was not as good as the OpenShift breakfast, let me tell you. Also, the pre-keynote DJ was good, I guess, but my ears hurt after the first quarter hour or so.

Keynote

Jim Whitehurst

The theme is “ideas worth exploring”; no one person or company has all the answers.

Paul Cormier

Various history. Draws a parallel between the proprietary Unix era - which ultimately bottlenecked on everything being developed in silos - and the proprietary public cloud providers’ attempts to be all things to all people. Pitches RH tools as an abstraction across this problem. “Hybrid cloud is the only practical way forward.”

KVM is an example of where a core component (the KVM hypervisor) is wrapped up for multiple uses, like oVirt (traditional virtualisation) or OpenStack (cloud-style virtualisation).

Burr Sutter

Demo of pivoting a private OpenStack cloud into a hybrid model, on x86_64 and Power kit from IBM, HP, and Dell. Shows OpenStack Director managing bare metal OpenShift deployments. Director uses a rules-driven profile engine to tag whether machines are storage, compute, or whatever; those roles are user-configured.
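
As a toy illustration of the rules-driven idea - this is not Director’s actual rule syntax, and the hardware facts are made up - classify introspected machines into roles by matching predicates in order:

    # Toy illustration of rules-driven node profiling (not OpenStack
    # Director's actual rule format): tag introspected machines with a
    # role based on their hardware facts.
    RULES = [
        # (role, predicate over a node's introspection facts)
        ("storage", lambda n: n["disk_count"] >= 8),
        ("compute", lambda n: n["cpus"] >= 32 and n["memory_gb"] >= 128),
        ("control", lambda n: True),  # fallback profile
    ]

    def profile(node):
        """Return the first role whose predicate matches the node."""
        for role, predicate in RULES:
            if predicate(node):
                return role

    nodes = [
        {"name": "node-1", "cpus": 64, "memory_gb": 256, "disk_count": 2},
        {"name": "node-2", "cpus": 16, "memory_gb": 64, "disk_count": 12},
    ]
    for n in nodes:
        print(n["name"], "->", profile(n))  # node-1 -> compute, node-2 -> storage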

Amadeus (who serve 95% of the world’s scheduled network airline seats, with 1 trillion requests per second) talk about their application deployments. They use Terraform as an API abstraction over the virt layer, Ansible for their deployments, and Puppet for config management, with OpenShift and Couchbase, spread across US-East and US-West datacentres. It’s offered as-a-service to third parties. A second application uses most of the same components, but adds Kafka and public cloud components; it’s spread across Asia, Europe, and the US.

The demo is a fraud detection tool running on OpenShift, bursting out to the public cloud, with AMQ Interconnect bridging them together. AMQ can balance based on response times; the demo is rigged with a broken deployment on AWS as a segue into using Insights to troubleshoot the problem: analysing variances and creating playbooks to remedy them.
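
As a toy sketch of the balancing idea - nothing to do with AMQ Interconnect’s actual algorithm - keep a moving average of each backend’s response time and prefer the fastest:

    # Toy sketch of response-time-based balancing (not AMQ Interconnect's
    # actual algorithm): route to the backend with the lowest moving-average
    # latency, so a degraded cloud deployment stops attracting traffic.
    class Balancer:
        def __init__(self, backends, alpha=0.2):
            self.latency = {b: 0.0 for b in backends}  # moving averages
            self.alpha = alpha

        def pick(self):
            # Prefer the backend with the lowest observed latency so far.
            return min(self.latency, key=self.latency.get)

        def record(self, backend, seconds):
            old = self.latency[backend]
            self.latency[backend] = (1 - self.alpha) * old + self.alpha * seconds

    lb = Balancer(["on-prem", "aws"])
    lb.record("aws", 2.5)       # the rigged, broken AWS deployment is slow...
    lb.record("on-prem", 0.05)
    print(lb.pick())            # ...so traffic goes on-prem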

Citi have a talk about adopting a hybrid cloud model; they’re solidly multi-cloud, as well as maintaining on-prem. Choices are driven by data classification (confidential etc.), response sensitivity, variability, and compliance.

Kubernetes-managed VMs

Taking a mix of Windows and Linux VMs on VMware, migrating them to KVM, and managing them via k8s. Starts with CloudForms, which now has a built-in migration mechanism for mapping from one hypervisor to another. This covers storage and network mappings, as well.

The next step is to kick on and import a KVM (or VMware) VM into the kubernetes cluster; OpenShift will treat it like any other workload it manages, but wrapped in the container API. Services and routes can still be used to get to and from the VM.
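
The underlying mechanism looks like the upstream KubeVirt project, where a VM is just another custom resource. A minimal sketch with the Python kubernetes client, assuming a KubeVirt-style VirtualMachine CRD; the group/version and field names are illustrative (they have changed across releases), and the PVC holding the imported disk image is hypothetical:

    # Sketch: registering an imported KVM disk image with the cluster as a
    # KubeVirt-style VirtualMachine custom resource. Group/version and field
    # names are illustrative; they have changed across KubeVirt releases.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    vm = {
        "apiVersion": "kubevirt.io/v1alpha2",
        "kind": "VirtualMachine",
        "metadata": {"name": "migrated-vm"},
        "spec": {
            "running": True,
            "template": {"spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk",
                                           "volumeName": "rootvolume"}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [{
                    "name": "rootvolume",
                    # Hypothetical PVC holding the disk converted from VMware.
                    "persistentVolumeClaim": {"claimName": "migrated-vm-disk"},
                }],
            }},
        },
    }

    api.create_namespaced_custom_object(
        group="kubevirt.io", version="v1alpha2",
        namespace="default", plural="virtualmachines", body=vm,
    )

From there, a normal Service selecting the VM’s pod is what keeps routes working unchanged.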

CoreOS and Red Hat Panel

Reza Shafii, Joe Fernandes, Brandon Philips, Clayton Coleman

“Red Hat and CoreOS have known each other for a long time”, particularly in the kubernetes and container communities. The session is about how Tectonic and OpenShift will be combined. CoreOS will effectively be launching a reverse takeover of the OpenShift world; we get the option of running OpenShift on CoreOS instead of RHEL or RHEL Atomic.

Red Hat will be running Quay as a Red Hat product, both as-a-service at quay.io and standalone on-prem.

OpenShift & Tectonic + Operator Framework

Automated operations: self-service, automated backups, and automated upgrades for developers. They’re great, but they’re also very sticky, locking you into a single provider. Multicloud is great, but it’s also tricky to manage; kubernetes is the “ultimate abstraction” for multicloud, in the form of k8s Operators.

Operator Framework is an upstream project to make it trivial to develop services that run on k8s with automated operations, without having to be a k8s hacker. Once you’ve got the operator, your operations - backup, upgrade, and so on - are taken care of across the service’s lifecycle. Monitoring and metrics are pulled by Prometheus, which is also the basis for chargeback.
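
At its core an operator is a control loop: watch a custom resource, compare the desired state with the world, and reconcile. A minimal sketch of that pattern with the Python kubernetes client (the real Operator SDK is Go-based, and the backups.example.com CRD here is hypothetical):

    # Sketch of the operator pattern: watch a custom resource and reconcile
    # the world to match its spec. The real Operator SDK is Go-based; this
    # just shows the control loop. The backups.example.com CRD is hypothetical.
    from kubernetes import client, config, watch

    config.load_kube_config()
    api = client.CustomObjectsApi()

    for event in watch.Watch().stream(api.list_cluster_custom_object,
                                      group="example.com", version="v1alpha1",
                                      plural="backups"):
        obj, kind = event["object"], event["type"]
        name = obj["metadata"]["name"]
        if kind in ("ADDED", "MODIFIED"):
            # Reconcile: compare obj["spec"] (desired state) with live state,
            # then create/patch resources (CronJobs, PVCs, ...) to match.
            print(f"reconcile backup policy {name}: {obj.get('spec')}")
        elif kind == "DELETED":
            # Tear down whatever the operator created for this resource.
            print(f"clean up after {name}")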

Now operators are being enabled on OpenShift, via an operators console. The traditional OpenShift console is oriented to the user; the operators console is aimed at the cluster manager. It provides both a perspective for the (human) operators who manage the cluster and views of the k8s operators themselves.

The k8s operators are managed via the Operator Lifecycle Manager. You can install and manage the operators, either your own or third-party ones. Red Hat have been partnering with ISVs to get them on board (there are 60+ ISVs, including e.g. Couchbase, Redis, and NetApp).
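
For flavour, here is roughly what subscribing to a partner operator through the lifecycle manager looks like, against the API as it later stabilized upstream; this was all pre-GA at the time, so the group/version, catalog, and package names are illustrative:

    # Sketch: installing a partner operator by creating a Subscription for
    # the Operator Lifecycle Manager. The group/version reflect OLM as it
    # later stabilized upstream; package, channel, and catalog names are
    # illustrative, since this was all pre-GA at the time.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    subscription = {
        "apiVersion": "operators.coreos.com/v1alpha1",
        "kind": "Subscription",
        "metadata": {"name": "couchbase", "namespace": "operators"},
        "spec": {
            "name": "couchbase-enterprise",   # package in the catalog
            "channel": "stable",              # update channel to track
            "source": "certified-operators",  # catalog source to pull from
            "sourceNamespace": "openshift-marketplace",
        },
    }

    api.create_namespaced_custom_object(
        group="operators.coreos.com", version="v1alpha1",
        namespace="operators", plural="subscriptions", body=subscription,
    )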

A key differentiator here is that the k8s operators allow you to add new as-a-service type services to your cloud, which is something that you can’t do in the public cloud at the moment; you get what the provider supplies, and if they don’t have a thing you’re out of luck. The Operator Framework is for all k8s installs.

“The Operator Framework is like Ruby on Rails for ops.”

This is all real - the demo will be shown in one of the keynotes - but it’s not GA yet.

The framework also provides the chargeback functionality, mapping the container runtimes onto costs.

There’s Prometheus integration in the console, too.
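
That Prometheus data is the raw material for chargeback. A sketch of the sort of query it can be built on, using Prometheus’ standard HTTP API (the server URL and the hourly rate are placeholders):

    # Sketch: the kind of Prometheus query chargeback can be built on,
    # here per-namespace CPU consumption via Prometheus' standard HTTP
    # API. The server URL and hourly rate are placeholders.
    import requests

    PROM = "http://prometheus.example.com"
    RATE_PER_CORE_HOUR = 0.05  # made-up internal price

    query = "sum by (namespace) (rate(container_cpu_usage_seconds_total[1h]))"
    resp = requests.get(f"{PROM}/api/v1/query", params={"query": query})
    resp.raise_for_status()

    for series in resp.json()["data"]["result"]:
        ns = series["metric"].get("namespace", "<none>")
        cores = float(series["value"][1])  # average cores over the hour
        print(f"{ns}: {cores:.2f} cores -> ${cores * RATE_PER_CORE_HOUR:.2f}/hour")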

OpenShift on CoreOS

  • OpenShift on RHEL remains for people who need to customise the base, who need broad hardware compatibility.
  • OpenShift on CoreOS gives fully immutable deployments for OpenShift, ease of day 2 ops, consistent management from the OS up to the applications.
  • Red Hat CoreOS is essentially going to be derived from a blend of RHEL Atomic and Container Linux.

Quay

  • Quay does vuln scanning, geographic replication, build image triggers, and image rollback/history.
  • OpenShift will continue to ship with a basic registry.
  • Other registries will continue to be supported.
  • Quay will be open sourced.

Application Portability Across Clouds with Kubernetes

Ivan Font and Lindsey Tulloch

Problem statement: “My application is running on Azure and I want to move it to e.g. AWS” - without losing data or taking a hit to uptime. They built a Pacman game with persistent storage of scores in MongoDB; the game was written in Node.js with Mongo, all containerised.

The PoC used:

  • Kubernetes clusters in GCP and Azure with the game deployed into them. AWS was present, but with no resources running.
  • Managed via Federation-v2, which provides multi-cluster resource management.
    • You can set e.g. global defaults and cluster-specific overrides.
  • DNS load balancing.

The migration is a three-step process:

  • Use kubectl to move the resources to the new configuration; you specify the list of clusters where you want the pacman namespace to exist (see the sketch after this list).
  • Use the load balancer - DNS in this case - to migrate to the new set of clusters.
  • Then once the cutover happens, spool down the old cluster by updating the list.
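
A sketch of that cluster-move step with the Python kubernetes client rather than kubectl; the placement type, group/version, and clusterNames field follow my reading of the pre-alpha Federation-v2 API of the time, and all of them changed in later releases:

    # Sketch: the cluster-move step, expressed with the Python kubernetes
    # client rather than kubectl. The FederatedNamespacePlacement type,
    # group/version, and clusterNames field follow the pre-alpha
    # Federation-v2 API of the time; all of them changed in later releases.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    # Ask for the pacman namespace on GCP and AWS instead of GCP and Azure;
    # federation reconciles the underlying resources in each cluster.
    api.patch_namespaced_custom_object(
        group="federation.k8s.io", version="v1alpha1",
        namespace="pacman", plural="federatednamespaceplacements",
        name="pacman",
        body={"spec": {"clusterNames": ["gcp", "aws"]}},
    )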

I got the high score. So there’s that. Oh, and it worked. The code is currently pre-alpha, so moderate expectations accordingly.

Ecosystem in a Hybrid World

Matt Hicks

Riffing on the keynote position that no one entity can solve all the problems. Introduces IBM’s VP of hybrid cloud. “WebSphere, MQ, DB2” support for being containerised on OpenShift (I was asking IBM about this a few years ago and got nowhere. I’m glad they’re catching up, but personally, it’s too little, too late). “All the middleware is going to be certified on OpenShift with all the automation in there.”

IBM products will be able to take advantage of platform capabilities such as autoscaling. It seems to use IBM Cloud Private to inject the products into OpenShift. The demo shows MQ using secrets. Cloud Private is needed for log aggregation and monitoring - it doesn’t use the native facilities.

Next up are Nike. They’ve been going hybrid for 18 months - one of the first things they learned was that having a consistent platform across the cloud and on-prem makes operations a lot easier.

Introducing “Trusted Operations”: a certification program for the k8s operator images from vendors. Couchbase, Dynatrace, Black Duck, and Crunchy are already there. Operators are exposed through OpenShift now; you can filter for them in OCP. Operators can interact with events native to the application - the team demoed the operator noticing that a Couchbase node wasn’t responding and spinning up a replacement.

Red Hat are also extending the OpenShift console to handle maintenance, e.g. draining nodes for maintenance work.

How cloud, API, and automation are changing financial services in a digital world

Simon Cashmore (Barclays), Jamil Mina (BMO Financial Group), Amine Boudali (Nordea), Jose Quarisma (Accenture), Gab Columbro (FINOS)

Barclays: split into local and international businesses, with a support organisation servicing them. About a third of all UK transactions go through them; a lot of tech and process debt. Simon describes the PaaS platform he outlined at the OpenShift Commons session, emphasising that the service and support model has had as much design work as the technical component. They started on OpenShift v2 in 2014, and learned a lot that fed into their v3 platform.

One challenge is that these sorts of platforms have a shorter lifecycle than the bank’s financial model can normally accommodate. Preparing a service offering, and helping people understand its terms and conditions, have been a key part of the picture. The OpenShift team is fully self-contained, so the users/projects don’t have to organise anything. The T&Cs make the RACIs clear - an important stake in the ground is that the platform team patch and knock things out regularly. They’ve resisted pressure to change things for self-described “very important customers”, which is important if you want to run a platform.

People who want their own images have to own the whole lifecycle, including governance and security posture; the platform team only maintain the shared images that everyone can use.
