
docker

Metahub: Dynamic Registry Proxy

I won't say "Long time, no post" - but...

As I had some time on my hands the last couple of months, I was iterating on my idea of hardware optimization using manifest lists from the last post Match Node-Specific Needs Using Manifest Lists.

ReCap

The gist is that hardware optimization with containers is a bit of a step back, as kernel virtualization (aka containers) promises to provide isolation on top of a Linux (or Windows) kernel without caring too much about the underlying host configuration.

Optimized Container Images for AI/ML and HPC

Containers are gaining more and more of a foothold as a lightweight way of isolating different applications, relying on kernel features rather than spinning up emulated hardware and creating (rather) heavy virtual machines. That worked great so far, as resource isolation focused only on what the kernel can provide:

  • CPU cycles
  • Memory
  • Input/output to resources controlled by the kernel (e.g. network and filesystems)

Docker Datacenter in a Box

I've been working for Docker for a month now and it is already a fun ride. I joined just before the DockerCon EU announcement two weeks back that Docker Enterprise Edition, as well as the Docker Community Editions for desktops (Docker4Mac/Docker4Win), will support Kubernetes in the future.

Doxy: A Docker Socket Proxy

Talking to security engineers, I was asked how to secure the Docker socket so that applications like metrics collectors are only able to access a subset of API endpoints.

I first looked into the authorisation plugins already out there, but as far as I understood them, they only work on TCP sockets and rely on an SSL certificate to provide information about who is accessing them. Recently I tried to create a plugin using the newest plugin system, but that failed to some extent: the plugin system is currently in a transition to be used within the plugin framework rather than being started directly at startup.

To circumvent this and get something to work with, I created a little Golang tool that sets up an httputil.ReverseProxy: it provides a proxy socket, checks each request against a set of regular expressions, and forwards granted requests to the Docker socket on behalf of the user.
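The core idea fits in a few lines of Go. Here is a rough sketch of the approach (not doxy's actual code; the allow-list patterns and socket paths are made up for illustration):

```go
package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"regexp"
)

// Hypothetical allow-list - doxy's real patterns and configuration differ.
var allowed = []*regexp.Regexp{
	regexp.MustCompile(`^/v[0-9.]+/containers/json$`),
	regexp.MustCompile(`^/v[0-9.]+/info$`),
}

func main() {
	// Reverse proxy whose transport dials the Docker unix socket.
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			req.URL.Scheme = "http"
			req.URL.Host = "docker" // placeholder; the transport ignores it
		},
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Only forward requests whose path matches an allowed pattern.
		for _, re := range allowed {
			if re.MatchString(r.URL.Path) {
				proxy.ServeHTTP(w, r)
				return
			}
		}
		http.Error(w, "endpoint not allowed", http.StatusForbidden)
	})

	// Expose a proxy socket for less-privileged clients instead of the real one.
	ln, err := net.Listen("unix", "/tmp/doxy.sock")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(ln, handler))
}
```

A metrics collector would then be pointed at the proxy socket and never see the full Docker API.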

Meet doxy:

Byfahrer: Terminate SSL for Docker SWARM

I like the idea and prospect of having only the plain Docker stack running, as it provides a nice experience from development to operations (I am talking about you: DevOps!). I can start with a single container, create a set of (unreplicated) services, and try to make it work in a distributed setup - all on my little laptop - and stay confident that it will work on a cluster as well.

M.E.L.I.G.: Log/Event/Metric Collection within Containers

Yesterday's MeetUp (ok, a late post - at last) was first and foremost about the Container Manifesto, which aims to foster understanding of how to build and run a container.

Afterwards we figured that I had missed 'Containers should start fast' (thx Lukasz) as an additional point - next time. :)

For today I will just put the video in here; a separate blog post might follow - even though I feel it is not that necessary, as no code was executed.

Docker 1.13 Prometheus end-point and qcollect

Docker 1.13 is on its way and I like what comes to light.

The highlights from where I stand are:

  • service port publishing now has a mode, host or ingress, which allows service ports to bypass the IPVS load-balancer and just be exposed on the SWARM node.
  • the load-balancer seems to honour established connections
  • the experimental build has an end-point /metrics, which exposes Prometheus-formatted metrics.

And this last bit got me interested - so much so that I hacked a Prometheus collector into qcollect. :)
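If you want to poke at the end-point yourself, something like the following works - assuming the daemon was started with experimental features enabled and a metrics address of 127.0.0.1:9323 (adjust to whatever you configured):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes "experimental": true and "metrics-addr": "127.0.0.1:9323"
	// in the daemon configuration - adjust to your setup.
	resp, err := http.Get("http://127.0.0.1:9323/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Plain Prometheus text exposition format, ready to be scraped or parsed.
	fmt.Print(string(body))
}
```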

Hello World of qcollect

A while back I stumbled upon Fullerite, a Golang metrics collector, which can reuse the collectors of the Python Diamond collector.

One of the issues I had was that it does not use the event time but the processing time of collected metrics. Thus, if you bulk-update collected metrics, they will all carry the same timestamp: the time they are pushed to the metrics backend.
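To illustrate the difference (a toy sketch, not Fullerite's or qcollect's actual data model): keep the event time on each sample instead of stamping everything when the batch is flushed.

```go
package main

import (
	"fmt"
	"time"
)

// Sample carries its own event time, taken when the value was collected.
type Sample struct {
	Name  string
	Value float64
	Taken time.Time
}

func flush(batch []Sample) {
	flushedAt := time.Now()
	for _, s := range batch {
		// Using s.Taken preserves the event time; using flushedAt for every
		// sample collapses the whole batch onto one timestamp - the behaviour
		// described above.
		fmt.Printf("%s %v taken=%d flushed=%d\n",
			s.Name, s.Value, s.Taken.Unix(), flushedAt.Unix())
	}
}

func main() {
	batch := []Sample{
		{Name: "cpu.idle", Value: 97.2, Taken: time.Now().Add(-30 * time.Second)},
	}
	flush(batch)
}
```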

Consul as a (Docker) Service

After a couple of months of being busy, it's time for a blog post about Docker Services.

As I have stated often, I became a big fan of Consul for service orchestration, service discovery, and as a K/V store in my Docker stacks.

Since Docker Engine 1.11, the DNS feature needed to use a 127.0.0.1 address was somewhat dropped, so I had a hard nut to crack. My workaround was to not care about local resolution and use the Consul servers as the DNS resource. Anyway...
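For reference, querying the Consul servers directly from Go looks roughly like this (a sketch assuming an agent or server answering DNS on 127.0.0.1:8600, Consul's default DNS port):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver that bypasses /etc/resolv.conf and asks Consul directly.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "127.0.0.1:8600")
		},
	}
	// Look up a service registered in Consul via its DNS interface.
	addrs, err := r.LookupHost(context.Background(), "consul.service.consul")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs)
}
```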

ISC2016 Workshop: Linux Containers to Optimise IT Infrastructure for HPC & BigData

This year's 'Linux Container' workshop at the ISC 2016 is called: Docker: Linux Containers to Optimise IT Infrastructure for HPC & BigData.

It was held after the International Supercomputing Conference in Frankfurt on June 23rd at the Marriott hotel.

Unlike last year, the focus was on providing actionable knowledge about the world of Linux containers and discussing problems and possible solutions.

Consul Ambassador Pattern

Did I mention I love Open-Source software? I do - I really do! :) Using a pull request on Consul, I get closer to a nice setup: GitHub issue

So what am I talking about here? As you readers should already know, I use Consul as the backend for my service/node discovery stuff and user-related topics.

FOSDEM #2 - IPoIB

This blog post goes through the different methods of connecting an HPC cluster in a box (within Docker containers) so that the network performance is worth the effort and all containers are addressable. It's going to talk about VXLAN, MACVLAN and some pipework to glue them together.

Hello Sensu

Even though I like Consul a lot (it is the foundation of my stacks in terms of service/node discovery), it's most likely not a replacement for a monitoring framework with notification handlers, distributed checks and a nice dashboard.

I assume that most readers have used NAGIOS at some point and have developed a love-hate relationship with it. It works, but only kinda... :)

Simple CEPH container with CEPH-fuse clients

Since I would like to play around with Docker volumes one day, I have to get used to CEPH somehow. :)

I started by creating a single container that hosts all the ceph daemons needed and pushes the necessary information to Consul's key/value store: qnib/ceph-mono.
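Pushing into the K/V store boils down to an HTTP PUT against the Consul agent; here is a rough sketch (hypothetical key and value, assuming an agent on 127.0.0.1:8500, not the container's actual code):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical key/value pair written to Consul's K/V store.
	key := "ceph/mon/initial-members"
	value := []byte("ceph-mono")

	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:8500/v1/kv/"+key, bytes.NewReader(value))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // Consul replies "true" in the body on success
}
```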

A little stack to demonstrate it can be found - as usual - in my stack repository:

SLURM cluster with auto generated Dashboards

As promised in my last post, here's a blog post about the QNIBTerminal-powered SLURM stack with auto-generated dashboards. I started writing it two weeks ago - embarrassing - sorry for the delay. As a reminder I'll keep the date.

The stack looks like this (see the stack_overview diagram):

For those following my blog most of the stack should look familiar.

Parse your apache2 logs with qnib/elk

If you are looking for an excuse to use Logstash, your local web server is low-hanging fruit.

When someone accesses your website, your web server stores some details about the visit:

10.10.0.1 - - [29/Oct/2014:18:42:18 +0100] "GET / HTTP/1.1" 200 2740 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 8_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B411 Safari/600.1.4"
10.10.0.1 - - [29/Oct/2014:18:42:19 +0100] "GET /css/main.css HTTP/1.1" 200 2805 "http://qnib.org/" "Mozilla/5.0 (iPhone; CPU iPhone OS 8_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B411 Safari/600.1.4"
10.10.0.1 - - [29/Oct/2014:18:42:19 +0100] "GET /pics/second_strike_trans.png HTTP/1.1" 200 29636 "http://qnib.org/" "Mozilla/5.0 (iPhone; CPU iPhone OS 8_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B411 Safari/600.1.4"
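To get a feeling for what Logstash will extract, here is a small Go sketch that pulls the same fields out of the first line above with a regular expression - roughly what the COMBINEDAPACHELOG grok pattern does:

```go
package main

import (
	"fmt"
	"regexp"
)

// Rough equivalent of the fields the COMBINEDAPACHELOG grok pattern extracts.
var combined = regexp.MustCompile(
	`^(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) (\S+)" (\d{3}) (\d+|-) "([^"]*)" "([^"]*)"$`)

func main() {
	line := `10.10.0.1 - - [29/Oct/2014:18:42:18 +0100] "GET / HTTP/1.1" 200 2740 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 8_1 like Mac OS X)"`

	m := combined.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fields := []string{"clientip", "ident", "auth", "timestamp", "verb",
		"request", "httpversion", "response", "bytes", "referrer", "agent"}
	for i, name := range fields {
		fmt.Printf("%-12s %s\n", name, m[i+1])
	}
}
```

In the real stack, Logstash does this parsing and ships the structured events to Elasticsearch, where Kibana can slice them by client IP, response code, user agent, and so on.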