OCI containers and Python

I’ve recently pushed Python bindings to the crun git repo. Handling JSON data is extremely easy in Python, which makes it a great candidate for quickly customizing a container configuration and launching it. Keep in mind that crun is only an OCI runtime for running a container; it doesn’t take care of pulling and storing images. You’ll need a different tool for that.
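
For example, a root filesystem could be fetched and unpacked with tools such as skopeo and umoci (just one possible combination, sketched here; neither tool is part of crun, and the image name and target directory are only illustrative):


# fetch an image into a local OCI layout, then unpack its root filesystem
skopeo copy docker://docker.io/library/busybox:latest oci:busybox-oci:latest
umoci unpack --image busybox-oci:latest example-container
# example-container/rootfs is the directory the OCI configuration can point to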

For more details, this is the commit that adds the Python bindings: https://github.com/giuseppe/crun/commit/a2b5faf88a4abde241ad813afdf6688e38fb7ca2

By default the Python bindings are not built; you’ll need to pass --with-python-bindings to the configure script to build them.
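
From a crun source checkout the build might look like this (a sketch of the usual autotools flow; exact steps may differ):


# configure crun with the Python bindings enabled, then build
./autogen.sh
./configure --with-python-bindings
make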

Once you have the Python bindings, a simple script can be used to customize and launch a container:


import python_crun
import json

# The spec() function generates a default configuration for
# running an OCI container; parse the JSON directly.
spec = json.loads(python_crun.spec())

# Let's change the rootfs of the container to point to the correct
# path.  This is where the image is exploded on disk.
spec['root']['path'] = '/home/gscrivano/example-container/rootfs'

# We don't want the container to do much, just greet us
spec['process']['args'] = ['/bin/echo', 'hello from a container']

# From the customized configuration, we create the OCI container.
ctr = python_crun.load_from_memory(json.dumps(spec))

# The context specifies what ID the container will have, and optionally
# allows other settings to be tweaked, such as the state root or using
# systemd for the cgroups.
ctx = python_crun.make_context("test-container")

# We don't want to print any warning.
python_crun.set_verbosity(python_crun.VERBOSITY_ERROR)

# And finally run the container
python_crun.run(ctx, ctr)

Store it in a launch_container.py file.

Now, from the crun build root directory:


# PYTHONPATH=.libs python ./launch_container.py
hello from a container

C is a better fit for tools like an OCI runtime

I’ve spent some of the last few weeks working on a replacement for runC, the best known and most widely used OCI runtime. It might not be very visible, but it is a key component for running containers: every Docker container ultimately runs through runC.

Having containers run through a common spec allows individual pieces to be replaced without any difference in behavior.

The OCI runtime spec describes what a container looks like once it is running: for instance, it lists all the mount points, the capabilities left to the process, the process that must be executed, the namespaces to create, and so on.
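
To see those fields concretely, you can generate a default configuration with runc and inspect it with jq (shown only as an illustration; it assumes runc and jq are installed):


# runc spec writes a default config.json in the current directory
runc spec
# peek at a few of the fields mentioned above
jq '{args: .process.args, caps: .process.capabilities.bounding, mounts: [.mounts[].destination], namespaces: [.linux.namespaces[].type]}' config.json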

While the rest of the container ecosystem, from Docker to Kubernetes, is written in Go, I think that for such a low-level tool C still makes more sense. runC itself resorts to C for its lower-level tasks: it forks itself once the configuration is done and sets up the environment in C before launching the container process.

I’ve tried sequentially running 100 containers that only execute /bin/true, and the results are quite good:

| Test                                  | crun      | runC      | time saved vs runC |
| 100 /bin/true (no network namespace)  | 0m4.449s  | 0m7.514s  | 40.7%              |
| 100 /bin/true (new network namespace) | 0m15.850s | 0m18.986s | 16.5%              |

Most of the time for running a container seems to be spent creating the network namespace. I had expected some cost from the Go->C process handling, but I am surprised by the results when the network namespace is not used: crun is almost twice as fast as runC.
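
For reference, the comparison was along the lines of the following loop; this is a sketch rather than the exact benchmark, and it assumes an OCI bundle whose process is /bin/true has already been prepared in $BUNDLE:


# time 100 sequential runs with each runtime; the container names are arbitrary
time for i in $(seq 100); do crun run --bundle "$BUNDLE" "crun-test-$i" > /dev/null; done
time for i in $(seq 100); do runc run --bundle "$BUNDLE" "runc-test-$i" > /dev/null; done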

For parsing the OCI spec file, crun uses https://github.com/giuseppe/libocispec.

crun is still experimental and some features are missing, but if you are interested you can take a look here: https://github.com/giuseppe/crun/ and open a PR if you have any improvements.

OpenShift on system containers

It is still a work in progress and not ready for production, but the upstream version of OpenShift Origin already has experimental support for running OpenShift Origin using system containers. The “latest” Docker images for origin, node and openvswitch, the three components we need, are automatically pushed to docker.io, so we can use these for our test. The rhel7/etcd system container image is instead pulled from the Red Hat registry.

This demo is based on these blog posts, www.projectatomic.io/blog/2016/12/part1-install-origin-on-f25-atomic-host/ and www.projectatomic.io/blog/2016/12/part2-install-origin-on-f25-atomic-host/, with some differences in how the VMs are provisioned and, obviously, running system containers instead of Docker containers.

The files used for provisioning and configuration can also be found here: https://github.com/giuseppe/atomic-openshift-system-containers, if you find that easier than copying/pasting from a web browser.

To give it a try, we need a recent version of openshift-ansible for the installation; let’s pin to a known commit that worked for me.


$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible
$ git checkout a395b2b4d6cfd65e1a2fb45a75d72a0c1d9c65bc

To provision the VMs for the OpenShift cluster, I’ve used this simple Vagrantfile:


BOX_IMAGE = "fedora/25-atomic-host"
NODE_COUNT = 2

# Workaround for https://github.com/openshift/openshift-ansible/pull/3413 (which is not yet merged while writing this)
SCRIPT = "sed -i -e 's|^Defaults.*secure_path.*$|Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin|g' /etc/sudoers"

Vagrant.configure("2") do |config|
  config.vm.define "master" do |subconfig|
    subconfig.vm.hostname = "master"
    subconfig.vm.network :private_network, ip: "10.0.0.10"
  end
  
  (1..NODE_COUNT).each do |i|
    config.vm.define "node#{i}" do |subconfig|
      subconfig.vm.hostname = "node#{i}"
      subconfig.vm.network :private_network, ip: "10.0.0.#{10 + i}"
    end
  end

  config.vm.synced_folder "/tmp", "/vagrant", disabled: 'true'
  config.vm.provision :shell, :inline  => SCRIPT
  config.vm.box = BOX_IMAGE

  config.vm.provider "libvirt" do |v|
    v.memory = 1024
    v.cpus = 2
  end

  config.vm.provision "shell" do |s|
    ssh_pub_key = File.readlines(ENV['HOME'] + "/.ssh/id_rsa.pub").first.strip
    s.inline = <<-SHELL
      mkdir -p /home/vagrant/.ssh /root/.ssh
      echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
      echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
      lvextend -L10G /dev/atomicos/root
      xfs_growfs -d /dev/mapper/atomicos-root
    SHELL
  end
end

The Vagrantfile will provision three virtual machines based on the `fedora/25-atomic-host` image. One machine will be used for the master node, the other two will be used as nodes. I am using static IPs for them so that it is easier to refer to them from the Ansible playbook and to avoid requiring any DNS configuration.

The machines can finally be provisioned with vagrant as:


# vagrant up --provider libvirt

At this point you should be able to log in to the VMs as root using your ssh key:


for host in 10.0.0.{10,11,12};
do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@$host "echo yes I could login on $host"
done


yes I could login on 10.0.0.10
yes I could login on 10.0.0.11
yes I could login on 10.0.0.12

Our VMs are ready. Let’s install OpenShift!

This is the inventory file used for openshift-ansible; store it in a file named origin.inventory:


# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=root
ansible_become=yes
ansible_ssh_user=vagrant
containerized=true
openshift_image_tag=latest
openshift_release=latest
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_install_examples=False

deployment_type=origin

###########################################################
#######SYSTEM CONTAINERS###################################
###########################################################
system_images_registry=docker.io
use_system_containers=True
###########################################################
###########################################################
###########################################################

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

# host group for masters
[masters]
10.0.0.10 openshift_hostname=10.0.0.10

# host group for etcd, should run on a node that is not schedulable
[etcd]
10.0.0.10 openshift_ip=10.0.0.10

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
10.0.0.11 openshift_hostname=10.0.0.11 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'router':'true'}"
10.0.0.12 openshift_hostname=10.0.0.12 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'registry':'true'}"

The new configuration required to run system containers is quite visible in the inventory file. `use_system_containers=True` is required to tell the installer to use system containers, while `system_images_registry` specifies the registry from which the system containers must be pulled.

And we can finally run the installer, using python3, from the directory where we cloned openshift-ansible:


$ ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' -v -i origin.inventory ./playbooks/byo/config.yml

After some time, if everything went well, OpenShift should be installed.

To copy the oc client to the local machine I’ve used this command from the directory with the Vagrantfile:


$ scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .vagrant/machines/master/libvirt/private_key root@10.0.0.10:/usr/local/bin/oc /usr/local/bin/

As a non-root user, let’s log in to the cluster:


$ oc login --insecure-skip-tls-verify=false 10.0.0.10:8443  -u user -p OriginUser



Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>




$ oc new-project test


Now using project "test" on server "https://10.0.0.10:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.




$ oc new-app https://github.com/giuseppe/hello-openshift-plus.git



--> Found Docker image 1f8ec11 (6 days old) from Docker Hub for "fedora"

    * An image stream will be created as "fedora:latest" that will track the source image
    * A Docker build using source code from https://github.com/giuseppe/hello-openshift-plus.git will be created
      * The resulting image will be pushed to image stream "hello-openshift-plus:latest"
      * Every time "fedora:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "hello-openshift-plus"
    * Ports 8080, 8888 will be load balanced by service "hello-openshift-plus"
      * Other containers can access this service through the hostname "hello-openshift-plus"
    * WARNING: Image "fedora" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "fedora" created
    imagestream "hello-openshift-plus" created
    buildconfig "hello-openshift-plus" created
    deploymentconfig "hello-openshift-plus" created
    service "hello-openshift-plus" created
--> Success
    Build scheduled, use 'oc logs -f bc/hello-openshift-plus' to track its progress.
    Run 'oc status' to view your app.

After some time, we can see our service running:


oc get service



NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
hello-openshift-plus   172.30.204.140   <none>        8080/TCP,8888/TCP   46m

Are we really running on system containers? Let’s check it out on the master and one node:

(The atomic command upstream has a breaking change, so with future versions of atomic we will need -f backend=ostree to filter system containers, since ostree is clearly not a runtime.)


for host in 10.0.0.{10,11};
do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@$host "atomic containers list --no-trunc -f runtime=ostree"
done




   CONTAINER ID  IMAGE                                     COMMAND                                    CREATED          STATE     RUNTIME   
   etcd          192.168.1.13:5000/rhel7/etcd              /usr/bin/etcd-env.sh /usr/bin/etcd         2017-02-23 11:01 running   ostree    
   origin-master 192.168.1.13:5000/openshift/origin:latest /usr/local/bin/system-container-wrapper.sh 2017-02-23 11:10 running   ostree    
   CONTAINER ID IMAGE                                          COMMAND                                    CREATED          STATE     RUNTIME   
   origin-node 192.168.1.13:5000/openshift/node:latest        /usr/local/bin/system-container-wrapper.sh 2017-02-23 11:17 running   ostree    
   openvswitch 192.168.1.13:5000/openshift/openvswitch:latest /usr/local/bin/system-container-wrapper.sh 2017-02-23 11:18 running   ostree

And to finally destroy the cluster:


vagrant destroy

use bubblewrap as an unprivileged user to run systemd images

bubblewrap is a sandboxing tool that allows unprivileged users to run containers. I was recently working on a way to let unprivileged users take advantage of bubblewrap to run regular system images that use systemd. To do so, it was necessary to modify bubblewrap to keep some capabilities in the sandbox.

Capabilities are the mechanism, available since Linux 2.2, that the kernel uses to split root’s power into a finer-grained set of permissions that each thread can have. Together with Linux namespaces, it is safe to leave some of them to unprivileged users. To give an example, CAP_SETUID, which allows the calling process to manipulate process UIDs, is safe to use in a new user namespace, as the set of UIDs it applies to is restricted to those that exist in the new user namespace.
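
A quick way to see this in practice is to create a new user namespace as an unprivileged user and look at the capabilities held inside it (an illustration only; unshare is from util-linux and capsh from libcap):


# inside a new user namespace the unprivileged caller is mapped to root and
# holds a full capability set, but only over what the namespace owns
unshare --user --map-root-user sh -c 'capsh --print | grep Current'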

The changes required in bubblewrap are not yet merged upstream. In the rest of the post I will refer to the modified bubblewrap simply as bubblewrap.

The patches for bubblewrap are available here: https://github.com/giuseppe/bubblewrap/compare/privileged-systemd; this is the version used for the test. There is already a pull request to get these changes merged.
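
Building the patched bubblewrap is the usual autotools flow (a sketch, assuming the privileged-systemd branch from the link above):


# clone the patched branch and build it
git clone -b privileged-systemd https://github.com/giuseppe/bubblewrap.git
cd bubblewrap
./autogen.sh
make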

The set of capabilities that bubblewrap leaves in the process is regulated with --cap-add; new namespaces are required to use these caps. The special value ALL adds all the caps that bubblewrap allows.

A development version of systemd is required to run in the modified bubblewrap. There are patches in systemd upstream that allow systemd to run without requiring CAP_AUDIT_* and to not fail when setgroups is disabled, as is the case when running inside bubblewrap (to address CVE-2014-8989). The setgroups restriction may be lifted in some cases in the future; this is still under discussion.

For my tests, I’ve used Docker to compose the container. The following Dockerfile has no metadata directives, as they are not used when exporting the rootfs anyway.


FROM fedora
RUN dnf -y install httpd; dnf clean all; systemctl enable httpd.service

To compose the container and export its content to a directory rootfs, you can do as root:


docker build -t httpd .
docker create --name=httpd httpd
mkdir rootfs
cd rootfs
docker export httpd | tar xf -
rootfs=$(pwd)

To install the latest systemd, once you’ve cloned its repository, from the source directory you can simply do:


./autogen.sh
make -j $(nproc)
make install DESTDIR=$rootfs

to install it in the container rootfs.

If the files /etc/subuid and /etc/subgid are present, the first interval of additional UIDs and GIDs for the unprivileged user invoking bubblewrap is used to set the additional users and groups available in the container. This is required for the system users needed for systemd.
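
For reference, the entries have this form (the user name and ranges below are purely illustrative):


$ grep gscrivano /etc/subuid /etc/subgid
/etc/subuid:gscrivano:100000:65536
/etc/subgid:gscrivano:100000:65536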

At this point, everything is in place and we can use bubblewrap to run the new container as an unprivileged user:


bwrap --uid 0 --gid 0 --bind rootfs / --sys /sys  --proc /proc --dev /dev --ro-bind /sys/fs/cgroup /sys/fs/cgroup --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd --ro-bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset --ro-bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb --ro-bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices --ro-bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct --ro-bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer --ro-bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids --ro-bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio --ro-bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio --ro-bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event --ro-bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd  --tmpfs /dev/shm --mqueue /dev/mqueue --dev-bind /dev/tty /dev/tty --chdir / --setenv PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --setenv TERM xterm --setenv container docker  --tmpfs /var --tmpfs /run --tmpfs /tmp --tmpfs /var/www/html --tmpfs /var/log/httpd --bind rootfs/etc /etc  --hostname systemd --unshare-pid --unshare-net --unshare-ipc --unshare-user --unshare-uts --remount-ro / --cap-add ALL --no-reaper /usr/lib/systemd/systemd --system

systemd uses the signal SIGRTMIN+3 to terminate its execution; to kill the bubblewrap container, you can use kill -37 $PID, where $PID is the PID of the systemd process in the container.

Brainfuc**d brainf**k

Every programmer at some point comes across the Brainfuck programming language and is surprised by how few instructions are needed for a Turing-complete language: 6 in the case of Brainfuck (plus another 2 for I/O operations).

I have recently found an old project of mine that I had used to learn how to write a GCC frontend; it took a while to adapt it to work with a newer GCC version. The code is available on GitHub. The only positive side of this project, if any, is that it can easily be used as a starting point for adding a frontend to GCC, or, in this case, for compiling a Brainfuck interpreter written in Brainfuck!

I don’t remember how I got to this code, except that I helped myself with some C preprocessor macros, and I remember one important detail of the spec: the input is NUL terminated. Looking at the code is not very helpful; this is probably one of those cases where the compiled version is more understandable than the code itself.

This is the interpreter in brainfuck-interpreter.bf


>>>>>+>>,[>>>+>>,]>>>>>>>>+>>+<<<<<<<<<<<<[<<<<<]>>>>>[>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[[>>>>>]>>>>>>>[>>>>>]<<<.<<<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[[>>>>>]>>>>>>>[>>>>>]<<<,<<<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[[>>>>>]>>>>>>>[>>>>>]<<<+<<<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[[>>>>>]>>>>>>>[>>>>>]<<<-<<<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[[>>>>>]>>>>>>>[>>>>>]<<<<<-<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[[>>>>>]>>>>>>>[>>>>>]+<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[>>>+[[-<<<<<+>>>>>]<<<<<<<<+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[>>>+<<<-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[>>>-<<<-][-]+>>>]<<<-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[[>>>>>]>>>>>>>[>>>>>]<<<[-<+>]<[-<<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>>>>+<<<[>>>>>]
>>>>>>>[>>>>>]<<<+<]<<<-[+<<<<<-]+<<<<<<<<<<<<<<<[<<<<<]>>>>>>>>>+<[->[-]<]>[-<+>]<[[-]<<<->>>>>>>>>+[<<-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[>>>>-<<<<-][-]+>>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>-<+>[<<->>[-<+>]]<[->+<]<[>>>>+<<<<-][-]+[-]>>>>[->>>>>+<<<<<]>>>>>]<<<<<<]<<<[-]+-][-]+[-]>>>>>]

Pfiuuuu. Hopefully we won’t have to debug anything in the code above.

Assuming you have already compiled GCC with the Brainfuck frontend (there are instructions on the GitHub project page on how to do it) and that you are able to compile Brainfuck files:


$ gcc brainfuck-interpreter.bf -o brainfuck-interpreter

At this point you should have an executable, brainfuck-interpreter, in the current directory. It can be used to interpret a simpler program; let’s try the usual “Hello World!” stuff. The code is short enough that we can feed it straight from stdin to the interpreter. The terminating NUL byte is very important or the interpreter will just crash: Brainfuck I/O doesn’t handle errors or EOF 🙂


$ printf "++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.\0" | ./brainfuck-interpreter
Hello World!

ostree-docker-builder

rpm-ostree, used together with OStree, is a powerful tool for generating immutable images for .rpm-based systems, so why not use it to generate Docker images as well?

rpm-ostree already supports generating a Docker container tree that can be fed to Docker almost as is; ostree-docker-builder is a new tool to make this task simpler.

The following JSON description is enough to create an Emacs container with rpm-ostree, based on Fedora 22:


{  
    "ref": "fedora-atomic/f22/x86_64/emacs",  
    "repos": ["fedora-22"],  
    "container": true,  
    "packages": ["emacs"]  
}

It references the fedora-22 repo. Be sure that in the same directory as the .json file there is a .repo file containing the definition for fedora-22, e.g. a fedora-22.repo that looks like:


[fedora-22]  
name=Fedora 22 $basearch  
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=fedora-22&arch=$basearch  
enabled=0  
gpgcheck=0  
metadata_expire=1d

These two files are enough to generate an OStree commit, assuming the first file is called emacs.json and that repo is a valid OStree repository:


sudo rpm-ostree --repo=repo compose tree emacs.json

At this point, once we get a commit for the fedora-atomic/f22/x86_64/emacs branch we can use ostree-docker-builder to create the Docker image. The code for the program is on github at: https://github.com/giuseppe/ostree-docker-builder.


sudo ostree-docker-builder --repo=repo -c emacs fedora-atomic/f22/x86_64/emacs --entrypoint=/usr/bin/emacs-24.5

ostree-docker-builder accepts some arguments that change how the Dockerfile used to build the Docker image is generated.

In the example above we use --entrypoint to set the ENTRYPOINT in the Dockerfile; more information can be found in the Docker documentation: https://docs.docker.com/reference/builder/

If everything works as expected, the image should be ready after that command and we can run it as:


sudo docker run --rm -ti emacs

Repeating the same command twice won’t have any effect if there is no new OStree commit available, unless --force is specified; ostree-docker-builder stores this information in the image itself using a Docker LABEL.

Tagging

Another feature is the automatic tagging of images: when --tag is specified, the built image is tagged with the name provided as its argument and automatically pushed to the configured Docker registry.
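
For example, combining it with the emacs invocation above might look like this (the registry name is hypothetical):


sudo ostree-docker-builder --repo=repo -c emacs fedora-atomic/f22/x86_64/emacs --entrypoint=/usr/bin/emacs-24.5 --tag registry.example.com/emacs:latest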

Advantages of ostree-docker-builder

There are mainly two advantages in using ostree-docker-builder instead of a Dockerfile:

  • The same tool to generate both the OS image and the containers
  • Use OStree to track what files were changed, added or removed. If there are no differences then no image is created

Special thanks to Colin Walters for his suggestions while experimenting with ostree-docker-builder and on how to take advantage of the OStree checksum.

Summer of Code 2015 for wget

Coming as a surprise, this year we have got 4 students to work full-time on wget during the summer: more than all the students who have ever worked on wget during previous editions of the Summer of Code!

The accepted projects cover different areas: security, testing, new protocols and some speed-up optimizations. Our hope is that we will be able to use the new pieces as soon as possible; this is why we ask students to keep their code rebased on top of the current wget development version at all times.

  • Improve Wget’s security: add HSTS support to wget and enhance FTP security through FTPS.
  • Speed up Wget’s Download Mechanism: support two performance enhancements, conditional GET requests and TCP Fast Open.
  • HTTP/2 support: basic HTTP/2 support on top of nghttp2.
  • FTP Server Test Suite: augment the test suite with FTP tests.

Create a QCOW2 image for Fedora 22 Atomic

This tutorial shows how to create a QCOW2 image that can be directly imported via virt-install to test out Fedora 22 Atomic, starting from a custom OStree repo.

To create the image, we are going to use both rpm-ostree and rpm-ostree-toolbox. Ensure they are installed, as well as Docker, libvirtd and vagrant-libvirt.
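
On Fedora, installing the needed tools should look roughly like this (package names may differ between releases):


# install the tools and start the services they rely on
sudo dnf install -y rpm-ostree rpm-ostree-toolbox docker libvirt vagrant-libvirt
sudo systemctl start libvirtd docker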

The first phase consists of generating the OStree repo that is going to be used by the image. We can directly use the files from the fedora-atomic project:


git clone --branch=f22 https://git.fedorahosted.org/git/fedora-atomic.git
ostree --repo=repo init --mode=archive-z2
rpm-ostree-toolbox treecompose -c fedora-atomic/config.ini --ostreerepo repo # Creates a new repo

At the end of this process, we have a new OStree repository which contains the tree of a Fedora 22 Cloud.

The second phase is trickier and requires some manual customization. It also requires Docker, vagrant-libvirt and libvirtd.

To use the repository that we have created in the first phase, we need to spawn an OStree HTTP daemon that will serve the files.

We do it by running:


cd repo
ostree trivial-httpd -d -p - # It will print the TCP port it is listening on

As the comment above says, OStree will print to stdout the port the server is listening on. Take note of it, as we will need it later.

We are almost ready to create the QCOW2 image. For the unattended installation of the operating system, we need the fedora-cloud-atomic.ks file from the spin-kickstarts.git project.


git clone --branch=f22 https://git.fedorahosted.org/git/spin-kickstarts.git
cp spin-kickstarts/fedora-cloud-atomic.ks .

At this point, modify ./fedora-cloud-atomic.ks to point to our OStree repository.

This is how I modified the file to point to the OStree repo accessible at http://192.168.125.225:37375/. Use the correct settings for your machine, and the port used to serve the OStree repository that we noted before.


--- spin-kickstarts/fedora-cloud-atomic.ks	2015-04-17 15:41:17.124330230 +0200
+++ fedora-cloud-atomic.ks	2015-04-20 00:52:12.990728422 +0200
@@ -33,14 +33,14 @@
 logvol / --size=3000 --fstype="xfs" --name=root --vgname=atomicos
 
 # Equivalent of %include fedora-repo.ks
-ostreesetup --nogpg --osname=fedora-atomic --remote=fedora-atomic --url=http://kojipkgs.fedoraproject.org/mash/atomic/22/ --ref=fedora-atomic/f22/x86_64/docker-host
+ostreesetup --nogpg --osname=fedora-atomic --remote=fedora-atomic --url=http://192.168.125.225:37375/ --ref=fedora-atomic/f22/x86_64/docker-host
 
 reboot
 
 %post --erroronfail
 # See https://github.com/projectatomic/rpm-ostree/issues/42
 ostree remote delete fedora-atomic
-ostree remote add --set=gpg-verify=false fedora-atomic 'http://dl.fedoraproject.org/pub/fedora/linux/atomic/22/'
+ostree remote add --set=gpg-verify=false fedora-atomic 'http://192.168.125.225:37375/'
 
 # older versions of livecd-tools do not follow "rootpw --lock" line above
 # https://bugzilla.redhat.com/show_bug.cgi?id=964299

Now we are really ready to generate the image:


rpm-ostree-toolbox imagefactory -c fedora-atomic/config.ini -o output -i kvm -k fedora-cloud-atomic.ks --tdl fedora-atomic/fedora-atomic-22.tdl --ostreerepo repo

If everything goes as expected, the image file will be under output/images.


ls output/images
fedora-atomic-f22.qcow2.gz  SHA256SUMS

At this point it can be imported through virt-install as follows (atomic0cidata.iso is a CD ISO which contains the cloud-init initialization data):


gunzip output/images/fedora-atomic-f22.qcow2.gz
virt-install --name f22-cloud --ram 2048 --import --disk path=output/images/fedora-atomic-f22.qcow2 --os-type=fedora-21 --graphics spice --disk path=atomic0cidata.iso,device=cdrom

This command will create a new VM named f22-cloud with 2G of RAM using the QCOW2 image we’ve generated.
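
If you don’t have the cloud-init ISO yet, one common way to create it is with genisoimage, from NoCloud user-data and meta-data files assumed to exist in the current directory (the volume label must be cidata):


# build the cloud-init seed ISO used as the CD-ROM above
genisoimage -output atomic0cidata.iso -volid cidata -joliet -rock user-data meta-data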

Have fun!

How to deploy a WordPress Docker container using docker-compose

These are the steps to set up the current website in a Docker container:



wget -O- https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

mkdir wordpress
cd wordpress

Then create a file fig.yml which contains:


db:
  image: mysql:5.5
  environment:
    MYSQL_ROOT_PASSWORD: "A VERY STRONG PASSWORD"
web:
  image: wordpress:latest
  ports:
    - "80:80"
  links:
    - db:mysql

This description takes advantage of a Docker feature to bind together two or more containers: Docker links. We use it to make the WordPress container depend on another container running MySQL 5.5.

The docker-compose up command will read fig.yml, download the needed data and deploy the two containers.


/usr/local/bin/docker-compose up

Et voilà: port 80 of the host running the containers is forwarded to port 80 of the WordPress container.
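
To quickly verify that WordPress is answering on the host (just a sanity check; assumes curl is installed):


curl -I http://localhost/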