<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://efrecon.github.io/atom.xml" rel="self" type="application/atom+xml" /><link href="https://efrecon.github.io/" rel="alternate" type="text/html" /><updated>2025-11-22T18:50:44+00:00</updated><id>https://efrecon.github.io/atom.xml</id><title type="html">DIT</title><subtitle>A random mix of Docker, IoT and Tcl</subtitle><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><entry><title type="html">Careful Docker Cleaning</title><link href="https://efrecon.github.io/careful-docker-cleaning/" rel="alternate" type="text/html" title="Careful Docker Cleaning" /><published>2020-07-14T00:00:00+00:00</published><updated>2020-07-14T00:00:00+00:00</updated><id>https://efrecon.github.io/careful-docker-cleaning</id><content type="html" xml:base="https://efrecon.github.io/careful-docker-cleaning/"><![CDATA[<p>As time goes by, Docker will leave “remains” behind and it is good practice to
clean away old cruft from time to time. While this happens mostly on dev
machines, where goals and requirements shift quickly, it also happens on
production servers: unused old images are left behind, as are anonymous or
named volumes remaining from a debugging session, etc. The documented solution is to run
<a href="https://docs.docker.com/engine/reference/commandline/system_prune/"><code class="language-plaintext highlighter-rouge">docker system prune</code></a> from time to time.</p>
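<p>For reference, Docker's built-in commands already accept time-based filters. The sketch below is illustrative; the function name and the 24-hour cutoff are arbitrary choices, not a recommendation:</p>

```shell
# Conservative use of the built-in pruning commands with a time filter.
# The "until" filter is real docker syntax; the 24h value is an example.
prune_old() {
    filter="until=${1:-24h}"
    docker container prune --force --filter "$filter"
    docker image prune --force --filter "$filter"
    # Volumes are only pruned when --volumes is passed to "docker system
    # prune": that is the point of no return mentioned in this post.
}
# prune_old 24h
```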

<p>Whether you are operating <a href="http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/">cattle or pets</a>, you will probably want
to automate Docker resource cleanup. However, <a href="https://docs.docker.com/engine/reference/commandline/system_prune/">prune</a> is a point of no return:
it can even lead to data loss when pruning volumes. The
open-source <a href="https://github.com/YanziNetworks/docker-prune">docker-prune</a> project provides an alternative that is more
conservative in the pruning decisions it takes. For recurring operations, I
recommend <a href="https://github.com/efrecon/dockron">dockron</a>.</p>

<p><a href="https://github.com/YanziNetworks/docker-prune">docker-prune</a> is a POSIX shell script that prunes exited containers,
dangling volumes and dangling images, with the following twists:</p>

<h2 id="containers">Containers</h2>

<p>All exited, dead and <em>stale</em> containers will be removed, with filtering
capabilities similar to those of the <a href="https://docs.docker.com/engine/reference/commandline/container_prune/">prune</a> command. Containers whose
names were automatically generated by Docker at creation time are selected by
default. In addition, the script can restrict removal to a subset of the
containers.</p>

<p>Exited and dead containers are as reported by Docker. Stale containers are
containers that are created but have not moved to any other state after a given
timeout.</p>
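<p>As an illustration of the idea (the script's actual implementation differs, and the names below are made up), the staleness cutoff can be computed as follows:</p>

```shell
# Illustrative sketch: compute the epoch second before which a container
# still in the "created" state counts as stale.
stale_cutoff() {
    timeout=${1:-3600}   # seconds a "created" container may linger
    now=$(date +%s)
    echo $((now - timeout))
}
# Containers still in the created state can then be listed with:
#   docker ps -a --filter status=created --format '{{.ID}} {{.CreatedAt}}'
# and their creation time compared against "$(stale_cutoff 3600)".
```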

<p>In addition, it is possible to forcibly remove ancient, but still running,
containers. As this is a dangerous operation, it is turned off by
default.</p>

<h2 id="images">Images</h2>

<p>All dangling and orphan images will be removed. This also provides filtering
capabilities similar to those of the <a href="https://docs.docker.com/engine/reference/commandline/image_prune/">prune</a> command. When removing images, the
script only considers images that were created a long time ago (6 months by
default).</p>

<p>Dangling images are layers that have no relationship to any tagged images.
Orphan images are images that are not used by any container, whatever state the
container is in (including the created and exited states).</p>
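<p>For reference, Docker's own filters can express part of this. The sketch below shows the 6-month default converted to hours for the <code>until</code> filter; the conversion is a rough approximation:</p>

```shell
# Dangling images can be listed with docker's built-in filter:
#   docker images --filter dangling=true --quiet
# Roughly six months expressed in hours, for docker's "until" filter:
SIX_MONTHS_H=$((6 * 30 * 24))
echo "until=${SIX_MONTHS_H}h"   # prints: until=4320h
# docker image prune --all --filter "until=${SIX_MONTHS_H}h"
```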

<h2 id="volumes">Volumes</h2>

<p>All “empty” dangling volumes will be removed. The script counts the files
inside each volume, only removing those with fewer files than an optional
threshold, which defaults to <code class="language-plaintext highlighter-rouge">0</code>. In addition, the script is able to
focus on subsets of the dangling volumes. Volumes whose names were
automatically generated are selected by default. The file count is obtained by
mounting the volumes into a temporary <a href="https://hub.docker.com/_/busybox">busybox</a> container.</p>
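<p>The counting technique can be sketched as follows; this mirrors the description above rather than the script's exact code, and the volume name in the usage example is a placeholder:</p>

```shell
# Count files inside a named volume by mounting it (read-only) into a
# throw-away busybox container.
volume_file_count() {
    docker run --rm -v "${1}:/data:ro" busybox \
        sh -c 'find /data -type f | wc -l'
}
# Example: remove the volume "myvol" only when it holds no files at all:
#   [ "$(volume_file_count myvol)" -eq 0 ] && docker volume rm myvol
```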

<h2 id="fun-fact">Fun Fact</h2>

<p>The script dynamically parses the official Go <a href="https://github.com/moby/moby/blob/master/pkg/namesgenerator/names-generator.go">implementation</a> to detect
containers whose names were automatically generated. Out of this implementation,
<a href="https://github.com/moby/moby/blob/3f3676484459a9f5ec287f09735cc018a74f3cc5/pkg/namesgenerator/names-generator.go#L844">this</a> is probably the best line of code ever written!</p>]]></content><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><summary type="html"><![CDATA[As time goes by, Docker will leave “remains” behind and it is good practice to clean away old cruft from time to time. While this happens mostly of dev machines, as goals and requirements shift quickly, it also happens on production servers. Unused old images will be left behind, perhaps dynamic or named volumes issuing from a debugging session, etc. The documented solution to this is to run docker system prune from time to time.]]></summary></entry><entry><title type="html">Docker Volumes on the cheap</title><link href="https://efrecon.github.io/docker-volumes-on-the-cheap/" rel="alternate" type="text/html" title="Docker Volumes on the cheap" /><published>2019-01-31T00:00:00+00:00</published><updated>2019-01-31T00:00:00+00:00</updated><id>https://efrecon.github.io/docker-volumes-on-the-cheap</id><content type="html" xml:base="https://efrecon.github.io/docker-volumes-on-the-cheap/"><![CDATA[<p>Docker <a href="https://docs.docker.com/storage/">volumes</a> plugins have an <a href="https://docs.docker.com/engine/extend/plugins_volume/#volume-plugin-protocol">API</a>. When creating a volume plugin, in
addition to implementing the API, you will also have to implement the volume
<a href="https://docs.docker.com/engine/extend/#developing-a-plugin">API</a>. On the other hand, there are a large number of <a href="https://github.com/libfuse/libfuse">fuse</a>-based
implementations for remote storage systems of various sorts. This post is about
leveraging these implementations as pseudo-volumes, performing the mount
in a container. This comes at the cost and complexity of sharing a well-known
directory on the host between the container that performs the mount and
the container(s) that use the mount.</p>

<h2 id="capabilities">Capabilities</h2>

<p>The first step is to grant the container that performs the mount enough rights
for other processes on the host (outside that container) to be able to access
the files and directories. This can be achieved with the following options to
docker <a href="https://docs.docker.com/engine/reference/run/">run</a>:</p>

<ul>
  <li>Give the fuse device to your container through <code class="language-plaintext highlighter-rouge">--device /dev/fuse</code></li>
  <li>Raise its capabilities to bypass some of the security layers through:
<code class="language-plaintext highlighter-rouge">--cap-add SYS_ADMIN</code> and <code class="language-plaintext highlighter-rouge">--security-opt "apparmor=unconfined"</code></li>
  <li>
    <p>Mount the local host file system into the container in a way that it can be
shared back with other processes and containers using
<a href="https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation"><code class="language-plaintext highlighter-rouge">rshared</code></a>.</p>

  </li>
</ul>
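<p>Put together, a complete invocation could look like the following sketch, where the image name <code>fuse-mounter</code> and the host path <code>/mnt/remote</code> are placeholders:</p>

```shell
# The options discussed above, collected into one (hypothetical) invocation.
# "fuse-mounter" and /mnt/remote are placeholders, not real names.
run_mounter() {
    docker run -d \
        --device /dev/fuse \
        --cap-add SYS_ADMIN \
        --security-opt "apparmor=unconfined" \
        --mount type=bind,source=/mnt/remote,target=/mnt/remote,bind-propagation=rshared \
        fuse-mounter
}
# run_mounter
```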

<h2 id="unmounting">Unmounting</h2>

<p>You also want to properly unmount when the container gracefully terminates. This
is achieved through a combination of <a href="https://github.com/krallin/tini">tini</a> and a proper <code class="language-plaintext highlighter-rouge">trap</code> in the shell. The
reason for this is that you are likely to interface the <a href="https://github.com/libfuse/libfuse">fuse</a> implementation
through some sort of <a href="https://docs.docker.com/engine/reference/builder/#entrypoint">entrypoint</a>.</p>

<h3 id="tini">tini</h3>

<p>Use <a href="https://github.com/krallin/tini">tini</a> as the main <a href="https://docs.docker.com/engine/reference/builder/#entrypoint">entrypoint</a> in your <code class="language-plaintext highlighter-rouge">Dockerfile</code> and arrange to give it
the <code class="language-plaintext highlighter-rouge">-g</code> option so that it properly propagates signals to the entire mounting
system in the container. <a href="https://github.com/krallin/tini">tini</a> exists as an Alpine <a href="https://pkgs.alpinelinux.org/package/edge/community/x86_64/tini">package</a> which can be
helpful in keeping down the size of the image.</p>
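<p>A minimal <code>Dockerfile</code> fragment along these lines, where the entrypoint script name is a placeholder:</p>

```dockerfile
FROM alpine:3.8
# tini from the Alpine package keeps the image small
RUN apk add --no-cache tini
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
# -g makes tini forward signals to the entire process group
ENTRYPOINT ["/sbin/tini", "-g", "--", "/usr/local/bin/entrypoint.sh"]
```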

<h3 id="cleanup">Cleanup</h3>

<p>In order to clean up properly, you can arrange for a shell to be called at the
end of your <a href="https://docs.docker.com/engine/reference/builder/#entrypoint">entrypoint</a> implementation, perhaps after having checked that the
mount performed as it should. This shell will trap the <code class="language-plaintext highlighter-rouge">INT</code> and <code class="language-plaintext highlighter-rouge">TERM</code> signals,
unmount the volume (lazily) and propagate the signals to the mount process. Without
<a href="https://github.com/krallin/tini">tini</a>, these signals would never have reached your entrypoint, by Docker
construction and design.</p>
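<p>A sketch of the tail end of such an entrypoint; the mount point and variable names are illustrative, not the actual images' code:</p>

```shell
#!/bin/sh
# Illustrative entrypoint tail: MNT and MOUNT_PID are placeholders for
# your fuse setup; the mounter itself would be started in the background.
MNT=/mnt/remote
cleanup() {
    umount -l "$MNT" 2>/dev/null || true         # lazy unmount
    [ -n "$MOUNT_PID" ] && kill "$MOUNT_PID" 2>/dev/null
    :
}
trap cleanup INT TERM
# your-fuse-mounter "$MNT" & MOUNT_PID=$!
wait
```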

<h2 id="examples">Examples</h2>

<p>I have made available two example images following these principles:</p>

<ul>
  <li><a href="https://cloud.docker.com/u/efrecon/repository/docker/efrecon/webdav-client">webdav-client</a> mounts WebDAV resources and exposes the full potential of
<a href="http://savannah.nongnu.org/projects/davfs2">davfs2</a>, supporting all the options that <a href="http://savannah.nongnu.org/projects/davfs2">davfs2</a> accepts. This
is in contrast to the <a href="https://github.com/fentas/docker-volume-davfs">davfs</a> volume plugin, which also uses <a href="http://savannah.nongnu.org/projects/davfs2">davfs2</a> under the hood
but misses some of the configuration options.</li>
  <li>
    <p><a href="https://cloud.docker.com/u/efrecon/repository/docker/efrecon/s3fs">s3fs</a> mounts a remote S3 bucket using the fuse <a href="https://github.com/s3fs-fuse/s3fs-fuse">implementation</a> of the same
name. It supports all the official versions of the original fuse project
through <a href="https://cloud.docker.com/repository/docker/efrecon/s3fs/tags">tags</a>.</p>

  </li>
</ul>]]></content><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><summary type="html"><![CDATA[Docker volumes plugins have an API. When creating a volume plugin, in addition to implement the API, you will also have to implement the volume API. On the other hand, there are a large number of fuse-based implementations for remote storages of various sorts. This post is about leveraging these implementations as pseudo-volumes, while performing the mount in a container. This comes at the cost and complexity of sharing a well-known directory on the host between the container that will perform the mount, with the container(s) that use the mount. Capabilities The first step to manage and understand is how to give away enough rights to the container that will perform the mount so that other processes on the host (outside that container) will be able to access the files and directories. This can be achieved through using the following options for docker run: Give the fuse device to your container through --device /dev/fuse Raise its capabilities to bypass some of the security layers through: --cap-add SYS_ADMIN and --security-opt "apparmor=unconfined" Mount the local host file system into the container in a way that it can be shared back with other processes and containers using rshared. Unmounting You also want to properly unmount when the container gracefully terminates. This is achieved through a combination of tini and proper trap in the shell. The reason for this is that your are likely to interface the fuse implementation through some sort of entrypoint tini Use tini as the main entrypoint in your Dockerfile and arrange to give it the -g option so that it properly propagates signals to the entire mounting system in the container. tini exists as an Alpine package which can be helpful in keeping down the size of the image. 
Cleanup In order to clean properly, you can arrange for a shell to be called at the end of your entrypoint implementation, perhaps after having checked that the mount performed as it should. This shell will trap the INT and TERM signals, unmount the volume (lazily) and propagate these to the mount process. Without tini, you would never have received these signals, by Docker construction and design. Examples I have made available two example images following these principles: webdav-client mounts WebDAV resources and leverages the full potential of davfs2, being able to leverage all the options supported by davfs2. This is in contrast to the davfs volume which also uses davfs2 under the hood, but misses some of the configuration options. s3fs mounts a remote S3 bucket using the fuse implementation with the same name. It supports all the official versions of the original fuse projects through tags]]></summary></entry><entry><title type="html">All tags of a Docker (Hub) image</title><link href="https://efrecon.github.io/all-image-tags/" rel="alternate" type="text/html" title="All tags of a Docker (Hub) image" /><published>2019-01-09T00:00:00+00:00</published><updated>2019-01-09T00:00:00+00:00</updated><id>https://efrecon.github.io/all-image-tags</id><content type="html" xml:base="https://efrecon.github.io/all-image-tags/"><![CDATA[<p>The registry behind the Docker <a href="https://cloud.docker.com/">hub</a> has an <a href="https://docs.docker.com/registry/spec/api/">API</a>. You can use this API to list
out all the tags for a given existing image using code similar to the following.
This heavily deviates from the <a href="http://www.googlinux.com/list-all-tags-of-docker-image/index.html">original</a> so as to work with either <code class="language-plaintext highlighter-rouge">curl</code>
(preferred) or <code class="language-plaintext highlighter-rouge">wget</code>, and to avoid <code class="language-plaintext highlighter-rouge">jq</code>, relying instead on standard
Linux command-line tooling.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># This is the image that you wish to list the tags for</span>
<span class="nv">im</span><span class="o">=</span><span class="s2">"abiosoft/caddy"</span>

<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">"</span><span class="nv">$im</span><span class="s2">"</span> | <span class="nb">grep</span> <span class="nt">-o</span> <span class="s1">'/'</span><span class="si">)</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">hub</span><span class="o">=</span><span class="s2">"https://registry.hub.docker.com/v2/repositories/library/</span><span class="nv">$im</span><span class="s2">/tags/"</span>
<span class="k">else
    </span><span class="nv">hub</span><span class="o">=</span><span class="s2">"https://registry.hub.docker.com/v2/repositories/</span><span class="nv">$im</span><span class="s2">/tags/"</span>
<span class="k">fi</span>

<span class="c"># Get number of pages</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="si">$(</span><span class="nb">command</span> <span class="nt">-v</span> curl<span class="si">)</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">first</span><span class="o">=</span><span class="si">$(</span>wget <span class="nt">-q</span> <span class="nt">-O</span> - <span class="nv">$hub</span><span class="si">)</span>
<span class="k">else
    </span><span class="nv">first</span><span class="o">=</span><span class="si">$(</span>curl <span class="nt">-sL</span> <span class="nv">$hub</span><span class="si">)</span>
<span class="k">fi
</span><span class="nv">count</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$first</span> | <span class="nb">sed</span> <span class="nt">-E</span> <span class="s1">'s/\{\s*"count":\s*([0-9]+).*/\1/'</span><span class="si">)</span>
<span class="nv">pagination</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$first</span> | <span class="nb">grep</span> <span class="nt">-Eo</span> <span class="s1">'"name":\s*"[a-zA-Z0-9_.-]+"'</span> | <span class="nb">wc</span> <span class="nt">-l</span><span class="si">)</span>
<span class="nv">pages</span><span class="o">=</span><span class="si">$(</span><span class="nb">expr</span> <span class="nv">$count</span> / <span class="nv">$pagination</span> + 1<span class="si">)</span>

<span class="c"># Get all tags one page after the other</span>
<span class="nv">tags</span><span class="o">=</span>
<span class="nv">i</span><span class="o">=</span>0
<span class="k">while</span> <span class="o">[</span> <span class="nv">$i</span> <span class="nt">-le</span> <span class="nv">$pages</span> <span class="o">]</span> <span class="p">;</span>
<span class="k">do
    </span><span class="nv">i</span><span class="o">=</span><span class="si">$(</span><span class="nb">expr</span> <span class="nv">$i</span> + 1<span class="si">)</span>
    <span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="si">$(</span><span class="nb">command</span> <span class="nt">-v</span> curl<span class="si">)</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
        </span><span class="nv">page</span><span class="o">=</span><span class="si">$(</span>wget <span class="nt">-q</span> <span class="nt">-O</span> - <span class="s2">"</span><span class="nv">$hub</span><span class="s2">?page=</span><span class="nv">$i</span><span class="s2">"</span><span class="si">)</span>
    <span class="k">else
        </span><span class="nv">page</span><span class="o">=</span><span class="si">$(</span>curl <span class="nt">-sL</span> <span class="s2">"</span><span class="nv">$hub</span><span class="s2">?page=</span><span class="nv">$i</span><span class="s2">"</span><span class="si">)</span>
    <span class="k">fi
    </span><span class="nv">ptags</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$page</span> | <span class="nb">grep</span> <span class="nt">-Eo</span> <span class="s1">'"name":\s*"[a-zA-Z0-9_.-]+"'</span> | <span class="nb">sed</span> <span class="nt">-E</span> <span class="s1">'s/"name":\s*"([a-zA-Z0-9_.-]+)"/\1/'</span><span class="si">)</span>
    <span class="nv">tags</span><span class="o">=</span><span class="s2">"</span><span class="k">${</span><span class="nv">ptags</span><span class="k">}</span><span class="s2"> </span><span class="nv">$tags</span><span class="s2">"</span>
<span class="k">done</span>

<span class="c"># Once here, the variable tags should contain the list of all tags for the image.</span>
</code></pre></div></div>

<p>This code can be used in <a href="https://docs.docker.com/docker-hub/builds/advanced/#custom-build-phase-hooks">hooks</a> to write complex build/push instructions. Such
instructions can be used to automatically enhance a standard image with
additional features in a future-proof way: watching the original image ensures
that your enhanced version gets rebuilt for every new version of it.</p>
Such instructions can be used to automatically enhanced a standard image with additional features in a future-proof way: for every new version of the original image, watching it will ensure that your enhanced version will get built.]]></summary></entry><entry><title type="html">Solving Dependencies in Docker Swarm</title><link href="https://efrecon.github.io/dependencies-docker-swarm/" rel="alternate" type="text/html" title="Solving Dependencies in Docker Swarm" /><published>2018-06-11T00:00:00+00:00</published><updated>2018-06-11T00:00:00+00:00</updated><id>https://efrecon.github.io/dependencies-docker-swarm</id><content type="html" xml:base="https://efrecon.github.io/dependencies-docker-swarm/"><![CDATA[<p>When/If moving from Docker Compose files to Stack files in Docker Swarm, you
might have problems solving dependencies and especially starting order as
<a href="https://docs.docker.com/compose/compose-file/#depends_on">depends_on</a> is not
supported when
<a href="https://docs.docker.com/engine/reference/commandline/stack_deploy/">deploying</a>
a stack in swarm mode.  One solution is to arrange for your services to
implement a network server, and to wait for its port to be open and responding
before going on with running a given service.</p>

<p>Let’s run roughly through the example of running <a href="https://grafana.com/">grafana</a>
in a scalable way, for example using Redis and Postgres for, respectively,
runtime and persistent data.  To make this more complex, let’s suppose that the
TOML configuration file of grafana is the result of another service.  A regular
Grafana Docker installation has neither an entrypoint nor a run command, as it
is mostly <a href="http://docs.grafana.org/installation/docker/">configured</a> using a
combination of environment variables and the TOML configuration file.  To
arrange for the other services to be ready, you could have the following
(snippet) entrypoint:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="na">entrypoint</span><span class="pi">:</span> <span class="pi">&gt;-</span>
      <span class="s">wait-for.sh pg_grafana:5432 -t 120 -v --</span>
        <span class="s">wait-for.sh redis:6379 -t 120 -v --</span>
          <span class="s">wait-for.sh grafana-init:8080 -t 120 -v --</span>
            <span class="s">/run.sh</span>
</code></pre></div></div>

<p>In that example, <code class="language-plaintext highlighter-rouge">pg_grafana</code>, <code class="language-plaintext highlighter-rouge">redis</code> and <code class="language-plaintext highlighter-rouge">grafana-init</code> are the names of the
services that implement (respectively) the postgres database, Redis and the
initialisation of the TOML configuration. Creating them is (almost) out of the
scope of this post… The implementation for
<a href="https://gist.github.com/efrecon/86456960e2110b287632fd7f42c1cd31">wait-for.sh</a>
is available as a gist. It deviates slightly from the
<a href="https://github.com/Eficode/wait-for">original</a> in that it prefers <code class="language-plaintext highlighter-rouge">nc</code> to
establish connections to remote servers, but is also able to use pure bash
<a href="https://www.tldp.org/LDP/abs/html/devref1.html">constructs</a> whenever <code class="language-plaintext highlighter-rouge">bash</code> is
available.</p>
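<p>As an illustration of the idea (not the gist's actual code), a pure-bash variant using the <code>/dev/tcp</code> construct could look like this; the function name is made up:</p>

```shell
# Minimal sketch of the wait-for idea using bash's /dev/tcp construct
# (the real gist prefers nc when it is available).
wait_for() {
    host=$1; port=$2; timeout=${3:-120}
    start=$(date +%s)
    until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
        if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
            echo "timeout waiting for ${host}:${port}" >&2
            return 1
        fi
        sleep 1
    done
}
# Usage, mirroring the entrypoint above:
#   wait_for pg_grafana 5432 && wait_for redis 6379 && exec /run.sh
```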

<p><code class="language-plaintext highlighter-rouge">grafana-init</code> is a bit special as it typically generates a configuration file
depending on a number of parameters.  In a regular installation, such a service
would “die” once it had created the configuration file. It can instead
wait forever once it has performed its initialisation job. I have experimented
with relaying another external service with <code class="language-plaintext highlighter-rouge">socat</code>, as this is available in
most distributions. I typically call the TOML generator with a command-line
option similar to <code class="language-plaintext highlighter-rouge">-r 8080:icanhazip.com:80</code>; you will notice that <code class="language-plaintext highlighter-rouge">8080</code> is the
port that the grafana service was waiting for. The implementation is as follows,
provided the content of the <code class="language-plaintext highlighter-rouge">-r</code> switch is contained in the variable <code class="language-plaintext highlighter-rouge">RELAY</code>.
Even if the remote service was not available, initialisation would still work as
<code class="language-plaintext highlighter-rouge">socat</code> would still respond on <code class="language-plaintext highlighter-rouge">8080</code> to incoming client requests.</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$RELAY</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">lport</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">RELAY</span><span class="k">}</span><span class="s2">"</span>|cut <span class="nt">-d</span>: <span class="nt">-f1</span><span class="si">)</span>
    <span class="nv">remote</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">RELAY</span><span class="k">}</span><span class="s2">"</span>|cut <span class="nt">-d</span>: <span class="nt">-f2</span><span class="si">)</span>
    <span class="nv">rport</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">RELAY</span><span class="k">}</span><span class="s2">"</span>|cut <span class="nt">-d</span>: <span class="nt">-f3</span><span class="si">)</span>
    socat TCP4-LISTEN:<span class="k">${</span><span class="nv">lport</span><span class="k">}</span>,su<span class="o">=</span>nobody,fork,reuseaddr TCP4:<span class="k">${</span><span class="nv">remote</span><span class="k">}</span>:<span class="k">${</span><span class="nv">rport</span><span class="k">}</span>
<span class="k">fi</span>
</code></pre></div></div>]]></content><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><summary type="html"><![CDATA[When/If moving from Docker Compose files to Stack files in Docker Swarm, you might have problems solving dependencies and especially starting order as depends_on is not supported when deploying a stack in swarm mode. One solution is to arrange for your services to implement a network server and wait for this port to be opened and respond before going on with running a given service.]]></summary></entry><entry><title type="html">Docker Secrets in Grafana</title><link href="https://efrecon.github.io/docker-secrets-with-grafana/" rel="alternate" type="text/html" title="Docker Secrets in Grafana" /><published>2018-06-11T00:00:00+00:00</published><updated>2018-06-11T00:00:00+00:00</updated><id>https://efrecon.github.io/docker-secrets-with-grafana</id><content type="html" xml:base="https://efrecon.github.io/docker-secrets-with-grafana/"><![CDATA[<p>As my <a href="https://github.com/grafana/grafana-docker/pull/166">PR</a> has now been
merged, the latest official Grafana Docker
<a href="https://hub.docker.com/r/grafana/grafana/">image</a> (still in beta) now has
support for Docker secrets.  Using this version, you should be able to write
stack files similar to the following (shortened) one, provided you have the
password for the main administration user stored in the file at
<code class="language-plaintext highlighter-rouge">config/grafana/admin.pwd</code>.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">version</span><span class="pi">:</span> <span class="s1">'</span><span class="s">3.3'</span>

<span class="na">services</span><span class="pi">:</span>
  <span class="na">grafana</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">grafana/grafana:5.2.0-beta1</span>
    <span class="na">environment</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">GF_SECURITY_ADMIN_PASSWORD_FILE=/run/secrets/admin.pwd</span>
    <span class="na">deploy</span><span class="pi">:</span>
      <span class="na">restart_policy</span><span class="pi">:</span>
        <span class="na">delay</span><span class="pi">:</span> <span class="s">10s</span>
        <span class="na">max_attempts</span><span class="pi">:</span> <span class="m">10</span>
        <span class="na">window</span><span class="pi">:</span> <span class="s">60s</span>
      <span class="na">replicas</span><span class="pi">:</span> <span class="m">1</span>
    <span class="na">logging</span><span class="pi">:</span>
      <span class="na">driver</span><span class="pi">:</span> <span class="s2">"</span><span class="s">json-file"</span>
      <span class="na">options</span><span class="pi">:</span>
        <span class="na">max-size</span><span class="pi">:</span> <span class="s2">"</span><span class="s">1m"</span>
        <span class="na">max-file</span><span class="pi">:</span> <span class="s2">"</span><span class="s">10"</span>
    <span class="na">healthcheck</span><span class="pi">:</span>
      <span class="na">test</span><span class="pi">:</span> <span class="s">curl --fail http://localhost:3000/ || exit </span><span class="m">1</span>
      <span class="na">interval</span><span class="pi">:</span> <span class="s">1m</span>
      <span class="na">timeout</span><span class="pi">:</span> <span class="s">10s</span>
      <span class="na">retries</span><span class="pi">:</span> <span class="m">3</span>
    <span class="na">secrets</span><span class="pi">:</span>
      <span class="pi">-</span>
        <span class="na">source</span><span class="pi">:</span> <span class="s">admin-passwd</span>
        <span class="na">target</span><span class="pi">:</span> <span class="s">/run/secrets/admin.pwd</span>
        <span class="na">mode</span><span class="pi">:</span> <span class="m">0444</span>

<span class="na">secrets</span><span class="pi">:</span>
  <span class="na">admin-passwd</span><span class="pi">:</span>
    <span class="na">file</span><span class="pi">:</span> <span class="s">config/grafana/admin.pwd</span>
</code></pre></div></div>
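<p>The <code class="language-plaintext highlighter-rouge">_FILE</code> indirection used for the admin password above can be sketched in a few lines of entrypoint shell. This is a simplified illustration of the convention only, not the actual code from the Grafana image, and the helper name <code class="language-plaintext highlighter-rouge">load_file_env</code> is made up for the example:</p>

```shell
# For every environment variable matching <prefix>*_FILE, read the file it
# points to and export a variable of the same name without the _FILE suffix.
# Simplified sketch: word-splitting breaks on values containing whitespace,
# which is acceptable here since the values are file paths.
load_file_env() {
  prefix=$1
  for pair in $(env | grep "^${prefix}[A-Z_]*_FILE="); do
    var=${pair%%=*}                 # e.g. GF_SECURITY_ADMIN_PASSWORD_FILE
    file=${pair#*=}                 # e.g. /run/secrets/admin.pwd
    if [ -r "$file" ]; then
      export "${var%_FILE}=$(cat "$file")"
      unset "$var"
    fi
  done
}
```

<p>Called as <code class="language-plaintext highlighter-rouge">load_file_env GF_</code> before starting the main process, this turns <code class="language-plaintext highlighter-rouge">GF_SECURITY_ADMIN_PASSWORD_FILE=/run/secrets/admin.pwd</code> into a regular <code class="language-plaintext highlighter-rouge">GF_SECURITY_ADMIN_PASSWORD</code> variable.</p>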

<p>For any environment variable that starts with <code class="language-plaintext highlighter-rouge">GF_</code> and ends with <code class="language-plaintext highlighter-rouge">_FILE</code>, the
Grafana Docker image will read the content of the file that it points at and
arrange for the environment variable with the same name, but without the trailing
<code class="language-plaintext highlighter-rouge">_FILE</code>, to be set before the main grafana process is started. Using a trailing
<code class="language-plaintext highlighter-rouge">_FILE</code> is in line with other official images such as
<a href="https://hub.docker.com/_/wordpress/">wordpress</a>.</p>]]></content><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><summary type="html"><![CDATA[As my PR has now been merged, the latest official Grafana Docker image (still in beta) now has support for Docker secrets. Using this version, you should be able to write stack files similar to the following (shortened) one, provided you have the password for the main administration user stored in the file at config/grafana/admin.pwd.]]></summary></entry><entry><title type="html">Accessing Containers from Remote</title><link href="https://efrecon.github.io/accessing-containers-from-remote/" rel="alternate" type="text/html" title="Accessing Containers from Remote" /><published>2018-04-16T00:00:00+00:00</published><updated>2018-04-16T00:00:00+00:00</updated><id>https://efrecon.github.io/accessing-containers-from-remote</id><content type="html" xml:base="https://efrecon.github.io/accessing-containers-from-remote/"><![CDATA[<p>Sometimes (oftentimes?), when debugging a Docker-based setup, you wish you could
access a service running in a container from the outside. Typically, these
would be services that are internal to the cluster architecture and should not
be exposed for remote access, e.g. a database or a pub/sub queue. Luckily, the
only things you really need are access to a remote SSH server and a willingness
to (temporarily) add an SSH client to your container. Here is how, documented for
an <a href="https://hub.docker.com/_/alpine/">alpine</a> container, but the steps are the
same for containers based on other distributions.</p>

<h2 id="exporting-the-service">Exporting the Service</h2>

<p>Start by jumping into your container using docker
<a href="https://docs.docker.com/engine/reference/commandline/exec/">exec</a>.</p>

<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>docker <span class="nb">exec</span> <span class="nt">-it</span> &lt;containerID&gt; ash
</code></pre></div></div>

<p>Then from within the container, make sure to install an SSH <strong>client</strong>. No need
for a server here.</p>

<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>apk add <span class="nt">--no-cache</span> openssh-client
</code></pre></div></div>

<p>Once installation has succeeded, establish a reverse tunnel to your remote host.
Provided that you have a server, within your container, that listens on port
<code class="language-plaintext highlighter-rouge">8088</code>, a command similar to the following will make the same port available at
the remote host and return to the command line of the container (this is
the meaning of <code class="language-plaintext highlighter-rouge">-fNT</code>). The command will require you to log in at the remote
host. Note that if you did not want to use the same port at the remote
host, you would change the first port number in the argument to <code class="language-plaintext highlighter-rouge">-R</code> below.</p>

<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>ssh <span class="nt">-fNT</span> <span class="nt">-R8088</span>:localhost:8088 emmanuel@&lt;yourhost&gt;
</code></pre></div></div>
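<p>For example, to make the service available as port <code class="language-plaintext highlighter-rouge">9090</code> at the remote host instead (an arbitrary port picked for illustration), you would write:</p>

```
# -R <port at remote host>:<host as seen from container>:<port in container>
ssh -fNT -R9090:localhost:8088 emmanuel@<yourhost>
```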

<p>It is a good idea to keep the session into the container running, so you can
easily return when cleaning up.</p>

<h2 id="accessing-the-service">Accessing the Service</h2>

<p>Now, login to the remote SSH host. You should be able to access the service
running in the container directly on port <code class="language-plaintext highlighter-rouge">8088</code>. You can check that it
works using something like <code class="language-plaintext highlighter-rouge">nc</code> for example:</p>

<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>nc <span class="nt">-v</span> localhost 8088
</code></pre></div></div>

<p>Provided that you have a working Docker environment on that host, you can even
use the port for developing from <strong>within</strong> a container running on that remote
SSH host. This involves sharing the host network with the container and,
perhaps, mounting your development directory into the container. Running the
following command would give you an Alpine-based container with your
development directory mounted on <code class="language-plaintext highlighter-rouge">/data</code> and where you are able to access
<code class="language-plaintext highlighter-rouge">8088</code> from the container that you wanted to debug.</p>

<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>docker run <span class="nt">-it</span> <span class="nt">--rm</span> <span class="nt">--network</span><span class="o">=</span>host <span class="nt">-v</span> <span class="sb">`</span><span class="nb">pwd</span><span class="sb">`</span>:/data alpine
</code></pre></div></div>

<h2 id="cleaning-up">Cleaning Up</h2>

<p>To clean up, apart from exiting the debugging container described in the
previous section, you should return to the initial container, kill the <code class="language-plaintext highlighter-rouge">ssh</code>
process running there and remove the SSH client package from the distribution.</p>]]></content><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><summary type="html"><![CDATA[Sometimes (oftentimes?), when debugging a Docker-based setup, you wish you could access a service running in a container from the outside. Typically, these would be services that are internal to the cluster architecture and should not be exposed for remote access, e.g. a database or a pub/sub queue. Luckily, the only things you really need are access to a remote SSH server and a willingness to (temporarily) add an SSH client to your container. Here is how, documented for an alpine container, but the steps are the same for containers based on other distributions.]]></summary></entry><entry><title type="html">concocter v1.0</title><link href="https://efrecon.github.io/concocter-v1.0/" rel="alternate" type="text/html" title="concocter v1.0" /><published>2018-04-11T00:00:00+00:00</published><updated>2018-04-11T00:00:00+00:00</updated><id>https://efrecon.github.io/concocter-v1.0</id><content type="html" xml:base="https://efrecon.github.io/concocter-v1.0/"><![CDATA[<p><a href="https://github.com/efrecon/concocter">concocter</a> has just reached an official
<a href="https://github.com/efrecon/concocter/releases/tag/1.0">v1.0</a> release.
<code class="language-plaintext highlighter-rouge">concocter</code> is my own take on the init process in containers. It offers features
already found in other solutions such as <a href="http://supervisord.org/">supervisord</a>
or <a href="https://github.com/jwilder/docker-gen">docker-gen</a> with enough twists for
justifying the effort of writing yet another tool in a similar vein.</p>

<h2 id="rationale">Rationale</h2>

<p>The rationale of <code class="language-plaintext highlighter-rouge">concocter</code> is to acquire variables from a number of remote or
local sources, to dynamically generate (configuration) files with the content of
these variables and to launch one or several processes once the files have been
generated. <code class="language-plaintext highlighter-rouge">concocter</code> can be placed in the background to continuously perform
these tasks, regenerating the files as soon as a variable has
changed. In that case, <code class="language-plaintext highlighter-rouge">concocter</code> will restart the process, or request it to
reload its configuration using regular signals.</p>
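<p>The core idea — variables in, configuration files out — can be illustrated with a trivial shell sketch. This only illustrates the pattern; it is not <code class="language-plaintext highlighter-rouge">concocter</code> code, and the marker syntax and variable names below are invented for the example:</p>

```shell
# Render a template by substituting %HOST% and %PORT% markers with the values
# of two (illustrative) shell variables, writing the result to an output file.
# Tools like concocter automate this step, plus watching the variables for
# changes and restarting or signalling the processes that consume the output.
render() {
  template=$1
  output=$2
  sed -e "s|%HOST%|${HOST_ADDR}|g" \
      -e "s|%PORT%|${SERVICE_PORT}|g" \
      "$template" > "$output"
}
```

<p>A change in one of the variables, followed by a new call to <code class="language-plaintext highlighter-rouge">render</code> and a signal to the consuming process, is in essence what <code class="language-plaintext highlighter-rouge">concocter</code> loops over in the background.</p>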

<p><code class="language-plaintext highlighter-rouge">concocter</code> has support for a range of sources for these variables. This
includes information about other containers running on the same Docker host, but
also the content of files (good for integration with Docker secrets), or the
content of external HTTP(S) resources. The latter facilitates the use of, for
example, an internal key-value store with a RESTish interface for the storage
and access of cluster- or project-wide configuration variables. Using <code class="language-plaintext highlighter-rouge">concocter</code>
together with a capable reverse-proxy server such as
<a href="https://www.nginx.com/">nginx</a> or <a href="https://caddyserver.com/">caddy</a> provides
ways to automatically proxy containers carrying “instructions” as Docker labels
or environment variables.</p>

<h2 id="discussion-and-community">Discussion and Community</h2>

<p>The <a href="https://github.com/efrecon/concocter/issues">github</a> project provides
support for issue tracking and enhancement proposals.</p>]]></content><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><summary type="html"><![CDATA[concocter has just reached an official v1.0 release. concocter is my own take on the init process in containers. It offers features already found in other solutions such as supervisord or docker-gen with enough twists for justifying the effort of writing yet another tool in a similar vein.]]></summary></entry><entry><title type="html">Git LFS on Ubuntu</title><link href="https://efrecon.github.io/git-lfs-on-ubuntu/" rel="alternate" type="text/html" title="Git LFS on Ubuntu" /><published>2018-04-11T00:00:00+00:00</published><updated>2018-04-11T00:00:00+00:00</updated><id>https://efrecon.github.io/git-lfs-on-ubuntu</id><content type="html" xml:base="https://efrecon.github.io/git-lfs-on-ubuntu/"><![CDATA[<p>Git <a href="https://git-lfs.github.com/">LFS</a> enables the storage of large files out of the main repository,
replacing them with lightweight pointers that reference remote storage.</p>

<h2 id="installing-on-ubuntu">Installing on Ubuntu</h2>

<p>To install LFS on Ubuntu 16.04+, perform the following. This is almost as
advertised on the LFS home page and at <a href="https://packagecloud.io/github/git-lfs/install#bash-deb">PackageCloud</a>, with the slight tweak
that you need to actually install the extra package, since the <code class="language-plaintext highlighter-rouge">bash</code> script
will only add the repository.</p>

<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
</span></code></pre></div></div>

<h2 id="why">Why</h2>

<p>My repository of <a href="https://github.com/efrecon/tclkit/">tclkits</a> uses LFS storage to store binaries that were
(cross-)compiled by the excellent <a href="http://kitcreator.rkeene.org/kitcreator">KitCreator</a>. This permits other projects to
depend on those binaries in a lightweight form. For an example, have a look at how
this <a href="https://github.com/efrecon/concocter/blob/master/make/make.tcl">make</a> script uses a <a href="https://github.com/efrecon/concocter/blob/master/make/bin/bootstrap.dwl">contract</a> to download these very binaries into a
location that is under the git repository, but kept out of revision control.</p>
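<p>For reference, marking files for LFS storage in a repository of your own boils down to tracking a pattern and committing the resulting <code class="language-plaintext highlighter-rouge">.gitattributes</code>. The <code class="language-plaintext highlighter-rouge">*.kit</code> pattern and file path below are just examples, not necessarily what the tclkit repository uses:</p>

```
git lfs track "*.kit"      # records the pattern in .gitattributes
git add .gitattributes     # the pattern itself must be committed
git add path/to/binary.kit
git commit -m "Track kit binaries through LFS"
```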

<p>The rationale for hosting those outside of the <a href="https://github.com/efrecon/concocter/">concocter</a> project is to be
able to offer the binaries and downloads as a service to the community, as there
seem to be few remotely hosted binaries with support for TLS.</p>
USB wifi dongle to bring some life to an old Intel NUC that was lying around
unused. Unfortunately, and contrary to what is advertised, it does not work
under the latest version of Ubuntu: the Linux <a href="https://www.kjell.com/se/.mvc/Document/Zip?id=a446cdf9-f637-41b2-83a7-a89700fb816a">drivers</a>
provided by the vendor do not build against recent kernels.</p>

<p>Inside the dongle is a Realtek RTL8821AU chipset. Here is a rough guide for
making it work with the latest kernel version. These steps have been tested with
a beta of Ubuntu 18.04 LTS, but should be fine for prior versions as well:</p>

<ol>
  <li>Prepare for compiling additional modules for your kernel: <code class="language-plaintext highlighter-rouge">sudo
apt install dkms build-essential linux-headersXXX</code> where XXX matches the
kernel that you have.</li>
  <li>Install <code class="language-plaintext highlighter-rouge">git</code> if you don’t already have it; alternatively, run the
next step from some git UI if you prefer.</li>
  <li>Clone this <a href="https://github.com/abperiasamy/rtl8812AU_8821AU_linux.git">repository</a>.
Note that there are a number of “competing” repos at github trying to solve
the same problems, and not all of them seem to be working.</li>
  <li>Change directory to the one of the repository above.</li>
  <li>Issue <code class="language-plaintext highlighter-rouge">sudo make -f Makefile.dkms install</code>.</li>
  <li>Load the driver <code class="language-plaintext highlighter-rouge">sudo modprobe -i rtl8812au</code>.</li>
  <li>Verify that you have a new network interface: <code class="language-plaintext highlighter-rouge">sudo lshw -c network</code>.</li>
</ol>
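<p>Condensed, and assuming the stock headers package matching your running kernel, the steps above amount to:</p>

```
sudo apt install dkms build-essential "linux-headers-$(uname -r)"
git clone https://github.com/abperiasamy/rtl8812AU_8821AU_linux.git
cd rtl8812AU_8821AU_linux
sudo make -f Makefile.dkms install
sudo modprobe -i rtl8812au
sudo lshw -c network   # the new interface should show up here
```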

<p>You should now be able to connect to any nearby Wi-Fi network using the regular
GNOME tools (or from the command-line if you prefer). These instructions install
DKMS, so the driver will be recompiled for each (new) version of the kernel.</p>
<a href="https://github.com/efrecon/docker-client">implementation</a> in
<a href="https://www.tcl.tk/">Tcl</a> is to cover most of the official
<a href="https://docs.docker.com/reference/api/docker_remote_api/">API</a> while providing
a programming interface that feels Tcl-ish. To that end, it builds upon the
Tk-style of programming that creates a context object and then creates a command
with the same name as the object to perform most further operations.</p>
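<p>In practice, this style of programming looks roughly as follows. The sketch below is an illustration from memory and the exact command names and arguments may differ; refer to the project README for the authoritative API:</p>

```tcl
# Connect to the local Docker daemon; the returned token is also a command.
package require docker

set d [docker connect unix:///var/run/docker.sock]

# Further operations go through the token, Tk-style:
$d ps
$d disconnect
```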

<p>The implementation was lagging behind and a
<a href="https://github.com/efrecon/docker-client/commit/1bbf418258006ebfaf9e081af244ef3ef139c0fd">recent</a>
restructuring has started to bring it on par with the current state of the
Docker API itself. The restructuring matches the Docker CLI
<a href="https://github.com/moby/moby/pull/26025">restructuring</a> that happened with the
<a href="https://docs.docker.com/release-notes/docker-engine/#1130-2017-01-18">1.13.0</a>
version. Recent additions to the Tcl command set provide bridges to the
following sub-command trees of the API and CLI:</p>

<ul>
  <li><a href="https://docs.docker.com/engine/reference/commandline/container/">container</a></li>
  <li><a href="https://docs.docker.com/engine/reference/commandline/image/">image</a></li>
  <li><a href="https://docs.docker.com/engine/reference/commandline/service/">service</a></li>
  <li><a href="https://docs.docker.com/engine/reference/commandline/secret/">secret</a></li>
  <li><a href="https://docs.docker.com/engine/reference/commandline/config/">config</a></li>
  <li><a href="https://docs.docker.com/engine/reference/commandline/node/">node</a></li>
  <li><a href="https://docs.docker.com/engine/reference/commandline/volume/">volume</a></li>
</ul>

<p>A number of commands still remain to be implemented, but these new implementations provide a
<a href="https://github.com/efrecon/docker-client#api-principles">consistent</a> API that
is hopefully easier to work with than the previous uncategorised API, itself
loosely modelled after the previous CLI. Work on the integration of
these new API calls is driven by the necessity to use some of these calls
in <a href="https://github.com/efrecon/dockron">dockron</a> so as to be able to perform
regular tasks on services within a Swarm, e.g. scale up/down at known times,
restart services, etc.</p>

<p>(Funny fact of the day… Apparently, this API implementation made its way to
<a href="https://news.ycombinator.com/item?id=9196178">HN</a> soon after I announced it the
first time!)</p>]]></content><author><name>Emmanuel Frécon</name><email>efrecon@gmail.com</email></author><summary type="html"><![CDATA[The main goal of my Docker API implementation in Tcl is to cover most of the official API while providing a programming interface that feels Tcl-ish. To that end, it builds upon the Tk-syle of programming that creates a context object and then creates a command with the same name as the object to perform most further operations.]]></summary></entry></feed>