Merge pull request #3 from ocramz/master
Batch building & testing
oweidner authored Jul 10, 2016
2 parents d76de5b + 7bfce30 commit 51364af
Showing 7 changed files with 458 additions and 40 deletions.
22 changes: 22 additions & 0 deletions .travis.yml
@@ -0,0 +1,22 @@
sudo: required

language: c

services:
- docker

env:
- NNODES=2

before_install:
# update Docker
- sudo apt-get update
- sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-engine wget
- wget https://github.com/docker/compose/releases/download/1.7.1/docker-compose-`uname -s`-`uname -m`
- sudo mv docker-compose-`uname -s`-`uname -m` /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose

script:
- make main
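
The CI recipe pins docker-compose 1.7.1 by downloading that release binary directly rather than relying on a package manager. A quick sanity check for the pinned binary (an illustrative command, not part of the committed config):

```
$> docker-compose --version   # expect it to report version 1.7.1
```
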
48 changes: 31 additions & 17 deletions Dockerfile
@@ -2,17 +2,23 @@
#

FROM ubuntu:14.04
+# FROM phusion/baseimage

MAINTAINER Ole Weidner <[email protected]>

-ENV DEBIAN_FRONTEND noninteractive
+ENV USER mpirun
+
+ENV DEBIAN_FRONTEND=noninteractive \
+    HOME=/home/${USER}


RUN apt-get update -y && \
apt-get upgrade -y && \
-	apt-get install -y openssh-server python-mpi4py python-numpy \
-	python-virtualenv python-scipy gcc gfortran openmpi-checkpoint binutils
+	apt-get install -y --no-install-recommends openssh-server python-mpi4py python-numpy python-virtualenv python-scipy gcc gfortran openmpi-checkpoint binutils && \
+	apt-get clean && apt-get purge && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN mkdir /var/run/sshd
-RUN echo 'root:mpirun' | chpasswd
+RUN echo "root:${USER}" | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
@@ -25,29 +31,37 @@ RUN echo "export VISIBLE=now" >> /etc/profile
# Add an 'mpirun' user
# ------------------------------------------------------------

-RUN adduser --disabled-password --gecos "" mpirun && \
-    echo "mpirun ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
-ENV HOME /home/mpirun
+RUN adduser --disabled-password --gecos "" ${USER} && \
+    echo "${USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

# ------------------------------------------------------------
# Set-Up SSH with our Github deploy key
# ------------------------------------------------------------

-RUN mkdir /home/mpirun/.ssh/
-ADD ssh/config /home/mpirun/.ssh/config
-ADD ssh/id_rsa.mpi /home/mpirun/.ssh/id_rsa
-ADD ssh/id_rsa.mpi.pub /home/mpirun/.ssh/id_rsa.pub
-ADD ssh/id_rsa.mpi.pub /home/mpirun/.ssh/authorized_keys
+ENV SSHDIR ${HOME}/.ssh/
+
+RUN mkdir -p ${SSHDIR}
+
+ADD ssh/config ${SSHDIR}/config
+ADD ssh/id_rsa.mpi ${SSHDIR}/id_rsa
+ADD ssh/id_rsa.mpi.pub ${SSHDIR}/id_rsa.pub
+ADD ssh/id_rsa.mpi.pub ${SSHDIR}/authorized_keys

-RUN chmod -R 600 /home/mpirun/.ssh/* && \
-    chown -R mpirun:mpirun /home/mpirun/.ssh
+RUN chmod -R 600 ${SSHDIR}* && \
+    chown -R ${USER}:${USER} ${SSHDIR}

# ------------------------------------------------------------
-# Copy Rosa's MPI4PY example scripts
+# Copy MPI4PY example scripts
# ------------------------------------------------------------

-ADD mpi4py_benchmarks /home/mpirun/mpi4py_benchmarks
-RUN chown mpirun:mpirun /home/mpirun/mpi4py_benchmarks
+ENV TRIGGER 1
+
+ADD mpi4py_benchmarks ${HOME}/mpi4py_benchmarks
+RUN chown ${USER}:${USER} ${HOME}/mpi4py_benchmarks



EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
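
The SSH section of this Dockerfile bakes a pre-generated deploy keypair (`ssh/id_rsa.mpi`, `ssh/id_rsa.mpi.pub`) into the image. If those files need to be regenerated, a keypair with matching names can be created like this (a sketch; the key type and size are a choice, not mandated by the repository):

```
$> ssh-keygen -t rsa -b 4096 -N "" -f ssh/id_rsa.mpi   # writes ssh/id_rsa.mpi and ssh/id_rsa.mpi.pub
```

Baking a private key into the image is only reasonable because these containers form a throwaway test cluster: every node must be able to SSH into every other node without a password prompt.
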
41 changes: 27 additions & 14 deletions Readme.md
@@ -1,51 +1,64 @@
## docker.openmpi

Travis CI: [![Build Status](https://travis-ci.org/ocramz/docker.openmpi.svg?branch=master)](https://travis-ci.org/ocramz/docker.openmpi)

With the code in this repository, you can build a Docker container that provides
the OpenMPI runtime and tools along with various supporting libraries,
including the MPI4Py Python bindings. The container also runs an OpenSSH server
so that multiple containers can be linked together and used via `mpirun`.


-## Start an MPI Container Cluster
+## MPI Container Cluster with `docker-compose`

While containers can in principle be started manually via `docker run`, we suggest that you use
[Docker Compose](https://docs.docker.com/compose/), a simple command-line tool
-to define and run multi-container applications. We provde a sample `docker-compose.yml`
-file in the repository:
+to define and run multi-container applications. We provide a sample `docker-compose.yml` file in the repository:

```
mpi_head:
  image: openmpi
  ports:
   - "22"
  links:
-   - mpi_worker
+   - mpi_node
mpi_node:
  image: openmpi
```
(Note: the above is docker-compose API version 1)

The file defines an `mpi_head` and an `mpi_node`. Both containers run the same `openmpi` image.
-The only difference is, that the `mpi_head` container exposes its SHH server to
+The only difference is that the `mpi_head` container exposes its SSH server to
the host system, so you can log into it to start your MPI applications.


## Usage

The following command will start one `mpi_head` container and three `mpi_node` containers:

```
$> docker-compose scale mpi_head=1 mpi_node=3
```
-Once all containers are running, figure out the host port on which Docker exposes the SSH server of the `mpi_head` container:
+Once all containers are running, you can log into the `mpi_head` node and start MPI jobs with `mpirun`. Alternatively, you can execute a one-shot command on that container with `docker-compose exec`, as follows:

-```
-$>
-```
+```
+docker-compose exec --privileged mpi_head mpirun -n 2 python /home/mpirun/mpi4py_benchmarks/all_tests.py
+----------------------------------------- ----------- --------------------------------------------------
+1.                                        2.          3.
+```

Breaking the above command down:

1. Execute a command on the `mpi_head` node
2. Run on 2 MPI ranks
3. The command to run (NB: the Python script needs to import the MPI bindings; see the sketch below)
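
The NB in step 3 matters because `mpirun` only starts N copies of the interpreter; it is the `mpi4py` import that wires each copy into the MPI world. A minimal sketch of such a script (a hypothetical stand-in, not the repository's `all_tests.py`):

```python
# hello_mpi.py -- minimal MPI-aware script (hypothetical example)
from mpi4py import MPI

comm = MPI.COMM_WORLD    # communicator spanning all ranks started by mpirun
rank = comm.Get_rank()   # this process's id within the communicator
size = comm.Get_size()   # total number of ranks (the -n value)
print("Hello from rank %d of %d" % (rank, size))
```
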

## Testing

You can spin up a docker-compose cluster, run a battery of MPI4py tests, and tear the cluster down again using a recipe provided in the included Makefile (handy for development):

make main

-Now you know the port, you can login to the `mpi_head` conatiner. The username is `mpirun`:
-
-> TODO: Password
-
-```
-$> ssh -p 23227 mpirun@localhost
-```
+## Credits
+
+This repository draws from work on https://github.com/dispel4py/ by O. Weidner and R. Filgueira
39 changes: 32 additions & 7 deletions docker-compose.yml
@@ -1,10 +1,35 @@
mpi_head:
-  image: openmpi
+  build: .
+  # image: openmpi
  ports:
  - "22"
  links:
  - mpi_node

mpi_node:
-  image: openmpi
+  build: .
+  # image: openmpi


# version: "2"

# services:
# mpi_head:
# build: .
# # image: openmpi
# ports:
# - "22"
# links:
# - mpi_node
# networks:
# - net

# mpi_node:
# build: .
# # image: openmpi
# networks:
# - net

# networks:
# net:
# driver: bridge
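
If the commented version-2 block is enabled instead, bring-up is the same idea with an explicit bridge network; a usage sketch (assuming a docker-compose release that understands the v2 format):

```
$> docker-compose up -d --build     # build the image and start both services
$> docker-compose scale mpi_node=3  # then grow the worker pool as needed
```
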
36 changes: 36 additions & 0 deletions makefile
@@ -0,0 +1,36 @@
AUTH=ocramz
NAME=docker-openmpi
TAG=${AUTH}/${NAME}

export NNODES=4

.DEFAULT_GOAL := help

help:
	@echo "Use \`make <target>\` where <target> is one of"
	@echo "  help      display this help message"
	@echo "  build     build from Dockerfile"
	@echo "  rebuild   rebuild from Dockerfile (ignores cached layers)"
	@echo "  main      build and docker-compose the whole thing"

build:
	docker build -t $(TAG) .

rebuild:
	docker build --no-cache -t $(TAG) .

main:
	# 1 worker node
	docker-compose scale mpi_head=1 mpi_node=1
	docker-compose exec --privileged mpi_head mpirun -n 1 python /home/mpirun/mpi4py_benchmarks/all_tests.py
	docker-compose down

	# 2 worker nodes
	docker-compose scale mpi_head=1 mpi_node=2
	docker-compose exec --privileged mpi_head mpirun -n 2 python /home/mpirun/mpi4py_benchmarks/all_tests.py
	docker-compose down

	# ${NNODES} worker nodes
	docker-compose scale mpi_head=1 mpi_node=${NNODES}
	docker-compose exec --privileged mpi_head mpirun -n ${NNODES} python /home/mpirun/mpi4py_benchmarks/all_tests.py
	docker-compose down
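
Because `NNODES` is an ordinary make variable, the size of the last batch can be changed without editing the makefile (a usage sketch; a command-line assignment takes precedence over the `export NNODES=4` default):

```
$> make main NNODES=8
```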