This repository contains a Dockerfile that builds the OpenCV and FFMPEG libraries from source and also installs Tensorflow.
The Dockerfile contains the following:
- Tensorflow (version defined by the chosen Docker Hub tag)
- OpenCV (latest - built from source)
- FFMPEG (latest - built from source)
In this repository, we provide a GPU-based Dockerfile with the previously mentioned libraries. You can build the Docker image and create the container with the command targets provided in the Makefile.
All variables related to the image build process are defined there, such as the image name, the volume shared between host and container, the Docker registry, the Tensorflow tag name (from the Tensorflow Docker Hub), etc.
To choose the Tensorflow version, change the variable `TAG_VERSION_TF_GPU` (or `TAG_VERSION_TF_GPU_LOCAL`) in the Makefile.
Example: `TAG_VERSION_TF_GPU=1.13.2-gpu-py3` will build on top of the tag `tensorflow/tensorflow:1.13.2-gpu-py3`.
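For a one-off build you can also override the tag on the make command line instead of editing the Makefile (this relies on standard make variable overriding; the tag value is just an example):

# Build against a different Tensorflow base image without touching the Makefile
make build-image TAG_VERSION_TF_GPU=1.13.2-gpu-py3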
The commands available in the Makefile are:
- `build-image`: builds the Docker image, passing the user information to the image. The default Dockerfile used is `gpu.Dockerfile`.
- `run-container`: creates the container from the image generated by the previous command, sharing some ports between the host and the container and running with the `--userns=host` option. This option is helpful when you need to share the host's user namespace (and therefore the user permissions) between host and container, such as when running experiments in clusters.
- `run-local-container`: similar to the `run-container` option, but without the `--userns=host` option.
- `upload-image`: uploads the Docker image to the registry defined in the `Makefile`. Example of the upload: `docker push kbogdan/tensorflow-opencv-py3:1.13.2_latest`.
- `clean`: removes the Docker images related to our build.
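A typical workflow with these targets looks like this (a sketch; the pushed image name follows the upload example above):

make build-image      # build the image from gpu.Dockerfile with your user information
make run-container    # create the container with --userns=host and the shared ports/volume
make upload-image     # push the image, e.g. kbogdan/tensorflow-opencv-py3:1.13.2_latest
make clean            # remove the images created by this build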
For example, to build the Docker image run `make build-image`, which executes the following:
docker build \
--build-arg TAG_VERSION=$(TAG_VERSION_TF_GPU) \
--build-arg UID=`id -u` \
--build-arg GID=`id -g` \
--build-arg USER_NAME=${USER} \
--build-arg GROUP=`id -ng ${USER}` \
-t $(REGISTRY_URL)/$(SVC_GPU):$(VERSION) -f $(GPU_DOCKER_FILE) .
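The backquoted expressions gather the host user information that the Dockerfile expects; for example (the concrete values shown are illustrative):

id -u            # e.g. 1000     -> passed as UID
id -g            # e.g. 1000     -> passed as GID
id -ng ${USER}   # e.g. mygroup  -> passed as GROUP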
Notes:
- We also have a LOCAL option in the Makefile (the `TAG_VERSION_TF_GPU_LOCAL` variable and the `run-local-container` command). It is implemented so that a different Tensorflow tag can be used when running the container locally.
Additional Tools
- Removing intermediary Docker containers: to remove the intermediary Docker containers usually left behind by intermediate steps of the docker build, or by errors in the process, you can use this script (a rough equivalent is sketched below).
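The script itself is not reproduced here; a typical equivalent (just a sketch, not the repository's script) uses Docker's built-in prune commands:

# Remove stopped containers left over from failed or interrupted builds/runs
docker container prune
# Remove dangling (untagged) intermediate images
docker image prune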
Our build process creates the Docker image with a (non-root) user associated with it by default. This prevents us from running the container as the root user and messing things up.
There seem to be different ways to enable this, but the only one that worked both for my local experiments and within an external network cluster (with the `--userns=host` option) was this one.
This is enabled by these two steps:
- First, add the user information to the docker image with:
# Arguments to be passed to the image
ARG UID
ARG GID
ARG USER_NAME
ARG GROUP
# Adds the user information to the image
RUN groupadd --gid $GID $GROUP && \
useradd --create-home --shell /bin/bash --uid $UID --gid $GID $USER_NAME && \
adduser $USER_NAME sudo && \
su -l $USER_NAME
USER $USER_NAME
- Then, pass the required arguments to the image build process (currently added in the `build-image` command).
docker build \
... \
--build-arg UID=`id -u` \
--build-arg GID=`id -g` \
--build-arg USER_NAME=${USER} \
--build-arg GROUP=`id -ng ${USER}` \
...
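To check that the resulting image indeed runs as your user rather than root, you can execute a one-off command in it (the image name is taken from the upload example above; adjust it to your registry settings):

# Should print your host user name and UID/GID instead of root/0
docker run --rm kbogdan/tensorflow-opencv-py3:1.13.2_latest id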
In addition, you can share the local host display (X11) with the container. Note that this feature can raise security issues.
The simplest way to do this is to add the following lines to the `docker run` command:
--env="DISPLAY"
--env="QT_X11_NO_MITSHM=1"
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw"
which results in the command:
docker run -it $(docker_gpu) \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--userns=host \
-p 6010:6006 -p 8890:8888 \
-v ${shell cd ../ && pwd}:$(MY_INSIDE_VOLUME_GPU) \
--name $(NAME_CONTAINER) $(REGISTRY_URL)/$(SVC_GPU):$(VERSION) /bin/bash
Beyond that, we also need to run `xhost +local:root` on the local host.
You can check here for other options and some discussion of the security issues associated with this approach.
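A quick way to verify the X11 sharing (assuming the OpenCV build in the image includes GUI support) is to open a throwaway window from inside the container:

# On the host, before starting the container:
xhost +local:root
# Inside the container; a small black window should appear on the host display for about 2 seconds:
python3 -c "import cv2, numpy as np; cv2.imshow('x11-test', np.zeros((100, 100, 3), np.uint8)); cv2.waitKey(2000)"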
Uploading and downloading files from Google Drive
One option is to use gdrive to upload and download files from the Docker container. After installing it, you need to configure it to link it to your Google account. You can add the installation to the Dockerfile with:
WORKDIR /tmp/
RUN wget -O gdrive-linux-x64 "https://docs.google.com/uc?id=0B3X9GlR6EmbnQ0FtZmJJUXEyRTA&export=download" &&\
chmod +x gdrive-linux-x64 &&\
install gdrive-linux-x64 /usr/local/bin/gdrive
Update: another good option is rclone. You can check how to install rclone here (Google Drive options).
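For reference, basic usage of both tools looks roughly like this (file names and the rclone remote name are placeholders; gdrive asks for a Google authorization code the first time it is used):

# gdrive
gdrive list                    # list files in your Drive
gdrive upload results.tar.gz   # upload a local file
gdrive download <file-id>      # download a file by its Drive id

# rclone, after configuring a Google Drive remote with: rclone config
rclone copy results.tar.gz mydrive:experiments/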
- Repository docker-tensorflow-opencv3