![Gitter](https://badges.gitter.im/Join%20Chat.svg)
- selenium server grid with 2 nodes (chrome & firefox)
- mkv video recording
- VNC access (useful for debugging the container)
- google-chrome-stable
- google-chrome-beta: no longer provided but can still be found here
- google-chrome-unstable: no longer provided but can still be found here
- firefox stable latest
- firefox stable last 18 versions can be found here
- fluxbox or openbox (lightweight window managers)
The purpose of this project is to have Selenium running as simply and as quickly as possible.
Note: SeleniumHQ/docker-selenium and this project share the same purpose; however, both have diverged considerably in the last two years. Some major differences are:
- both browsers and the grid run in the same container in this repo
- support for video recording
- support for customizing the screen size
- support for ssh access that can be particularly useful for tunneling support
- this image size is considerably larger (around 2.5GB) than the official one which is around 300MB
- process manager: this image uses supervisord while the official uses bash
- release flow: TravisCI docker pushes vs docker.com automated builds in the official repo
Even though both projects share the same purpose, it is good to have alternatives; see also, for example, docker-alpine-selenium. Letting more than one docker-selenium project grow, so each can learn from the others' successes and failures, ultimately benefits the final users. This doesn't rule out that at some point all selenium maintainers will sit together for a sprint to coordinate some major changes, clean up open issues, and perhaps merge N similar projects in the future.
If you don't require a real browser, PhantomJS might be enough for you. Electron allows using the latest Chromium/V8, which might be equivalent to running in Chrome; however, it still requires a display, so xvfb is needed. You can also use a paid service like Sauce Labs or BrowserStack; note they offer free open source accounts and straightforward integration with Travis CI. You can also configure xvfb yourself, but that involves some manual steps and doesn't include video recording; neither does PhantomJS nor Electron. A new chromium headless project looks very promising, so it might be worth a look, though as of now it leaves video recording out of scope, and Firefox is also out of scope there.
This project is normally tested against the latest versions of Docker and docker-compose, as well as the release candidates. To find the specific versions it is known to work with, see the .travis.yml file. Example values:
docker --version #=> 1.11.2
docker-compose --version #=> 1.7.1
If you need to use docker-machine to run docker (like for example on a Mac before the Docker native version 1.12), you also need to install VirtualBox and then run these commands to get started:
docker-machine create --driver virtualbox default
eval "$(docker-machine env default)"
You will need to run the second eval command for every new terminal window.
- Pull the image and run the container:

docker pull elgalu/selenium  # upgrades to latest if a newer version is available
docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
  -e TZ="US/Pacific" -v /dev/shm:/dev/shm elgalu/selenium
- Wait until the grid starts properly before starting the tests (optional but recommended):
docker exec grid wait_all_done 30s
After this, Selenium will be up and ready to accept clients at http://localhost:4444/wd/hub. The grid's available browsers can be viewed by opening the console at http://localhost:4444/grid/console.
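The readiness check above can also be scripted from a test runner. A minimal stdlib sketch (the helper names are illustrative, not part of the image; it assumes the hub is mapped to localhost:4444 as in the `docker run` example above):

```python
# Poll the grid's WebDriver status endpoint until it answers.
import json
import time
import urllib.request


def hub_status_url(host="localhost", port=4444):
    """Build the status URL for a grid hub running on host:port."""
    return "http://{}:{}/wd/hub/status".format(host, port)


def wait_for_grid(url, timeout=30):
    """Return True once the hub answers with parsable JSON, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                json.load(resp)  # any parsable answer means the hub is up
                return True
        except (OSError, ValueError):
            time.sleep(1)
    return False


# usage: wait_for_grid(hub_status_url())  # before kicking off the tests
```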
If you are using Mac (OSX) or Microsoft Windows, localhost won't work unless you are on Docker Beta (version >= 1.12). If you are using Docker version <= 1.11, please find the correct IP through docker-machine ip default.
Notes:
- The new default VNC_PASSWORD=no makes VNC accessible without a password.
- Once this docker feature is in place, wait_all_done won't be necessary anymore.
Shutdown gracefully
docker exec grid stop
docker stop grid
Shutdown immediately, no mercy
docker rm -vf grid
See docker-compose
See jenkins
This image is designed to run one test per docker container, but if you still want to run multiple tests in parallel, there are some ways to do this:
- The recommended way is via docker-compose; replace mock with your web service under test within the docker-compose.yml file.

export SELENIUM_HUB_PORT=4444 VNC_FROM_PORT=40650 VNC_TO_PORT=40700 VIDEO=false
docker-compose -p grid scale mock=1 hub=1 chrome=3 firefox=3
- The (not recommended) way is by increasing MAX_INSTANCES and MAX_SESSIONS, which now default to 1.

docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
  -v /dev/shm:/dev/shm -e VNC_PASSWORD=hola \
  -e MAX_INSTANCES=20 -e MAX_SESSIONS=20 \
  elgalu/selenium
The drawback is that all tests will run on the same desktop, meaning the video recording will only capture the browser in the foreground; making all this transparent is on the roadmap, see issues #78 and #77. Another problem with increasing MAX_INSTANCES and MAX_SESSIONS is focus issues, so in this case it is better to scale up/down via docker-compose.
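When scaling via docker-compose, it helps to check that the VNC port range you export can accommodate the nodes you intend to run. A small illustrative sketch (the inclusive-bounds arithmetic is an assumption on my part, mirroring the VNC_FROM_PORT/VNC_TO_PORT variables used above, not code taken from the image):

```python
# Check a VNC port range against the number of scaled browser nodes.
def ports_available(vnc_from, vnc_to):
    """Number of VNC ports the range offers (inclusive bounds assumed)."""
    return vnc_to - vnc_from + 1


def range_fits(vnc_from, vnc_to, chrome_nodes, firefox_nodes):
    """True if every scaled node can get its own VNC port."""
    return chrome_nodes + firefox_nodes <= ports_available(vnc_from, vnc_to)


# usage: range_fits(40650, 40700, chrome_nodes=3, firefox_nodes=3)
```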
If you are on a Mac, you need to get the correct IP of the docker machine. One of these two commands should work:
docker-machine ip default
or, on the former boot2docker:
boot2docker ip
You can also ssh into the machine as long as SSH_AUTH_KEYS="$(cat ~/.ssh/id_rsa.pub)"
is correct.
docker run --rm -ti --name=grid -p=4444:24444 -p=5900:25900 -p=22222:22222 \
-e SSHD=true \
-e SSH_AUTH_KEYS="$(cat ~/.ssh/id_rsa.pub)" \
-v /dev/shm:/dev/shm elgalu/selenium
Then
ssh -p 22222 -o StrictHostKeyChecking=no application@localhost
Include -X
in ssh command if you want to redirect the started GUI programs to your host, but for that you also need to pass -e SSHD_X11FORWARDING=yes
docker run --rm -ti --name=grid -p=4444:24444 -p=5900:25900 -p=22222:22222 \
-e SSHD=true -e SSHD_X11FORWARDING=yes \
-e SSH_AUTH_KEYS="$(cat ~/.ssh/id_rsa.pub)" \
-v /dev/shm:/dev/shm elgalu/selenium
Then
ssh -X -p 22222 -o StrictHostKeyChecking=no application@localhost
echo $DISPLAY #=> localhost:10.0
That's useful for tunneling; otherwise you can stick with docker exec
to get into the instance with a shell:
docker exec -ti grid bash
Supervisor can expose an HTTP server, but it is not enough to bind the ports via docker run -p
, so in this case you need to forward ports with ssh -L
ssh -p 22222 -o StrictHostKeyChecking=no -L localhost:29001:localhost:29001 application@localhost
You can set a custom screen size at docker run time by providing SCREEN_WIDTH
and SCREEN_HEIGHT
environment variables:
docker pull elgalu/selenium
docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
-v /dev/shm:/dev/shm -e VNC_PASSWORD=hola \
-e SCREEN_WIDTH=1920 -e SCREEN_HEIGHT=1480 \
elgalu/selenium
docker exec grid wait_all_done 10s
open vnc://:hola@localhost:5900
You can control and modify the timezone on a container by using the TZ environment variable through the docker run
command, e.g. by adding -e TZ="US/Pacific"
docker run --rm -ti --name=grid -p 4444:24444 -p 5900:25900 \
-e TZ="US/Pacific" -e VNC_PASSWORD=hola \
-v /dev/shm:/dev/shm elgalu/selenium
Examples:
docker run ... -e TZ="US/Pacific" ...
docker exec grid date
#=> Fri May 20 06:04:58 PDT 2016
docker run ... -e TZ="America/Argentina/Buenos_Aires" ...
docker exec grid date
#=> Fri May 20 10:04:58 ART 2016
docker run ... -e TZ="Europe/Berlin" ...
docker exec grid date
#=> Fri May 20 15:04:58 CEST 2016
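You can sanity-check a TZ value locally before passing it to docker run; a quick stdlib sketch (requires Python >= 3.9 for zoneinfo; an unknown zone name raises ZoneInfoNotFoundError instead of silently falling back to UTC):

```python
# Resolve a TZ database name and report its abbreviation for a given date.
from datetime import datetime
from zoneinfo import ZoneInfo


def tz_abbreviation(tz_name, when):
    """Return the timezone abbreviation (e.g. CEST) for a naive datetime."""
    return when.replace(tzinfo=ZoneInfo(tz_name)).tzname()


# usage: tz_abbreviation("Europe/Berlin", datetime(2016, 5, 20, 15, 4))
```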
This feature was available in previous versions, please go to 2.47.1m to use it.
To configure which Chrome flavor you want to use (stable, beta, unstable), just pass -e CHROME_FLAVOR=beta
to docker run
. Default is stable
.
This feature was available in previous versions, please go to 2.47.1m to use it.
To configure which Firefox version to use, first check available versions in the CHANGELOG. Then pass -e FIREFOX_VERSION=38.0.6
to docker run
. Default is the latest number of the available list.
Step by step guide at docs/videos.md
If you create the container with -e VIDEO=true
it will start recording a video through a VNC connection opened at startup.
It is recommended to first create a local folder videos
in your current directory and mount it for
easy transfer with -v $(pwd)/videos:/videos
.
Once your tests are done you can either manually stop the recording via docker exec grid /bin-utils/stop-video
, where grid is just the arbitrary container name chosen in the docker run
command, or simply stop the container, which will stop the video recording automatically.
Relevant environment variables to customize it are:
FFMPEG_FRAME_RATE=25
VIDEO_FILE_NAME="test"
VIDEO_FILE_EXTENSION=mkv
FFMPEG_CODEC_ARGS=""
It is important to note that ffmpeg
video recording consumes a significant amount of CPU, even more when a well-compressed format like mkv is selected. You may want to delegate video recording through vnc2swf-start.sh
to a separate server process, and even delegate compression to a further step or to a cloud service like YouTube.
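The environment variables above can be combined to predict the recording's output filename. A small illustrative helper (the defaults mirror the documented values; the helper itself is a sketch, not code from the image):

```python
# Assemble the video filename from the documented environment variables.
import os


def video_file_name(env=os.environ):
    """Return e.g. 'test.mkv' from VIDEO_FILE_NAME / VIDEO_FILE_EXTENSION."""
    name = env.get("VIDEO_FILE_NAME", "test")
    ext = env.get("VIDEO_FILE_EXTENSION", "mkv")
    return "{}.{}".format(name, ext)
```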
When you don't specify a VNC password, a random one will be generated. That password can be seen by grepping the logs:
docker exec grid wait_all_done 30s
#=> ... a VNC password was generated for you: ooGhai0aesaesh
You can connect to see what's happening
open vnc://:ooGhai0aesaesh@localhost:5900
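If your tooling needs that generated password programmatically, you can extract it from the container logs. A sketch, assuming the log line format shown above stays stable:

```python
# Pull the generated VNC password out of the container log output.
import re

PASSWORD_RE = re.compile(r"a VNC password was generated for you: (\S+)")


def extract_vnc_password(log_text):
    """Return the generated password, or None if the line is absent."""
    match = PASSWORD_RE.search(log_text)
    return match.group(1) if match else None
```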
Disabled by default, noVNC provides a browser-based VNC client so you don't need to install a VNC viewer if you choose so. Note: we were using guacamole before.
The Safari browser already comes with a built-in VNC viewer, so this feature is overkill there and is disabled by default; just navigate to vnc://localhost:5900 in Safari.
You need to pass the environment variable -e NOVNC=true
in order to start the noVNC service and you will be able to open a browser at localhost:6080
docker run --rm -ti --name=grid -p 4444:24444 -p 5900:25900 \
-v /dev/shm:/dev/shm -p 6080:26080 -e NOVNC=true \
elgalu/selenium
If the VNC password was randomly generated find out with
docker exec grid wait_all_done 30s
#=> ... a VNC password was generated for you: ooGhai0aesaesh
If your tests crash in Chrome you may need to increase the shm size, or simply start your container sharing -v /dev/shm:/dev/shm
docker run ... -v /dev/shm:/dev/shm
Alternatively you can increase it inside the container:
- start docker in privileged mode:
docker run --privileged
- increase shm size from the default 64MB to something bigger:
docker exec grid sudo umount /dev/shm
docker exec grid sudo mount -t tmpfs -o rw,nosuid,nodev,noexec,relatime,size=512M tmpfs /dev/shm
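To double-check that a size argument like the 512M above is what you intend, you can convert it to bytes and compare against `os.statvfs("/dev/shm")` inside the container. A sketch; the K/M/G suffix handling is an assumption mirroring mount's tmpfs size option:

```python
# Convert '512M'-style tmpfs size strings to a byte count.
def size_to_bytes(size):
    """Return the number of bytes for a tmpfs size string like '512M'."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if size and size[-1].upper() in units:
        return int(size[:-1]) * units[size[-1].upper()]
    return int(size)  # plain number means bytes
```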
In CentOS, and apparently since docker 1.10.0, it is necessary to disable sandbox mode through --no-sandbox; see the example client implementation below.
The error comes along with this message while starting Chrome:
Failed to move to new namespace: PID namespaces supported. Network namespace supported, but failed: errno = Operation not permitted
ChromeOptions options = new ChromeOptions();
options.addArguments("--no-sandbox");
In Protractor:
capabilities: {
browserName: 'chrome',
chromeOptions: {
args: ['--no-sandbox'],
},
},
However this is now the default of this image (see CHROME_ARGS="--no-sandbox"
in the Dockerfile), so don't be surprised to see the "Stability and security will suffer" banner when opening Chrome inside the container.
Using VNC_PASSWORD=no
makes VNC accessible without a password; leave it empty to get a randomly generated one, or if you don't use VNC simply deactivate it via docker run ... -e VNC_START=false
The docker images are built and pushed from TravisCI for full traceability.
Do NOT expose your selenium grid to the outside world (e.g. in AWS), because Selenium does not provide auth. Therefore, if the ports are not firewalled malicious users will use your selenium grid as a bot net.
Firewall considerations aside, a file scm-source.json is included at the root directory of the generated image, with information that helps comply with auditing requirements by tracing the creation of this docker image.
Note scm-source.json file will always be 1 commit outdated in the repo but will be correct inside the container.
This is how the file looks:
cat scm-source.json
#=> {
  "url": "https://github.com/elgalu/docker-selenium",
  "revision": "8d2e03d8b4c45c72e0c73481d5141850d54122fe",
  "author": "lgallucci",
  "status": ""
}
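For automated audits you can parse that file (e.g. after a `docker exec grid cat scm-source.json`) and assert the revision you expect. A small sketch using the sample content above:

```python
# Parse scm-source.json and inspect its provenance fields.
import json

SAMPLE = """{ "url": "https://github.com/elgalu/docker-selenium",
  "revision": "8d2e03d8b4c45c72e0c73481d5141850d54122fe",
  "author": "lgallucci",
  "status": "" }"""


def parse_scm_source(text):
    """Return the scm-source.json fields as a dict."""
    return json.loads(text)
```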
There are also additional steps you can take to ensure you're using the correct image:
You can simply verify that image id is indeed the correct one.
# e.g. full image id for some specific tag version
export IMGID="<<Please see CHANGELOG.md>>"
if docker inspect -f='{{.Id}}' elgalu/selenium:latest |grep ${IMGID} &> /dev/null; then
echo "Image ID tested ok"
else
echo "Image ID doesn't match"
fi
Given that docker.io currently allows pushing the same tag twice, this represents a security concern; but since docker >= 1.6.2 it is possible to fetch the sha256 digest instead of the tag, so you can be sure you're using the exact same docker image every time:
# e.g. sha256 for some specific tag
export SHA=<<Please see CHANGELOG.md>>
docker pull elgalu/selenium@sha256:${SHA}
You can find all sha256 digests and image ids per tag in the CHANGELOG, so as of now you just need to trust the sha256 there. The bullet-proof option, if security is a big concern, is to fork this project and build the images yourself.
To open the Sauce Labs tunnel while starting the docker container, pass the arguments -e SAUCE_TUNNEL=true -e SAUCE_USER_NAME=leo -e SAUCE_API_KEY=secret
This also requires the tunnel to open successfully, else the container will exit, so you can be sure your tunnel is up and running before starting to test.
To open the BrowserStack tunnel while starting the docker container, pass the arguments -e BSTACK_TUNNEL=true -e BSTACK_ACCESS_KEY=secret
This also requires the tunnel to open successfully, else the container will exit, so you can be sure your tunnel is up and running before starting to test.
Note the below method gives full access to the docker container to the host machine.
Host machine, terminal 1:
sudo apt-get install xserver-xephyr
export XE_DISP_NUM=12 SCREEN_WIDTH=2000 SCREEN_HEIGHT=1500
Xephyr -ac -br -noreset -resizeable \
-screen ${SCREEN_WIDTH}x${SCREEN_HEIGHT} :${XE_DISP_NUM}
Host machine, terminal 2:
docker run --rm --name=ch -p=4444:24444 \
-v /dev/shm:/dev/shm \
-e SCREEN_WIDTH -e SCREEN_HEIGHT -e XE_DISP_NUM \
-v /tmp/.X11-unix/X${XE_DISP_NUM}:/tmp/.X11-unix/X${XE_DISP_NUM} \
elgalu/selenium
Now run your tests as usual; the browsers will render inside the Xephyr window. If docker run fails, try xhost +
If you git clone this repo locally and cd
into where the Dockerfile is, you can:
docker build -t selenium .
CH=$(docker run --rm --name=CH -p=127.0.0.1::24444 -p=127.0.0.1::25900 \
-v /e2e/uploads:/e2e/uploads selenium)
Note: -v /e2e/uploads:/e2e/uploads
is optional, in case you are testing browser uploads in your WebApp; you'll probably need to share a directory for this.
The 127.0.0.1::
part avoids binding to all network interfaces; most of the time you don't need to expose the docker container like that, so just localhost for now.
I like to remove the containers after each e2e test with --rm
since this docker container is not meant to preserve state; spawning a new one takes less than 3 seconds. Think of docker containers as processes, not as running virtual machines, in case you are familiar with vagrant.
A dynamic port will be bound to the container ones, i.e.
# Obtain the selenium port you'll connect to:
docker port $CH 4444
#=> 127.0.0.1:49155
# Obtain the VNC server port in case you want to look around
docker port $CH 25900
#=> 127.0.0.1:49160
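Your test runner can consume that `docker port` output directly. A small illustrative parser for lines like `127.0.0.1:49155`:

```python
# Split a `docker port` output line into a (host, port) pair.
def parse_port_mapping(line):
    """Return (host, port) from e.g. '127.0.0.1:49155'."""
    host, _, port = line.strip().rpartition(":")
    return host, int(port)
```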
In case you have the RealVNC binary vnc
in your path, you can always take a look, in view-only mode to avoid messing up your tests with an unintended mouse click or key press.
./bin/vncview.sh 127.0.0.1:49160
This command line is the same as for Chrome; remember the running selenium container is able to launch either Chrome or Firefox. The idea behind having 2 separate containers, one for each browser, is convenience, plus avoiding certain :focus
issues your WebApp may encounter during e2e automation.
FF=$(docker run --rm --name=ff -p=127.0.0.1::24444 -p=127.0.0.1::25900 \
-v /e2e/uploads:/e2e/uploads selenium)
CONTAINER_IP=$(docker logs sele10 2>&1 | grep "Container docker internal IP: " | sed -e 's/.*IP: //' -e 's/<.*$//')
echo ${CONTAINER_IP} #=> 172.17.0.34
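The same extraction as the sed pipeline above can be done with a Python regex; the log line format is taken from the example, so treat it as an assumption:

```python
# Extract the container's internal IP from the docker logs output.
import re

IP_RE = re.compile(r"Container docker internal IP: (\d+\.\d+\.\d+\.\d+)")


def extract_container_ip(log_text):
    """Return the container's internal IP, or None if not logged yet."""
    match = IP_RE.search(log_text)
    return match.group(1) if match else None
```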
docker images
#=>
REPOSITORY TAG IMAGE ID CREATED SIZE
selenium latest a13d4195fc1f About an hour ago 2.927 GB
ubuntu xenial-20160525 2fa927b5cdd3 4 weeks ago 122 MB
By default docker run
sets the DNS to Google's (8.8.8.8 and 8.8.4.4); however you may need to use your own.
First attempt is to use --dns
option, e.g.
docker run --dns=1.1.1.1 --dns=1.1.1.2 <args...>
However this may not work for you, and you may simply want to share the same DNS name resolution as the docker host machine, in which case you should use --net=host
along with --pid=host
docker run --net=host --pid=host <args...>
The --pid=host
flag is included to avoid moby/moby#5899: sudo: unable to send audit message: Operation not permitted
Full example using --net=host
and --pid=host
; for this to work on OSX you need the latest docker mac package, so upgrade if you haven't done so in the last month.
docker run -d --name=grid --net=host --pid=host \
-v /dev/shm:/dev/shm -e SELENIUM_HUB_PORT=4444 \
elgalu/selenium
docker exec grid wait_all_done 30s
./test/python_test.py
docker run -d --net=host --pid=host --name=grid -v /dev/shm:/dev/shm elgalu/selenium
docker exec grid wait_all_done 30s
All output is sent to stdout so it can be inspected by running:
$ docker logs -f <container-id|container-name>
Powered by Supervisor, the container leaves many logs:
/var/log/cont/docker-selenium-status.log
/var/log/cont/selenium-hub-stderr.log
/var/log/cont/selenium-hub-stdout.log
/var/log/cont/selenium-node-chrome-stderr.log
/var/log/cont/selenium-node-chrome-stdout.log
/var/log/cont/selenium-node-firefox-stderr.log
/var/log/cont/selenium-node-firefox-stdout.log
/var/log/cont/sshd-stderr.log
/var/log/cont/sshd-stdout.log
/var/log/cont/supervisord.log
/var/log/cont/video-rec-stderr.log
/var/log/cont/video-rec-stdout.log
/var/log/cont/vnc-stderr.log
/var/log/cont/vnc-stdout.log
/var/log/cont/xmanager-stderr.log
/var/log/cont/xmanager-stdout.log
/var/log/cont/xterm-stderr.log
/var/log/cont/xterm-stdout.log
/var/log/cont/xvfb-stderr.log
/var/log/cont/xvfb-stdout.log
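After a failed run, it can save time to list which of the stderr logs above are non-empty so you know where to look first. A sketch; the directory and suffix are the documented ones, the helper itself is illustrative:

```python
# List non-empty *-stderr.log files under the container's log directory.
import os


def non_empty_stderr_logs(log_dir="/var/log/cont", suffix="-stderr.log"):
    """Return the names of non-empty stderr logs under log_dir, sorted."""
    if not os.path.isdir(log_dir):
        return []
    return sorted(
        name
        for name in os.listdir(log_dir)
        if name.endswith(suffix)
        and os.path.getsize(os.path.join(log_dir, name)) > 0
    )


# usage (inside the container): non_empty_stderr_logs()
```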