- Repo Structure
- Mobile Testkit Dependencies
- Development Environment
- Running Mobile-Testkit Framework Unit Tests
- Sync Gateway Tests
- Development Environment
- Listener Tests
- Debugging
- Monitoring
This repository contains Mobile QE Functional / Integration tests.
$ git clone https://github.com/couchbaselabs/mobile-testkit.git
$ cd mobile-testkit/
The repo structure is the following:
- libraries
  - keywords (new development, will be renamed to testkit in the near future)
  - provision
  - testkit (legacy)
  - utilities
- testsuites
  - listener
  - syncgateway
The "controller" is the machine that runs the tests and communicates with the system under test, which is typically:
-
Your developer workstation
-
A virtual machine / docker container
The instructions below are for setting up directly on OSX. If you prefer to run this under Docker, see the Running under Docker wiki page.
If you are on OSX El Capitan, you must install python via brew rather than using the system python due to Pip issue 3165.
$ brew install python@3.7
After you install it, you should see that the python installed via brew is the default python:
$ which python
/usr/local/bin/python
$ python --version
Python 3.7.x
On MacOSX
$ brew install libcouchbase@2
On CentOS7
$ yum install gcc libffi-devel python-devel openssl-devel
$ yum install -y libev libevent
$ wget http://packages.couchbase.com/clients/c/libcouchbase-2.5.6_centos7_x86_64.tar && \
tar xvf libcouchbase-2.5.6_centos7_x86_64.tar && \
cd libcouchbase-2.5.6_centos7_x86_64 && \
rpm -ivh libcouchbase-devel-2.5.6-1.el7.centos.x86_64.rpm \
libcouchbase2-core-2.5.6-1.el7.centos.x86_64.rpm \
libcouchbase2-bin-2.5.6-1.el7.centos.x86_64.rpm \
libcouchbase2-libevent-2.5.6-1.el7.centos.x86_64.rpm && \
rm -f *.rpm
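To sanity-check the libcouchbase install, the cbc command line tool shipped in the libcouchbase2-bin package can report its version (an optional, quick check):

$ cbc version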
If you run into this error when running source setup.sh:

src/pycbc.h:29:31: fatal error: libcouchbase/cbft.h: No such file or directory
#include <libcouchbase/cbft.h>

then update your libcouchbase version via:
wget http://packages.couchbase.com/releases/couchbase-release/couchbase-release-1.0-2-x86_64.rpm
sudo rpm -iv couchbase-release-1.0-2-x86_64.rpm
sudo yum install libcouchbase-devel libcouchbase2-bin gcc gcc-c++
$ [sudo] pip install virtualenv
$ cd mobile-testkit/
Setup PATH, PYTHONPATH, and ANSIBLE_CONFIG
source setup.sh
If you plan on doing development, it may be helpful to add the PYTHONPATH env variables to your .bashrc file so that you do not have to run this setup every time you open a new shell.
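For example, a minimal sketch of what you might append to your .bashrc (illustrative only; it assumes the repo lives at ~/mobile-testkit, and the authoritative values are whatever setup.sh exports):

# Illustrative paths: mirror what setup.sh sets for your checkout
export PYTHONPATH="$PYTHONPATH:$HOME/mobile-testkit:$HOME/mobile-testkit/libraries"
export ANSIBLE_CONFIG="$HOME/mobile-testkit/ansible.cfg"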
Below is an example of how to run the mobile-testkit framework unit tests:
pytest libraries/provision/test_install_sync_gateway.py
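To run a single test from that module, pytest's standard -k selector can also be used (the substring below is a placeholder):

$ pytest -v -k "some_test_name_substring" libraries/provision/test_install_sync_gateway.py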
The sync_gateway tests require targeting different cluster topologies of sync_gateway(s) + Couchbase Server(s). Don’t worry! We will set this up for you. There are a few options for these cluster nodes: AWS EC2 instances, Docker (Sequoia), or local VMs (Vagrant).
The sync_gateway tests use Ansible to provision the clusters.
Prerequisites:

- Go installed
- Docker installed
Download and build Sequoia
$ go get -v github.com/couchbaselabs/sequoia
$ cd $GOPATH/src/github.com/couchbaselabs/sequoia
$ go build
Edit the providers/docker/options.yml file to specify the versions you would like to use
- Setup the Sync Gateway + Couchbase Server cluster
$ ./sequoia -test tests/mobile/test_sg.yml -scope tests/mobile/scope_1sg_1cbs.yml --expose_ports --skip_teardown --skip_test --network cbl
- Copy the hosts.json from $SEQUOIA_REPO_ROOT/hosts.json to the root of the mobile-testkit repo
- Mount testkit. Make sure to do this from the root directory of your development repository. This will mirror your local repo in the container and allow changes you make on the host to be mirrored in the container context
$ docker run --privileged -it --network=cbl --name mobile-testkit -v $(pwd):/opt/mobile-testkit -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker couchbase/mobile-testkit /bin/bash
[mobile-testkit] $ python libraries/utilities/generate_config_from_sequoia.py --host-file hosts.json --topology base_cc
- Run tests (Functional or System)
[mobile-testkit] $ pytest -s --skip-provisioning --xattrs --mode=cc --server-version=5.0.0-3217 --sync-gateway-version=1.5.0-465 testsuites/syncgateway/functional/tests/
You will need an access key and secret access key. The AWSCredentials guide explains how to get them from your AWS account.
Then you will need an AWS keypair. The EC2 keypairs guide explains how to import your own Key Pair to Amazon EC2. Mobile-testkit creates a key-pair in the us-east region so the key pair must be set on this region too.
- Add boto configuration
$ cd ~/
$ touch .boto
$ vi .boto
Note: Do not check in the information below.
- Add your AWS credentials (below is a fake example).
[Credentials]
aws_access_key_id = CDABGHEFCDABGHEFCDAB
aws_secret_access_key = ABGHEFCDABGHEFCDABGHEFCDABGHEFCDABGHEFCDAB
- Add AWS env variables
$ export AWS_ACCESS_KEY_ID=CDABGHEFCDABGHEFCDAB
$ export AWS_SECRET_ACCESS_KEY=ABGHEFCDABGHEFCDABGHEFCDABGHEFCDABGHEFCDAB
$ export AWS_KEY=<your-aws-keypair-name>
You probably want to persist these in your .bash_profile
- Create an AWS CloudFormation Stack. Make sure you have set up the AWS credentials described in [Sync Gateway Test Dependencies].
$ python libraries/provision/create_and_instantiate_cluster.py \
--stackname="YourCloudFormationStack" \
--num-servers=1 \
--server-type="m3.large" \
--num-sync-gateways=2 \
--sync-gateway-type="m3.medium" \
--num-gatlings=1 \
--gatling-type="m3.medium" \
--num-lbs=0 \
--lb-type="m3.medium"
- Generate the pool.json file
Replace YourCloudFormationStack with the actual CloudFormation stack name used.
python libraries/provision/generate_pools_json_from_aws.py --stackname YourCloudFormationStack --targetfile resources/pool.json
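For reference, a hypothetical sketch of the shape the generated resources/pool.json takes (the hostnames are placeholders and the exact keys may differ by version; compare with resources/pool.json.example):

{
    "ips": [
        "ec2-xx-xx-xxx-x1.compute-1.amazonaws.com",
        "ec2-xx-xx-xxx-x2.compute-1.amazonaws.com",
        "ec2-xx-xx-xxx-x3.compute-1.amazonaws.com",
        "ec2-xx-xx-xxx-x4.compute-1.amazonaws.com"
    ]
}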
Note: This has only been tested on Mac OSX.
- NOTE: Install Vagrant only when VirtualBox is up and running
- cd into vagrant/private_network (or vagrant/public_network if you need the VMs exposed to the LAN, for example when testing against actual mobile devices)
- Create the cluster.
vagrant up
- Run the following.
python utilities/generate_cluster_configs_from_vagrant_hosts.py --private-network|public-network
This will discover running vagrant boxes and get their ips.
- Generate resources/pool.json
  - Go to the resources folder and copy pool.json.example to generate the pool.json
- Generate resources/cluster_configs/
- Create an ssh key.
cd <home-dir>/.ssh/ && ssh-keygen
- Make sure you have PasswordAuthentication set on each vagrant instance
cd vagrant/private_network/ && vagrant ssh host1
[root@localhost vagrant]# sudo bash
[root@localhost vagrant]# vi /etc/ssh/sshd_config
...
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
PasswordAuthentication yes
...
[root@localhost vagrant]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service
- Repeat those steps for all hosts listed in the Vagrantfile.
- Install the ssh key into the machines via
python libraries/utilities/install_keys.py --public-key-path=~/.ssh/id_rsa.pub --ssh-user=vagrant --ssh-password=vagrant
The password is set to vagrant.
pip install cryptography
- Create ansible.cfg
$ cp ansible.cfg.example ansible.cfg
$ vi ansible.cfg # edit to your liking
- Edit ansible.cfg and change the user to 'vagrant'
- Set the CLUSTER_CONFIG environment variable that is required by the provision_cluster.py script.
$ export CLUSTER_CONFIG=resources/cluster_configs/1sg
- Install the dependencies
python libraries/provision/install_deps.py
- Run the following command.
python libraries/provision/provision_cluster.py --server-version=5.5.0 --sync-gateway-version=2.1
This command downloads and provisions the specified versions of Sync Gateway and Couchbase Server on the VMs. It looks for those versions at builds/latestbuilds and builds/releases; both URLs require being on the Couchbase VPN.
Enjoy! You now have a Couchbase Server + Sync Gateway cluster running on your machine!
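As a quick smoke test, the Sync Gateway public REST port should answer with version information (replace <sync-gateway-ip> with one of your sync_gateway node IPs):

$ curl http://<sync-gateway-ip>:4984/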
$ cp ansible.cfg.example ansible.cfg
$ vi ansible.cfg # edit to your liking
Make sure to use your ssh user ("root" is default). If you are using AWS, you may have to change this to "centos"
This is the list of machines that is used to generate the resources/cluster_configs which are used by the functional tests.
- Rename resources/pool.json.example to resources/pool.json. Update the fake ips with your endpoints or EC2 endpoints.
- If you do not have IP endpoints and would like to use Vagrant, see [Spin Up Machines on Vagrant]
- If you do not have IP endpoints and would like to use AWS, see Spin Up Machines on AWS
- Make sure you have at least 4 unique endpoints
- If you are using vms and do not have key access for ssh, you can use the key installer script (not required for AWS). This will target 'resources/pool.json' and attempt to deploy a public key of your choice to the machines.
In order to use Ansible, the controller needs to have its SSH keys on all the hosts that it is connecting to.
Follow the instructions in the Docker container SSH key instructions to set up keys in Docker.
python libraries/utilities/install_keys.py --public-key-path=~/.ssh/id_rsa.pub --ssh-user=root
- Generate the necessary cluster topologies to run the tests
python libraries/utilities/generate_clusters_from_pool.py
This targets the 'resources/pool.json' you supplied above and generates the cluster definitions required for provisioning and running the tests. The generated configurations can be found in 'resources/cluster_configs/' (see the example listing after this list).
- Provision the cluster with the --install-deps flag (only once)
- Set the CLUSTER_CONFIG environment variable that is required by the provision_cluster.py script, e.g.:
$ export CLUSTER_CONFIG=resources/cluster_configs/2sg_1cbs
- Install the dependencies:
python libraries/provision/install_deps.py
- Install sync_gateway package:
$ python libraries/provision/provision_cluster.py \
--server-version=4.1.1 \
--sync-gateway-version=1.2.0-79
- OR install sync_gateway from source:
Since building Sync Gateway from source requires access to the private sync-gateway-accel repo, you will need to be in possession of the appropriate SSH key. See install-gh-deploy-keys.py for more info.
$ python libraries/utilities/install-gh-deploy-keys.py
--key-path=/path/to/cbmobile_private_repo_read_only_key
--ssh-user=vagrant
$ python libraries/provision/provision_cluster.py \
--server-version=4.1.1 \
--sync-gateway-commit=062bc26a8b65e63b3a80ba0f11506e49681d4c8c (requires full commit hash)
If you experience ssh errors, you may need to verify that the key has been added to your ssh agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/sample_key
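As referenced above, a hypothetical listing of resources/cluster_configs/ after generation (the names depend on your pool.json; those shown are just the topologies mentioned elsewhere in this guide):

$ ls resources/cluster_configs/
1sg  2sg_1cbs  ...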
- Follow the instructions here: http://docs.ansible.com/ansible/intro_windows.html
- Create an inventory similar to:
[windows]
win1 ansible_host=111.22.333.444
[windows:vars]
# Use your RDP / local windows user credentials for ansible_user / ansible_password
ansible_user=FakeUser
ansible_password=FakePassword
ansible_port=5986
ansible_connection=winrm
# The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation=ignore
Save as resources/cluster_configs/windows
Note: Do not publish or check this inventory file in. If you do, anyone could potentially access your machine.
- Download ConfigureRemotingForAnsible.ps1 and execute it in the Windows target PowerShell (Run as Administrator)
.\ConfigureRemotingForAnsible.ps1 -SkipNetworkProfileCheck
$url = "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
$file = "$env:temp\ConfigureRemotingForAnsible.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file
To view the current listeners that are running on the WinRM service, run the following command:
winrm enumerate winrm/config/Listener
If you hit errors, you may have to allow unsigned script execution (Use with caution)
Set-ExecutionPolicy Unrestricted
If you hit this error: "ansible winrm: the specified credentials were rejected by the server", first make sure the credentials are right. If they are, try changing the port from 5985 to 5986 or vice versa. If that does not work, make the changes below:
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
Test by:
ansible windows -i resources/cluster_configs/windows -m win_ping
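If connectivity is good, the output should look something like the following (the hostname will match your inventory):

win1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}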
You may use whatever environment you would like; however, PyCharm is a very good option. There are a couple of steps required to get going with this IDE if you choose to use it.
- Go to PyCharm → Preferences
- Expand Project: mobile-testkit and select Project Interpreter
- From the dropdown, make sure your venv (created above) is selected
- Click Apply
- Click on the gear next to the interpreter
- Select More …
- Make sure your virtualenv is selected and click on the directory icon on the bottom (Show Paths for Selected Interpreter)
- Click the plus icon and find the path to mobile-testkit/
- Select libraries from inside the repo directory
- Click OK, OK, Apply
Now PyCharm should recognize the custom libraries and provide intellisense.
- Open Android Studio
- Open the code at mobile-testkit/app/testkit.java/Testkit.java.Android/Tests/AndroidClient2
- Build the app
- Run the app
- If any changes are made to the Android code, make sure you run the following:
  - Lint the code: Analyze → Inspect Code (https://developer.android.com/studio/write/lint.html)
  - Code styles and format code: Code → Reformat Code
The listener tests are a series of tests utilizing the Couchbase Lite Listener via LiteServ and Sync Gateway or P2P. They are meant to be cross platform and should be able to run on all the platforms that expose the Listener (Mac OSX, .NET, Android, iOS).
Make sure you have a Sync Gateway + Couchbase server running: See above for provisioning
Android SDK. Download Android Studio to install it, then set:
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH
Mono, to execute LiteServ .NET on Mac OSX:
http://www.mono-project.com/docs/getting-started/install/mac/
Install libimobiledevice to capture device logging for iOS
$ brew install --HEAD libimobiledevice
$ brew install ideviceinstaller
Install ios-deploy to bootstrap installing / launching of iOS apps
brew install node
npm install -g ios-deploy
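A few quick sanity checks for the iOS tooling above (these assume a device is attached over USB):

$ idevicesyslog        # stream device logs via libimobiledevice
$ ideviceinstaller -l  # list apps installed on the attached device
$ ios-deploy -c        # detect connected devices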
The Listener is exposed via a LiteServ application which will be downloaded and launched when running the test.
Note: For running with Android, you must be running an emulator or device. The easiest is Genymotion with NAT; however, devices are supported as long as the sync_gateway and the Android device can communicate.
- In Android Studio, go to File/Settings (for Windows and Linux) or to Android Studio/Preferences (for Mac OS X)
- Select Plugins and click Browse Repositories.
- Right-click on Genymotion and click Download and install. To see the Genymotion plugin icon, display the toolbar by clicking View > Toolbar.
- Or download Genymotion from https://www.genymotion.com/download/ and install it
- Open Genymotion and choose Custom Phone from the available templates
- Open the Android app in Android Studio (use the sample app from the mobile-testkit framework: mobile-testkit/CBLClient/Apps/CBLTestServer-Android)
- Build the app using the custom phone emulator
- Now you should see the custom phone running in VirtualBox
- Verify that the emulator IP (under network details) matches the VirtualBox IP (VirtualBox/File/Host Network Manager/DHCP Server). If it does not match, update it and restart VirtualBox and Genymotion.
- Open Android Studio
- Create a new "dummy" project
- Click on the AVD Manager (purple icon)
- Create Virtual Device
- Click "Download" next to Marshmallow x86_64
- Hit Next/Finish to create it
The scenarios can run on Android stock emulators/Genymotion emulators and devices.
If you’re running Android stock emulators you should make sure they are using HAXM. Follow the instructions here to install HAXM.
Ensure the RAM allocated to your combined running emulators is less than the total allocated to HAXM. You can configure the RAM for your emulator images in the Android Virtual Device Manager and in HAXM by reinstalling via the .dmg in the android sdk folder.
To run the tests, make sure you have launched the correct number of emulators. You can launch them using the following command.
emulator -scale 0.25 @Nexus_5_API_23 &
emulator -scale 0.25 @Nexus_5_API_23 &
emulator -scale 0.25 @Nexus_5_API_23 &
emulator -scale 0.25 @Nexus_5_API_23 &
emulator -scale 0.25 @Nexus_5_API_23 &
Verify that the names listed below match the device definitions for the test you are trying to run
adb devices -l
List of devices attached
emulator-5562 device product:sdk_google_phone_x86 model:Android_SDK_built_for_x86 device:generic_x86
emulator-5560 device product:sdk_google_phone_x86 model:Android_SDK_built_for_x86 device:generic_x86
emulator-5558 device product:sdk_google_phone_x86 model:Android_SDK_built_for_x86 device:generic_x86
emulator-5556 device product:sdk_google_phone_x86 model:Android_SDK_built_for_x86 device:generic_x86
emulator-5554 device product:sdk_google_phone_x86 model:Android_SDK_built_for_x86 device:generic_x86
Most of the port forwarding will be set up via instantiation of the Listener. However, you do need to complete some additional steps.
Note: Instantiating a Listener in test_listener_rest.py will automatically forward the port the listener is running on to one on localhost. However, that port forwarding will not be bound on the local IP of your computer. This can be useful when combining actual devices and emulators. The following section describes how to make the emulators reachable from devices.
Once you have emulators and possibly port forwarding set up, set the P2P_APP environment variable to the .apk of the application to be tested.
$ export P2P_APP=/path/to/apk
If the test fails with a hostname unreachable error then it’s probably because port forwarding needs to be configured (read section below).
Add the following lines to the file /etc/sysctl.conf
net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1
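To apply these settings without a reboot, they can also be set at runtime (a sketch; the sysctl.conf entries keep them persistent):

$ sudo sysctl -w net.inet.ip.forwarding=1
$ sudo sysctl -w net.inet6.ip6.forwarding=1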
Specifying the 'local_port' when instantiating a Listener will forward the port on localhost only.
We need to bind the port on the `en0` interface to be reachable on the Wi-Fi. On Mac, this can be done with `pfctl`. Create a new anchor file under `/etc/pf.anchors/com.p2p`:
rdr pass on lo0 inet proto tcp from any to any port 10000 -> 127.0.0.1 port 10000
rdr pass on en0 inet proto tcp from any to any port 10000 -> 127.0.0.1 port 10000
rdr pass on lo0 inet proto tcp from any to any port 11000 -> 127.0.0.1 port 11000
rdr pass on en0 inet proto tcp from any to any port 11000 -> 127.0.0.1 port 11000
...
Parse and test your anchor file to make sure there are no errors:
sudo pfctl -vnf /etc/pf.anchors/com.p2p
The file at /etc/pf.conf is the main configuration file that pf loads at boot. Make sure to add the two com.p2p lines shown below to /etc/pf.conf:
scrub-anchor "com.apple/*"
nat-anchor "com.apple/*"
rdr-anchor "com.apple/*"
rdr-anchor "com.p2p" # Port forwading for p2p replications
dummynet-anchor "com.apple/*"
anchor "com.apple/*"
load anchor "com.apple" from "/etc/pf.anchors/com.apple"
load anchor "com.p2p" from "/etc/pf.anchors/com.p2p" # Port forwarding for p2p replications
The lo0 entries are for local requests, and the en0 entries are for external requests (coming from an actual device or another emulator targeting your host).
Next, load and enable pf by running the following:
$ sudo pfctl -ef /etc/pf.conf
Now, all the databases are reachable on the internal network via host:forwarded_port (ex. http://192.168.0.21:10000/db), where 192.168.0.21 is your host computer’s ip and 10000 is the 'local_port' passed when instantiating the Listener.
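For example, a quick check from another machine on the same network (the IP and port are the placeholders from the example above):

$ curl http://192.168.0.21:10000/db/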
Thanks to pytest, you can break into pdb very easily
import pdb
for thing in things:
    pdb.set_trace()
    # break here ^
    thing.do()
If you want the test to drop into pdb at the point of failure, you can execute the test with the flag
pytest --pdb
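The flag combines with the options used elsewhere in this guide, for example:

$ pytest -s --pdb testsuites/syncgateway/functional/tests/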
Make sure you have installed expvarmon
go get github.com/divan/expvarmon
To monitor the Gateload expvars for [load_generators] nodes in the cluster_config:
python libraries/utilities/monitor_gateload.py
To monitor the sync_gateway expvars for [sync_gateways] nodes in the cluster_config:
python libraries/utilities/monitor_sync_gateway.py