This Collective Knowledge repository provides a high-level front-end to the ARM Workload Automation framework (WA). It includes a unified JSON API to WA, automated experimentation, benchmarking and tuning across farms of machines (Android, Linux, MacOS, Windows), a web-based dashboard, optimization knowledge sharing, etc. Please read the CK Getting Started Guide, the DATE'16 paper and the CPC'15 article for more details about CK and our vision of collaborative, reproducible and systematic experimentation.
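Internally, CK converts each command into a unified JSON dictionary, so extra parameters can also be passed as a JSON file. Here is a minimal sketch (the ''input.json'' file name and the ''target'' key mirror the ''--target'' flag used later in this guide; check the ''wa'' module documentation for the exact keys it accepts):
$ echo '{"target":"my-target-machine"}' > input.json
$ ck run wa:youtube @input.json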
Check out our related project to crowdsource benchmarking and optimization of deep learning (engines, models, inputs): http://cKnowledge.io
- BSD, 3-clause
Relatively stable. Development is led by dividiti, the cTuning foundation and ARM.
You can see some public crowd-benchmarking results in the CK public repo - just select the "crowd-benchmark shared workloads via ARM WA framework" crowdsourcing scenario.
- Upcoming CK presentation and demo at ARM TechCon'16 (Oct. 27);
- Press release: ARM and dividiti use CK to accelerate computer engineering (2016, HiPEAC Info'45, page 17)
- Collective Knowledge framework (@GitHub)
ARM WA will be installed automatically by CK. It requires the following host packages (see the combined install command after this list):
- python2
- pip (install via apt-get install python-pip)
- yaml (install via apt-get install libyaml-dev)
- trace-cmd (install via apt-get install trace-cmd) - useful for advanced scenarios
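For example, on Debian or Ubuntu, you can install all of the above host dependencies in one go (assuming an apt-based distribution; package names may differ on other systems):
$ sudo apt-get install python python-pip libyaml-dev trace-cmd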
CK is very portable and can run on diverse platforms including Linux, Windows and MacOS with Python 2.7 and 3.2+. WA can currently run only on Linux/MacOS with Python 2.7. In the future, we plan to provide tighter integration of WA with CK to make it run on any platform with Python 2.7 and 3.2+.
- Grigori Fursin, dividiti / cTuning foundation
- Sergei Trofimov, ARM
- Michael McGeagh, ARM
- Dave Butcher, ARM
- Anton Lokhmotov, dividiti
- Ed Plowman, ARM
Obtain the CK repository for Workload Automation:
$ ck pull repo:ck-wa
Note that other CK repositories (dependencies) will also be pulled automatically. For example, the ''ck-wa-workloads'' repository with all WA workloads shared in the CK format will also be installed from GitHub. We expect that other users will later be able to easily share, plug in and reuse their workloads (or use them in private workgroups).
First, you need to register a target machine in CK as described in detail in the CK wiki:
$ ck add machine:my-target-machine
Please select either ''WA: Android machine accessed via ARM's workload automation framework'' for an Android-based machine or ''WA: Linux machine accessed via ARM's workload automation'' for a Linux-based machine.
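You can check which target machines are already registered in CK via (''ck list'' works for any CK module, so this simply lists all ''machine'' entries):
$ ck list machine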
Now you can see available WA workloads via
$ ck list wa
Now you can try to run the youtube workload via the CK universal pipeline using an Android mobile device connected via ADB (results will be recorded in a local ''wa_output'' directory):
$ ck run wa:youtube --target=my-target-machine
Note that all raw and unified results will be automatically recorded in ''wa-result'' entries. You can see these entries via
$ ck list wa-result
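You can also inspect the unified JSON meta of any recorded result via the standard ''ck load'' command (replace {UID} with an entry name or UID from the list above):
$ ck load wa-result:{UID}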
You can also browse results in a user-friendly way via web-based WA dashboard:
$ ck dashboard wa
You can easily clean all results via:
$ ck rm wa-result:* --force
Some workloads require mandatory parameters. You can cache them and reuse them later via the ''--cache'' flag, e.g.
$ ck run wa:skype --cache
You will be asked for these parameters only once. Note that, at the moment, password parameters are recorded in the CK repo in plain text, which is insecure. We plan to develop a secure authentication mechanism for such workloads in the future: ARM-software/workload-automation#267
We also provide a ''--scenario'' flag to pre-select a device config (such as the instruments used) and parameters. You can see available WA scenarios via
$ ck list wa-scenario
You can then run a given workload with a given scenario via
$ ck run wa:youtube --scenario=cpu
Note that the ''cpu'' scenario requires trace-cmd to be installed on your host machine. ''It may also require your mobile device to be rooted''! On Ubuntu, you can install it via
$ sudo apt install trace-cmd
Workloads which have C sources (currently '''dhrystone''' and '''memcpy''') are converted into the universal CK program format. This allows users to reuse the powerful crowd-benchmarking, autotuning and crowd-tuning functionality in CK, which works across different hardware, operating systems and compilers.
For example, you can compile the dhrystone workload via
$ ck compile program:dhrystone --speed --target=my-target-machine
You can then run the dhrystone workload via CK and record the results in the tmp directory via
$ ck run program:dhrystone --target=my-target-machine
$ ls `ck find program:dhrystone`/tmp
You can autotune the above program (using shared autotuning plugins) via
$ ck autotune program:dhrystone --target=my-target-machine
When autotuning/exploration is finished, you will see information on how to plot a graph with the results.
You can also replay a given WA run using the UID of a ''wa-result'' entry via
$ ck replay wa-result:{UID}
You can delete all the above results via
$ ck rm wa-result:* --force
We have prepared a demo to crowdsource benchmarking (and tuning) of shared workloads. Any user can participate in crowd-benchmarking simply as follows:
$ ck crowdbench wa:dhrystone
You can also attribute your public contributions using the ''--user'' flag via
$ ck crowdbench wa:googlephotos --user=grigori@dividiti.com
The results are aggregated in the Collective Knowledge public repository. You just need to select the crowdsourcing scenario "crowd-benchmark shared workloads via ARM WA framework".
On the same page, you can also see all participating platforms, CPUs, GPUs and operating systems, as well as the user timeline.
You can also participate in crowd-tuning of other shared workloads simply via
$ ck pull repo:ck-crowdtuning
$ ck crowdsource experiment
Finally, you can participate in crowd-benchmarking and crowd-tuning using commodity mobile phones via two Android apps.
You can install or update the latest WA from GitHub for a given target machine via CK:
$ ck install package:arm-wa-github
Just follow the online instructions to reinstall WA on your machine.
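You can also check which WA installations CK has registered in its environment entries (assuming the package is tagged with ''wa''; adjust the tags if your setup differs):
$ ck show env --tags=wa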
It is possible to run workloads (currently those shared as sources in the CK format) on remote Windows devices using the lightweight, standalone, open-source CK crowd-node server (similar to ADB). Simply download the latest CK crowd-node version here, install and run it on your target Windows device, and then register the device in CK-WA using ''ck add machine''.
''Note that at this stage your client machine should also run Windows. However, we may provide cross-compilation in the future.''
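For example (''my-windows-machine'' is just an illustrative name; follow the same registration wizard as for other targets):
$ ck add machine:my-windows-machine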
We provide Docker automation in CK.
You can run the latest CK-WA Docker image via
$ ck run docker:ck-wa
or
$ docker pull ctuning/ck-wa
$ docker run ctuning/ck-wa
You can also build and run your local or customized CK-WA instance (on Linux and Windows) using the following commands:
$ ck build docker:ck-wa
$ ck run docker:ck-wa
You can find Docker image description in CK format in the following CK entry:
$ ck find docker:ck-wa
This Docker image includes the Android SDK and NDK, and supports mobile devices connected via ADB.
If you would like to deploy the above image on Windows, please follow the instructions from the ''ck-docker'' repository:
$ ck show repo:ck-docker
To easily share and reuse workloads, devices, instruments and other WA artifacts, we automatically import them into the CK format (in the future, we hope to integrate CK with WA to avoid unnecessary imports).
You can find already imported workloads in the following repository (automatically pulled with ck-wa):
$ ck find repo:ck-wa-workloads
$ ck show repo:ck-wa-workloads
You can find already imported WA devices and instruments in the following repository (also automatically pulled with ck-wa):
$ ck find repo:ck-wa-extra
$ ck show repo:ck-wa-extra
You can import new workloads, devices and instruments as follows:
$ export CK_PYTHON=python2 ; ck import wa --target_repo_uoa=ck-wa-workloads --extra_target_repo_uoa=ck-wa-extra
If you want to import WA artifacts into other CK repositories (for example, private ones), just change the ''--target_repo_uoa'' and ''--extra_target_repo_uoa'' flags. If they are omitted, existing entries will be updated or new ones will be recorded in the ''local'' repository.
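For example, to import everything into a single private repository (''my-private-repo'' is an illustrative name for a CK repository you have already created):
$ export CK_PYTHON=python2 ; ck import wa --target_repo_uoa=my-private-repo --extra_target_repo_uoa=my-private-repo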
Various workloads may require specific APK versions to be installed on Android devices. If these APKs are not installed, the user has to find and install them manually.
We have started automating this process. It is now possible to list all APKs and their versions via
$ ck detect apk
or
$ ck detect apk:com.android.calendar
If you have found and downloaded a specific APK, you can register it in CK via
$ ck add apk:{name} --path
You can then install or uninstall a given APK via CK:
$ ck install apk:{name}
$ ck uninstall apk:{name}
Whenever you run a workload which requires an APK, CK will search for it in the CK repo and will try to install it if found. Private CK repositories with collections of APKs can easily be shared within company workgroups to automate workload benchmarking.
- We now focus on crowdsourcing benchmarking and optimization of deep learning across diverse hardware, models and inputs via CK. See ck-caffe, ck-tensorflow and our engaging Android app.
- We continue unifying the high-level API to crowd-benchmark and crowd-tune shared workloads (see 1, 2 and 3 to learn more about our vision).
If you find our collaborative approach to benchmarking and optimization useful for your research, feel free to reference the following publication:
@inproceedings{cm:29db2248aba45e5:9671da4c2f971915,
  title     = {Collective Knowledge: towards R\&D sustainability},
  author    = {Fursin, Grigori and Lokhmotov, Anton and Plowman, Ed},
  booktitle = {Proceedings of DATE 2016 (Design, Automation and Test in Europe)},
  year      = {2016},
  month     = {March},
  keys      = {http://www.date-conference.com},
  url       = {http://bit.ly/ck-date16}
}
This concept has been described in the following publications:
- http://tinyurl.com/zyupd5v (DATE'16)
- http://arxiv.org/abs/1506.06256 (CPC'15)
- http://hal.inria.fr/hal-01054763 (Journal of Scientific Programming'14)
You can download all the above references in BibTeX format here.
If you have questions or comments, feel free to get in touch with us via our public mailing list.
CK development is coordinated by dividiti, the cTuning foundation and ARM. We would like to thank all volunteers for their valuable feedback and contributions.