Add performance benchmarks for Qiskit Machine Learning #13
Comments
@desireevl Can you please comment in the issue so that I can assign you?
Thanks!
Checkpoint 2

This project has been about improving the existing benchmarks and adding more machine learning benchmarks. There are two main ML categories I have focused on so far: classification and regression.

When I started this project, a few benchmarks tracking the time taken to run various classification models already existed. These timed fitting, scoring and predicting on a synthetic dataset for the VQC and NN Classifier frameworks. I have since added the iris dataset to represent real-world data, and these benchmarks now run on it as well. To track how well the classification models perform over time, I have added a new benchmark that records the score of the model on a test set. These metrics are based on the confusion matrix of the results, and only the one metric per class that best represents the results is kept, in order to minimize overhead.

On the regression side, I have created a new synthetic dataset suited to regression problems and added the Combined Cycle Power Plant (CCPP) dataset as real-world data to benchmark on. As there were no regression benchmarks when the project started, I replicated the existing classification benchmarks, namely the timed ones, and modified them to work with the regression models. I have also added a benchmark that tracks the R2 score, mean absolute error and mean squared error of the regression models.

In addition to these specific changes to the classification and regression benchmarks, I have made general changes to the format of the benchmarking scripts and created standards that allow new benchmarks to be added easily. I now plan to focus on adding benchmarks for quantum kernels, which currently have no benchmarking either.

The additions to the regression and classification benchmarks currently exist in two PRs which are still being reviewed: qiskit-community/qiskit-app-benchmarks#27, qiskit-community/qiskit-app-benchmarks#28
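As a rough illustration of the shape of these benchmarks, here is a minimal sketch of an asv-style class that times fitting, scoring and predicting of a VQC classifier on the iris dataset and tracks a classification score. The class and method names are my own for illustration, not the code in the PRs above, and exact VQC constructor arguments vary between qiskit-machine-learning versions.

```python
# Illustrative asv-style benchmark (a sketch, not the code from the PRs above):
# times fit/score/predict of VQC on the iris dataset and tracks a score metric.
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_machine_learning.algorithms import VQC
from sklearn.datasets import load_iris
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


class VqcIrisBenchmark:
    def setup(self):
        features, labels = load_iris(return_X_y=True)
        self.train_x, self.test_x, self.train_y, self.test_y = train_test_split(
            features, labels, test_size=0.25, random_state=42
        )
        num_features = features.shape[1]
        # Default optimizer/primitives are used here; real benchmarks would
        # likely configure these explicitly.
        self.model = VQC(
            feature_map=ZZFeatureMap(num_features),
            ansatz=RealAmplitudes(num_features, reps=1),
        )
        # Fit once in setup so the score/predict benchmarks measure only
        # inference, not training.
        self.model.fit(self.train_x, self.train_y)

    # asv records the wall-clock time of time_* methods.
    def time_fit(self):
        self.model.fit(self.train_x, self.train_y)

    def time_score(self):
        self.model.score(self.test_x, self.test_y)

    def time_predict(self):
        self.model.predict(self.test_x)

    # asv records the return value of track_* methods, so model quality can
    # be followed over time alongside the timings.
    def track_weighted_f1(self):
        predictions = self.model.predict(self.test_x)
        return f1_score(self.test_y, predictions, average="weighted")
```

A regression benchmark could follow the same pattern, with track_* methods returning values from sklearn.metrics such as r2_score, mean_absolute_error and mean_squared_error for the fitted regressor.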
Final presentation
Description
Qiskit Applications (Optimization, Finance, Nature, ML) have recently started working on performance benchmarks. By performance we understand various metrics such as execution time, required memory and other specific metrics, e.g. score, that may arise in the benchmarks. Currently we don't have any such benchmarks in Qiskit Machine Learning. This project aims at building a set of such benchmarks and drawing conclusions about what can be improved.
The project will roughly consist of:
This is how the application benchmarks look now: https://qiskit.github.io/qiskit-app-benchmarks/ and this is how they should look at the end: https://qiskit.github.io/qiskit/.
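The pages linked above appear to be generated with airspeed velocity (asv). Assuming asv conventions (an assumption based on those pages, not something stated in this issue), the metric kinds mentioned in the description map onto benchmark method prefixes roughly as follows; the workload below is a placeholder, not a real Qiskit ML benchmark.

```python
# Sketch of asv method prefixes for the three metric kinds mentioned above.
class ExampleMetrics:
    def setup(self):
        self.data = list(range(100_000))

    def time_sum(self):
        # asv reports the wall-clock time of time_* methods (execution time).
        sum(self.data)

    def peakmem_copy(self):
        # asv reports the peak memory used while peakmem_* methods run.
        _ = [x * 2 for x in self.data]

    def track_score(self):
        # asv records the value returned by track_* methods, e.g. a model score.
        return 0.5
```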
Mentor/s
Anton Dekusar @adekusar-drl
Research Software Engineer / Qiskit Machine Learning contributor
Type of participant
Requirements:
Number of participants
1-2
Deliverable
Performance benchmarks and optionally a set of conclusions on improvements.