[microTVM][Zephyr] Add project files for mlperftiny submission #13690
Conversation
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.
Generated by tvm-bot
This looks good! I've left a few nits, but this could be merged as-is. Thanks @mehrdadh!
#if TARGET_MODEL == 1  // KWS
  g_input_data = static_cast<void*>(ee_get_buffer_pointer());
#elif TARGET_MODEL == 2  // VWW
  int8_t* temp_int = reinterpret_cast<int8_t*>(ee_get_buffer_pointer());
Maybe clarify that this is converting `uint8` to `int8` values?
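For context on the reviewer's point: the EEMBC harness hands back an unsigned 8-bit buffer, while a quantized int8 model expects signed values. A common way to bridge the two is to subtract 128, which remaps [0, 255] onto [-128, 127]. The sketch below is a hypothetical standalone helper illustrating that convention; `ConvertUint8ToInt8` is not a function from this PR, and the PR's actual conversion may differ.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper (not from the PR): shift an unsigned 8-bit buffer,
// such as the one returned by ee_get_buffer_pointer() in the EEMBC harness,
// into the signed 8-bit range a quantized int8 model expects.
// Subtracting 128 maps [0, 255] onto [-128, 127].
void ConvertUint8ToInt8(const uint8_t* src, int8_t* dst, std::size_t len) {
  for (std::size_t i = 0; i < len; ++i) {
    dst[i] = static_cast<int8_t>(static_cast<int32_t>(src[i]) - 128);
  }
}
```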
apps/microtvm/zephyr/template_project/src/mlperftiny/submitter_implemented.cc
1d96b14 to 13abc4e
}

/**
 * Inference without feature engineering. The inpput tensor is expected to
nit:
- * Inference without feature engineering. The inpput tensor is expected to
+ * Inference without feature engineering. The input tensor is expected to
I guess that needs to be fixed upstream.
yeah, I haven't changed the API files.
extern float g_quant_scale;
extern int8_t g_quant_zero;

void TVMRuntimeInit();
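The `g_quant_scale` / `g_quant_zero` pair suggests the standard affine int8 scheme, where a quantized value maps back to a real value as `real = scale * (q - zero_point)`. The sketch below shows that conventional formula; the concrete values and the `DequantizeInt8` helper are illustrative assumptions, not code from this PR.

```cpp
#include <cstdint>

// Illustrative values only; in the PR these are extern globals populated
// from the model's quantization parameters.
float g_quant_scale = 0.5f;
int8_t g_quant_zero = -2;

// Conventional affine dequantization: real = scale * (q - zero_point).
// This is the common use of a (scale, zero point) pair; the PR's exact
// arithmetic may differ.
float DequantizeInt8(int8_t q) {
  return g_quant_scale * (static_cast<int32_t>(q) - g_quant_zero);
}
```

With these sample parameters, the zero point dequantizes to exactly 0.0, and each quantization step above it adds one scale unit.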
add doxygen comments for these functions?
thanks for catching this, I added documentation and fixed the header macros.
looks good, thanks for doing this.
@tvm-bot rerun
…#13690)

This PR makes these changes:
1. Add source/header files for generating a Zephyr project that is compatible with the EEMBC runner for MLPerfTiny.
2. Adjust microtvm_api_server.py and CMakeLists.template to support the `mlperftiny` project type.
3. Add EEMBC API files from https://github.com/mlcommons/tiny under `thirdparty/tiny`.

This pull request was co-authored by @alanmacd, @mkatanbaf, @guberti and @areusch as part of our effort to submit to MLPerfTiny. You can find our submission results here: https://mlcommons.org/en/inference-tiny-10/