-
We've used this excellent library for training with the InceptionTimePlus model, and we're now looking to deploy saved models for inference in a production environment. For inference, all we need is to load the model & make predictions; basically two library commands. The trouble I'm currently facing is that loading the library takes a long time, even in what I thought was a minimal form: the import alone takes a full 45 seconds on my local PC and ~35 seconds in Google Colab. I realize this import will only need to occur once per service startup, but I was hoping to make the service as lightweight and fast as possible. Are there any tips on improving the speed of what we're trying to do? Thanks in advance.
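For reference, here is a minimal sketch of the two-command inference path described above. It assumes a learner previously saved with `learn.export()` during training; the `tsai.inference` module path and the file name `models/clf.pkl` are illustrative, taken from tsai's documented inference workflow rather than from this thread:

```python
import numpy as np

# Importing only the inference entry point is assumed to be lighter than
# `from tsai.all import *`, which pulls in the full training stack.
from tsai.inference import load_learner

# Hypothetical path: a learner exported during training with
# learn.export("models/clf.pkl")
learn = load_learner("models/clf.pkl")

# Dummy batch with shape (n_samples, n_vars, seq_len); in practice this
# must match the shape of the training data.
X_new = np.random.rand(16, 3, 100).astype(np.float32)

# get_X_preds returns (probabilities, targets, decoded predictions)
probas, _, preds = learn.get_X_preds(X_new)
```

To see where startup time actually goes, CPython's built-in import profiler can print a per-module breakdown, e.g. `python -X importtime -c "import tsai"`.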
-
Hi @bob-mcrae,
-
Great! I'm glad it works as expected! 😀 I'll close this issue now. Please reopen it if necessary. And thanks a lot for bringing this to my attention.