Add depth estimation pipeline #389
Conversation
Uses `size_divisor` to determine resize width and height
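For reference, a minimal sketch of that rounding, assuming GLPN-style behaviour where each dimension is floored to the nearest multiple of `size_divisor` (the helper name below is hypothetical):

```js
// Hypothetical helper illustrating how `size_divisor` can constrain the
// resize dimensions: each side is floored to the nearest multiple.
function constrainToMultiple(size, sizeDivisor = 32) {
  return Math.floor(size / sizeDivisor) * sizeDivisor;
}

// e.g. a 642x481 input would be resized to 640x480
console.log(constrainToMultiple(642), constrainToMultiple(481)); // 640 480
```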
Xenova you need to STOP. enough is Enough. "i'm xenova and along with the ORT Web and Optimum teams i'm going to get literally all the <1gb sota ML models quantized and running in the browser, and make it really easy for web devs to use"-- NO. This is not Natural. Web devs are meant to struggle with op support on a tflite model for 2 weeks before realising that they really can't get around the TF Select op requirements, and then steel themselves for creating a custom tflite build, but then find out that the build tooling required for adding TF Select ops is Google-internal, and then try to port the model to tfjs instead, but end up with weird errors, which they eventually fix by breaking the model apart into multiple models and implementing some operations in JS to glue the model back together, with one component actually being executed via the tflite runtime because it has support for a particular op that tfjs didn't have, but then the results for one part of the tfjs model are different from the Python model, so they post an issue with a Colab that isolates and minimally replicates it, but that isn't replied to until 6 months later and that reply is actually just the stale bot.
I should probably post this as a separate issue, but is there a way to get a trimmed-down version of transformers.js that only includes the depth estimation pipeline? I.e., a "tree-shaking"-type feature to create the leanest possible build for cases where you only need certain parts of the library. (This isn't critical for me right now, to be clear; I'm just curious, and it would be good if it were possible in the long term, since I imagine the library could get quite big as more models/features are added.)
@josephrocca ❤️ and we're just getting started 😏
This has actually been requested quite a lot by others, e.g., for use cases where only tokenizers are needed (so onnxruntime-web doesn't need to be bundled). Please feel free to open a feature request; perhaps others will be able to contribute to the discussion. For the most part, I think this responsibility should be transferred to build tools like webpack or rollup, since they have much more advanced support for this. In the worst case, you can just fork the repo and remove everything you don't need :)
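As a rough sketch of the bundler approach (the model name is just an example, and whether onnxruntime-web actually gets pruned depends on how the package exposes its modules):

```js
// Import only what you need; a bundler with tree-shaking (webpack, rollup,
// esbuild) can then attempt to drop the unused parts of the library.
import { AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
const { input_ids } = await tokenizer('Hello world!');
console.log(input_ids);
```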
This PR adds support for the `depth-estimation` pipeline with DPT and GLPN. Closes #350.
Example usage:
Input:
Output:
Code used: https://huggingface.co/Intel/dpt-hybrid-midas#how-to-use
JavaScript (quantized):
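A minimal sketch, assuming the converted checkpoint is published as `Xenova/dpt-hybrid-midas` and using an arbitrary example image URL:

```js
import { pipeline } from '@xenova/transformers';

// Quantized weights are the default, so no extra options are needed.
// NOTE: the model id here is an assumption for illustration.
const depth_estimator = await pipeline('depth-estimation', 'Xenova/dpt-hybrid-midas');

const url = 'http://images.cocodataset.org/val2017/000000039769.jpg';
const output = await depth_estimator(url);

// `output.depth` is the depth map as an image; `output.predicted_depth` is the raw tensor.
console.log(output);
```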
JavaScript (unquantized):
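The same sketch with full-precision weights, assuming the `quantized: false` pipeline option:

```js
import { pipeline } from '@xenova/transformers';

// Load the unquantized (fp32) ONNX weights instead of the quantized ones.
// NOTE: the model id here is an assumption for illustration.
const depth_estimator = await pipeline('depth-estimation', 'Xenova/dpt-hybrid-midas', {
  quantized: false,
});

const url = 'http://images.cocodataset.org/val2017/000000039769.jpg';
const output = await depth_estimator(url);
console.log(output);
```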