triton seems not to be buildable outside of google #2855
Comments
We do have OSS CI for both JAX and TF, so it does build in an OSS checkout. I do not think that we maintain a Windows GPU CI, though.
Then how can I access the [...]?
I know; in case you want to use an old version of jaxlib on Windows, you can access it from https://github.com/cloudhan/jax-windows-builder ;) There is 180 GB of outbound bandwidth per month at the moment. I think you should take the idea of officially adding a Windows CI for JAX seriously. Some day.
I think there's something worth figuring out here. openxla/xla is pinning commit 1627e0 of the [...]. There's a second question, which is: how are we locating [...]? A hack that would work, looking at the code, would be to just set [...].
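For reference on the path-lookup question: in the OSS fallback that shows up in the diff further down, Triton resolves libdevice.10.bc relative to wherever its own shared library lives, using dladdr. Here is a minimal sketch of that mechanism under stated assumptions; the helper name, the symbol handed to dladdr, and the on-disk layout are illustrative, not the exact code in LLVMIRTranslation.cpp.

```cpp
// Minimal sketch (assumptions noted above): resolve libdevice.10.bc relative
// to the shared library that contains this code, the way the OSS fallback
// does, instead of going through the google-internal find_cuda.h helper.
#include <dlfcn.h>

#include <filesystem>
#include <optional>

namespace fs = std::filesystem;

static std::optional<fs::path> FindLibdevice() {
  // Ask the dynamic loader which file provides this function; dli_fname is
  // the path of the enclosing shared object (e.g. the Triton extension .so).
  Dl_info info;
  if (dladdr(reinterpret_cast<void*>(&FindLibdevice), &info) == 0 ||
      info.dli_fname == nullptr) {
    return std::nullopt;
  }
  const fs::path this_library_path(info.dli_fname);

  // Assumed layout: <pkg>/lib/<extension .so>, with libdevice shipped under
  // <pkg>/third_party/cuda/lib/libdevice.10.bc, matching the relative lookup
  // in the diff below.
  const fs::path runtime_path = this_library_path.parent_path().parent_path() /
                                "third_party" / "cuda" / "lib" /
                                "libdevice.10.bc";
  if (fs::exists(runtime_path)) {
    return runtime_path;
  }
  return std::nullopt;
}
```

The appeal of a lookup like this is that it only depends on files already shipped next to the library, so it works in an OSS wheel without any google-internal helper.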
It is a fork, so why not just add a new API called [...]?
Well, it's a fork, but it's trying to be a minimal fork. The primary goal of the fork is to synchronize Triton against LLVM head. @chsigg, any input?
You are right, Peter, I didn't intend to push the [...]. That's pretty much what we do with this change internally, but apparently it was not needed in OSS. The simplest approach would likely be to just revert this google-only change to get back to what we had in [...].
@chsigg would it be acceptable to change it to something like:

```diff
index b2913e5..40d1aee 100644
--- a/lib/Target/LLVMIR/LLVMIRTranslation.cpp
+++ b/lib/Target/LLVMIR/LLVMIRTranslation.cpp
@@ -25,7 +25,9 @@
#include "llvm/IRReader/IRReader.h"
#include "llvm/Linker/Linker.h"
#include "llvm/Support/SourceMgr.h"
+#if USE_FIND_CUDA
#include "third_party/py/triton/google/find_cuda.h"
+#endif
#include <dlfcn.h>
#include <filesystem>
#include <iterator>
@@ -164,8 +166,14 @@ static std::map<std::string, std::string> getExternLibs(mlir::ModuleOp module) {
}
return std::filesystem::path(fileinfo.dli_fname);
}();
+#if USE_FIND_CUDA
static const auto runtime_path = (
fs::path(PathToLibdevice()) / "libdevice.10.bc");
+#else
+ static const auto runtime_path =
+ this_library_path.parent_path().parent_path() / "third_party" / "cuda" /
+ "lib" / "libdevice.10.bc";
+#endif
if (fs::exists(runtime_path)) {
externLibs.try_emplace(libdevice, runtime_path.string());
} else {
```

and let target [...] define USE_FIND_CUDA?
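A minimal sketch of what letting a build target define the macro could look like in a Bazel BUILD file follows; the target and config_setting names are assumptions for illustration, not the actual openxla/triton build setup.

```starlark
# Hypothetical sketch: names are assumed, not taken from the real BUILD file.
# A google-internal configuration turns the macro on, while the default (OSS)
# build leaves it unset and compiles the relative libdevice fallback instead.
config_setting(
    name = "use_find_cuda",
    define_values = {"use_find_cuda": "true"},
)

cc_library(
    name = "TritonLLVMIR",  # assumed target name
    srcs = ["lib/Target/LLVMIR/LLVMIRTranslation.cpp"],
    defines = select({
        ":use_find_cuda": ["USE_FIND_CUDA=1"],
        "//conditions:default": [],
    }),
)
```

Building with --define=use_find_cuda=true would then compile the find_cuda.h branch, while a plain OSS build keeps the relative-path fallback.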
This problem is resolved.
The Issues section is not open in the openxla/triton repo, so I am reporting it here. This is related to openxla/triton and cannot be upstreamed.
I need to apply the following patch before producing a jaxlib whl on Windows.
And it seems that find_cuda.h is only available within google, according to https://github.com/openxla/triton/blob/5b63e5b265a2ff9784b084d901b9feff5a4fc0fc/BUILD#L486-L488