What's Taichi's take on NVIDIA's Warp? #8184
Ruibin-Liu started this conversation in General
Replies: 1 comment
-
Overall, the biggest distinction as of now is that Taichi operates at a slightly higher level: e.g. implicit loop parallelization, high-level spatial data structures, direct interop with Torch, etc. We are trying to implement support for lower-level programming styles to accommodate things such as native intrinsics, but we do think of those as more advanced optimization techniques. At the same time, we strive for easier entry and usage for beginners or people not so used to CUDA's programming model.
-
NVIDIA's Warp says:
I don't find it incorrect, but does Taichi have plans to improve on the two points mentioned? Maybe the first one is related to #5446 (comment)?