feat(pt): allow PT OP CXXABI different from TF #3891
Conversation
Walkthrough

This update enhances the build process for managing PyTorch and TensorFlow compatibility by introducing checks and appropriate settings for the CXX11 ABI flag, refactoring GPU library creation, and adding conditional linking. These modifications support customized C++ operations without requiring explicit environment variables, ensuring seamless integration and compatibility across different build configurations.
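As a rough illustration of the ABI check described above, the snippet below sketches how the PyTorch CXX11 ABI flag could be extracted from `TORCH_CXX_FLAGS` and compared against the ABI used for the TensorFlow build. The variable names `OP_CXX_ABI` and `OP_CXX_ABI_PT` follow this PR's discussion, but the regex and message are assumptions rather than the project's actual CMake code.

```cmake
# Sketch only: compare the CXX11 ABI that libtorch was built with against the
# ABI already chosen for the TensorFlow OP build (assumed to live in OP_CXX_ABI).
find_package(Torch REQUIRED)

if(TORCH_CXX_FLAGS MATCHES "_GLIBCXX_USE_CXX11_ABI=([01])")
  set(OP_CXX_ABI_PT ${CMAKE_MATCH_1})
endif()

if(DEFINED OP_CXX_ABI_PT AND NOT OP_CXX_ABI_PT STREQUAL OP_CXX_ABI)
  message(
    STATUS
      "PyTorch CXX11 ABI (${OP_CXX_ABI_PT}) differs from TensorFlow CXX11 ABI (${OP_CXX_ABI}); building separate OP libraries for PyTorch.")
endif()
```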
It seems to me that we still cannot enable PyTorch by default, as the default PyTorch wheels on PyPI require NVCC...

xref: pytorch/pytorch#78530
Actionable comments posted: 3
Outside diff range and nitpick comments (5)
source/lib/CMakeLists.txt (1)

15-58: Approve the introduction of the `create_library` function for modularity.

The `create_library` function significantly improves the modularity and reusability of the library creation process. Ensure that the function is used consistently across the project to maintain uniformity. Consider using this pattern in other parts of the project where similar functionality is required, to maintain consistency and improve maintainability.
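For readers unfamiliar with the pattern, the snippet below sketches what a `create_library`-style helper might look like. The parameter list (an ABI-suffix plus the ABI value) is a guess based on this PR's discussion, not the function's actual signature, and `LIB_SRC` is assumed to already hold the source file list.

```cmake
# Hypothetical sketch of a create_library-style helper; the real signature in
# source/lib/CMakeLists.txt may differ.
function(create_library LIB_SUFFIX CXX_ABI)
  set(target deepmd${LIB_SUFFIX})
  add_library(${target} SHARED ${LIB_SRC})
  target_include_directories(${target}
                             PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
  # On Linux/libstdc++ this keeps the library ABI-compatible with its consumer.
  target_compile_definitions(${target} PUBLIC _GLIBCXX_USE_CXX11_ABI=${CXX_ABI})
  install(TARGETS ${target} DESTINATION lib/)
endfunction()

# One call per ABI variant that is actually needed:
create_library("" ${OP_CXX_ABI})
if(NOT OP_CXX_ABI_PT STREQUAL OP_CXX_ABI)
  create_library("_pt" ${OP_CXX_ABI_PT})
endif()
```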
source/lib/src/gpu/CMakeLists.txt (1)

82-118: Approve the introduction of the `create_gpu_lib` function for GPU libraries.

The `create_gpu_lib` function enhances the modularity and reusability of GPU library creation. This is a positive change that aligns with best practices in modern CMake usage. Encourage the use of this pattern in other parts of the project dealing with similar library creation requirements, to maintain consistency and enhance modularity.
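Along the same lines, a GPU variant of the helper might branch on the active device toolkit. The following is only a sketch: the function name and arguments, the `USE_CUDA_TOOLKIT`/`USE_ROCM_TOOLKIT` options, and the source layout are assumptions, not the exact contents of source/lib/src/gpu/CMakeLists.txt.

```cmake
# Illustrative sketch of a create_gpu_lib-style helper.
function(create_gpu_lib LIB_SUFFIX CXX_ABI)
  set(target deepmd_op_device${LIB_SUFFIX})
  if(USE_CUDA_TOOLKIT)
    # Requires enable_language(CUDA) and find_package(CUDAToolkit) upstream.
    file(GLOB GPU_SRC ${CMAKE_CURRENT_SOURCE_DIR}/cuda/*.cu)
    add_library(${target} SHARED ${GPU_SRC})
    target_link_libraries(${target} PRIVATE CUDA::cudart)
  elseif(USE_ROCM_TOOLKIT)
    # Requires find_package(hip) upstream.
    file(GLOB GPU_SRC ${CMAKE_CURRENT_SOURCE_DIR}/rocm/*.hip.cc)
    add_library(${target} SHARED ${GPU_SRC})
    target_link_libraries(${target} PRIVATE hip::host)
  else()
    return() # no GPU toolkit requested
  endif()
  target_compile_definitions(${target} PRIVATE _GLIBCXX_USE_CXX11_ABI=${CXX_ABI})
  install(TARGETS ${target} DESTINATION lib/)
endfunction()
```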
doc/install/install-from-source.md (3)

Line range hint 89-89: Improve clarity in virtual environment exit instructions.

The phrase "if one wants to skip out of the virtual environment" could be simplified to make the instructions more direct.

- if one wants to skip out of the virtual environment, he/she can do
+ To exit the virtual environment, simply run:

Line range hint 145-145: Clarify installation path details for TensorFlow.

The sentence regarding the TensorFlow path could benefit from a comma for better readability.

- By default the installer only finds TensorFlow under user site-package directory...
+ By default, the installer only finds TensorFlow under user site-package directory...

Line range hint 189-189: Correct grammatical error in CPU environment setup.

There's a minor grammatical mistake in the description of setting up the environment for CPU usage.

- If your work in a CPU environment, please prepare
+ If you work in a CPU environment, please prepare
Actionable comments posted: 0
Outside diff range and nitpick comments (3)
doc/install/install-from-source.md (3)

Line range hint 146-146: Clarify installation instructions for TensorFlow.

The sentence "By default the installer only finds TensorFlow under user site-package directory" might be confusing due to the lack of a comma after "By default". Adding a comma improves the readability.

- By default the installer only finds TensorFlow under user site-package directory
+ By default, the installer only finds TensorFlow under user site-package directory

Line range hint 190-190: Correct grammatical error to enhance clarity.

The phrase "If your work in a CPU environment" should be corrected to "If you work in a CPU environment" to improve the grammatical structure.

- If your work in a CPU environment, please prepare runtime as below:
+ If you work in a CPU environment, please prepare runtime as below:

Line range hint 295-295: Clarify compatibility note regarding PyTorch.

The phrase "which may be not compatible with other libraries" is awkward; rewording it to "which may not be compatible with other libraries" enhances clarity.

- which may be not compatible with other libraries
+ which may not be compatible with other libraries
`TORCH_CXX_FLAGS` on macOS and Windows doesn't have `_GLIBCXX_USE_CXX11_ABI`. This PR sets `OP_CXX_ABI_PT` to a default value to fix the error introduced in #3891.

## Summary by CodeRabbit

- **Chores**
  - Updated build configuration to set `OP_CXX_ABI_PT` conditionally for improved compatibility with macOS and Windows environments.

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
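A minimal sketch of the kind of fallback described here, assuming the ABI value is normally parsed out of `TORCH_CXX_FLAGS`; the thread does not show which default the project actually chose, so the value below is only illustrative.

```cmake
# Sketch: give OP_CXX_ABI_PT a default when TORCH_CXX_FLAGS (as on macOS and
# Windows) carries no _GLIBCXX_USE_CXX11_ABI definition. Falling back to the
# TensorFlow-side ABI is an illustrative choice, not necessarily the project's
# actual default.
if(NOT DEFINED OP_CXX_ABI_PT OR OP_CXX_ABI_PT STREQUAL "")
  set(OP_CXX_ABI_PT ${OP_CXX_ABI})
endif()
```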
- Build PT OP libraries with a compatible CXXABI if PT has a different CXX ABI from TF;
- Enable PT OP in the test_cuda workflow;
- Update documentation.

## Summary by CodeRabbit

- **Documentation**
  - Removed outdated instructions related to setting environment variables for enabling customized C++ OPs in PyTorch.
- **Chores**
  - Updated build configuration to handle PyTorch CXX11 ABI compatibility with TensorFlow.
  - Refactored library creation processes for better handling of CUDA and ROCm toolkits.
  - Improved build scripts to dynamically adjust compile definitions and installation paths based on different build configurations.
- **CI/CD**
  - Enhanced the continuous integration workflow to include PyTorch variable assignments and settings for testing with CUDA.

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
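To make the "compatible CXXABI" idea concrete, the sketch below shows one way the PyTorch OP target could be pointed at an ABI-matching copy of the core library when the two frameworks disagree. The target and variable names (`deepmd_op_pt`, `deepmd_lib`, `deepmd_lib_pt`, `PT_OP_SRC`) are placeholders, not the repository's real names.

```cmake
# Sketch only: link the PyTorch custom-OP library against whichever variant of
# the core library was compiled with the same _GLIBCXX_USE_CXX11_ABI value.
add_library(deepmd_op_pt SHARED ${PT_OP_SRC})
target_link_libraries(deepmd_op_pt PRIVATE ${TORCH_LIBRARIES})
target_compile_definitions(deepmd_op_pt
                           PRIVATE _GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI_PT})

if(OP_CXX_ABI_PT STREQUAL OP_CXX_ABI)
  # Same ABI: reuse the library already built for the TensorFlow OPs.
  target_link_libraries(deepmd_op_pt PRIVATE deepmd_lib)
else()
  # Different ABI: link the extra variant compiled with the PyTorch ABI.
  target_link_libraries(deepmd_op_pt PRIVATE deepmd_lib_pt)
endif()
```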