
[Bug]: Cannot convert torch to tensorflow #28812

Closed

kotonorose opened this issue Sep 2, 2024 · 3 comments
Labels: Bug Report (Report bugs detected in Ivy.)

Comments

@kotonorose

Bug Explanation

I ran the example code from the following documentation page:
https://ivy.dev/docs/overview/design/ivy_as_a_transpiler.html#source-to-source-transpiler

and got the following output:

2024-09-02 06:35:24.145916: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-02 06:35:24.171140: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-02 06:35:24.289400: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:479] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-02 06:35:24.367185: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:10575] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-02 06:35:24.367996: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1442] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-02 06:35:24.426892: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-02 06:35:25.140093: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Cache database already exists at /workarea/IVY/ivy2/workarea/.ivy/cache.sqlite.
Cache file /workarea/IVY/ivy2/workarea/ivy/compiler/_cache/torch_to_torch_frontend_translation_cache.pkl not found. Starting with an empty cache.
Time taken for pre_load_cache: 0.000049 seconds
<class 'Translated_Outputs.Translated_Network_output.run_1.Translated_Network.Translated_Network'> stored at path: /workarea/IVY/ivy2/workarea/Translated_Outputs/Translated_Network_output/run_1
Cache file /workarea/IVY/ivy2/workarea/ivy/compiler/_cache/torch_frontend_to_ivy_translation_cache.pkl not found. Starting with an empty cache.
Time taken for pre_load_cache: 0.000026 seconds
<class 'Translated_Outputs.ivy_Network_output.run_1.ivy_Network.ivy_Network'> stored at path: /workarea/IVY/ivy2/workarea/Translated_Outputs/ivy_Network_output/run_1
2024-09-02 06:35:42.188335: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-09-02 06:35:42.189276: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
/opt/conda/lib/python3.10/site-packages/ivy/utils/exceptions.py:383: UserWarning: The current backend: 'tensorflow' does not support inplace updates natively. Ivy would quietly create new arrays when using inplace updates with this backend, leading to memory overhead (same applies for views). If you want to control your memory management, consider doing ivy.set_inplace_mode('strict') which should raise an error whenever an inplace update is attempted with this backend.
warnings.warn(
Cache file /workarea/IVY/ivy2/workarea/ivy/compiler/_cache/ivy_to_tensorflow_translation_cache.pkl not found. Starting with an empty cache.
Time taken for pre_load_cache: 0.000034 seconds
Traceback (most recent call last):
File "/workarea/IVY/ivy2/workarea/ivy_torch2tflite.py", line 102, in
TFNetwork = ivy.transpile(Network, source="torch", target="tensorflow")
File "/opt/conda/lib/python3.10/site-packages/ivy/compiler/compiler.py", line 252, in transpile
return _transpile(
File "VLL.pyx", line 136, in VLL.transpile
File "VLX.pyx", line 25, in VLX.TranslatorsContainer.run_translators
File "VVI.pyx", line 106, in VVI.Translator.translate
File "IXI.pyx", line 130, in IXI.format_all_files_in_directory
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/workarea/IVY/ivy2/workarea/Translated_Outputs/tensorflow_Network_output/run_1/tensorflow_Network.py", line 6, in
from .tensorflow__stateful_layers import KerasDense
File "/workarea/IVY/ivy2/workarea/Translated_Outputs/tensorflow_Network_output/run_1/tensorflow__stateful_layers.py", line 1, in
from .tensorflow__helpers import tensorflow_handle_transpose_in_input_and_output
ImportError: cannot import name 'tensorflow_handle_transpose_in_input_and_output' from 'Translated_Outputs.tensorflow_Network_output.run_1.tensorflow__helpers' (/workarea/IVY/ivy2/workarea/Translated_Outputs/tensorflow_Network_output/run_1/tensorflow__helpers.py)

Steps to Reproduce Bug

import ivy
import tensorflow as tf
import torch

class Network(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self._linear = torch.nn.Linear(3, 3)

    def forward(self, x):
        return self._linear(x)

TFNetwork = ivy.transpile(Network, source="torch", target="tensorflow")

x = tf.convert_to_tensor([1., 2., 3.])
net = TFNetwork()
net(x)

Environment

Ubuntu 20.04, from Docker:
FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

Ivy Version

Version: 0.0.9.7

Backend

  • NumPy
  • TensorFlow
  • PyTorch
  • JAX

Device

CPU

@Sam-Armstrong (Contributor)

Ah thanks for pointing that out, I'll look into getting this fixed 👍

@kotonorose (Author)

Thank you Sam,

I also think the code below, from the README, should be fixed:
https://github.com/ivy-llc/ivy?tab=readme-ov-file#ivytranspile-will-eagerly-transpile-if-a-class-or-function-is-provided

tf_fn = ivy.transpile(test_fn, source="torch", target="tensorflow")

should be

tf_fn = ivy.transpile(torch_fn, source="torch", target="tensorflow")
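
For illustration, here is a minimal, self-contained sketch of the corrected README snippet (the body of torch_fn is an assumed placeholder, not the exact function from the README):

import ivy
import tensorflow as tf
import torch

# Hypothetical stand-in for the torch function defined in the README example.
def torch_fn(x):
    return torch.nn.functional.relu(x) + 1.0

# With the fix, the name passed to ivy.transpile matches the function defined above.
tf_fn = ivy.transpile(torch_fn, source="torch", target="tensorflow")

# The transpiled function can then be called on TensorFlow tensors.
out = tf_fn(tf.convert_to_tensor([-1.0, 0.0, 2.0]))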

@kotonorose (Author)

FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

I now use https://github.com/ivy-llc/ivy/blob/main/docker/Dockerfile instead of my own Dockerfile (with requirements.txt, etc.), and it works.
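
For anyone hitting the same problem, a rough sketch of that workaround (the image tag, mount path, and working directory below are placeholders, not details from the original report):

# Build an image from the official Ivy Dockerfile (the tag name is arbitrary).
git clone https://github.com/ivy-llc/ivy.git
docker build -t ivy-dev -f ivy/docker/Dockerfile ivy

# Run the container with the current project directory mounted, then run the script from there.
docker run -it --rm -v "$PWD":/workspace -w /workspace ivy-dev bash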
