S3 filesystem pure virtual method called; terminate called without an active exception #1912
Comments
Can we please verify why the latest release is exhibiting this issue? Thank you. |
I had the same issue and it was driving me insane. I have some unrelated custom C++ ops and wasted a day digging into those. I am using S3, and going back to 0.34.0 fixed it. |
I'm facing the same issue, and I've also hit the same error. |
As an update, I followed the build instructions for tensorflow-io (Ubuntu 22.04, then Python Wheels), and discovered that a locally built wheel does not exhibit this particular failure. Note: the link in the docker build instructions is broken - https://github.com/tensorflow/io/blob/master/docs/development.md#docker - and the latest image in tfsigio/tfio is about 2 years old. |
@saimi Is there any chance you could please post the steps you took to build? I tried to build but was thwarted by the issues you mentioned. |
@rivershah I pulled the
and installed all the packages and bazel as instructed in https://github.com/tensorflow/io/blob/master/docs/development.md#ubuntu-2204 (without the
I then followed the instructions at https://github.com/tensorflow/io/blob/master/docs/development.md#python-wheels:
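The exact commands were stripped from this comment when the thread was archived. For reference only, a hypothetical sketch of a containerized build environment along the lines of docs/development.md might look like the following; the package list, Bazel setup, and build targets here are assumptions, and the linked docs are authoritative:

```dockerfile
# Hypothetical sketch only -- consult docs/development.md for the real steps.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y git curl gcc g++ python3 python3-pip
# Install the Bazel version pinned by the repo's .bazelversion file, then
# (roughly) the steps described in the Python Wheels section:
#   git clone https://github.com/tensorflow/io.git && cd io
#   python3 -m pip install -U pip setuptools wheel tensorflow
#   ./configure.sh
#   bazel build //tensorflow_io/... //tensorflow_io_gcs_filesystem/...
#   python3 setup.py bdist_wheel --data bazel-bin
```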
Then, within the same container, I was able to validate tf-io's S3 filesystem functionality by trying to checkpoint a model to S3. I'll need to do some additional work to reproduce the failure I got when copying the generated tf-io wheel out into a different container, since I've terminated all of that setup now. |
Bumping this issue. It needs looking at to ensure the build process is being handled correctly. |
This problem persists in |
@yongtang would you be able to help here? Sounds like this is a pretty serious issue, so it would be much appreciated!! |
This is blocking us from upgrading the tensorstore version. A quick fix will be much appreciated! |
+1, also running into this issue |
Bump Ubuntu version for Linux Wheel to address issue tensorflow#1912
@yongtang per the comment #1912 (comment) above, assuming my PR #2005 passes can you please consider a minor release (0.37.1 maybe?) to address the S3 issues discussed above. Thanks! |
@yongtang Thanks for the fix. In the interest of us being able to upgrade tensorflow, can you please do a 0.37.1 release as per @yarri-oss request. Thanks again |
I am still seeing
on |
@yarri-oss how is #2005 supposed to fix this issue? |
The problem still persists. Reproducible with the repro above. |
cc @yongtang -- can we reopen the issue? |
End users have confirmed this issue is fixed. @spolloni If you can post a repro (with an S3 bucket blob) we can investigate further. I would prefer a new issue be opened against your specific repro, though. |
Which users?
the repro has not changed, it is the one posted here: #1912 (comment) |
Which users? I posted the issue and it repros as per above |
Bump Ubuntu version for Linux Wheel to address issue tensorflow/io#1912
We are still having this issue with tensorflow-io 0.37.1, please help to reopen this issue. @yarri-oss

```python
import tensorflow as tf
import tensorflow_io as tfio

tf.io.gfile.glob("s3://mybucket/dir")
```

```
my-server ~ > python test_tf.py
2024-09-10 17:55:34.571626: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-10 17:55:34.756820: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-10 17:55:35.673576: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-10 17:55:35.673613: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-10 17:55:35.679356: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-10 17:55:36.191044: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-10 17:55:36.193843: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-10 17:55:40.594805: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
pure virtual method called
terminate called without an active exception
zsh: IOT instruction  python test_tf.py
my-server ~ > echo $?
134
```
|
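A side note on reading that output: zsh's "IOT instruction" message and the exit status 134 both point at abort(). Shells report a process killed by a signal as 128 plus the signal number, so 134 corresponds to signal 6, SIGABRT (historically IOT), which the C++ runtime raises on "pure virtual method called". A short Python check confirms the mapping:

```python
import signal

# Shells encode death-by-signal as 128 + signal number.
# 134 - 128 = 6, which is SIGABRT (raised by abort()).
signum = 134 - 128
print(signal.Signals(signum).name)  # SIGABRT
```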
I am getting a core dump during interpreter teardown when using the S3 filesystem. Can I please be given guidance on how to handle this issue? Please see the script to reproduce inside docker:
```dockerfile
FROM tensorflow/tensorflow:2.14.0-gpu
```
The following environment variables are set
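The actual variable list was stripped from the issue body. For anyone trying to reproduce, the S3 filesystem is typically configured through the standard AWS environment variables; the values below are placeholders for illustration, not the reporter's configuration:

```python
import os

# Placeholder values -- substitute real credentials before running the repro.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLEKEY"
os.environ["AWS_SECRET_ACCESS_KEY"] = "example-secret"
os.environ["AWS_REGION"] = "us-east-1"

print(os.environ["AWS_REGION"])  # us-east-1
```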