
Pretrained Model's Output is Unexpected/Garbage #47

Closed
alexlao opened this issue Apr 15, 2019 · 3 comments
Comments


alexlao commented Apr 15, 2019

Hi, I'm running the pretrained model from Google Drive on the images provided in datasets/examples/images. The output parsing map is completely incorrect. Has anyone else gotten outputs like this?

@Engineering-Course (Owner)

Can you provide more information, such as your environment?


alexlao commented Apr 25, 2019

@Engineering-Course Hi, thanks for getting back to me! I am using TensorFlow 1.12.0 with Python 2.7.12 on Ubuntu 16.04.
Here's the output from running evaluate_parsing_JPPNet-s2.py; the output image looks like this.
The warnings seem benign: just deprecation and out-of-memory warnings.
```
python evaluate_parsing_JPPNet-s2.py
WARNING:tensorflow:From /home/alexlao/Desktop/LIP_JPPNet/utils/image_reader.py:146: slice_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /home/alexlao/Desktop/LIP_JPPNet/venv/local/lib/python2.7/site-packages/tensorflow/python/training/input.py:372: range_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /home/alexlao/Desktop/LIP_JPPNet/venv/local/lib/python2.7/site-packages/tensorflow/python/training/input.py:318: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /home/alexlao/Desktop/LIP_JPPNet/venv/local/lib/python2.7/site-packages/tensorflow/python/training/input.py:188: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
WARNING:tensorflow:From /home/alexlao/Desktop/LIP_JPPNet/venv/local/lib/python2.7/site-packages/tensorflow/python/training/input.py:197: __init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating: To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From /home/alexlao/Desktop/LIP_JPPNet/venv/local/lib/python2.7/site-packages/tensorflow/python/training/input.py:197: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating: To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From evaluate_parsing_JPPNet-s2.py:115: calling expand_dims (from tensorflow.python.ops.array_ops) with dim is deprecated and will be removed in a future version.
Instructions for updating: Use the axis argument instead
WARNING:tensorflow:From evaluate_parsing_JPPNet-s2.py:116: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating: Use the axis argument instead
2019-04-24 22:27:31.496060: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-04-24 22:27:31.571931: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-04-24 22:27:31.572257: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.86 pciBusID: 0000:01:00.0 totalMemory: 7.92GiB freeMemory: 6.59GiB
2019-04-24 22:27:31.572269: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-04-24 22:27:31.727927: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-24 22:27:31.727955: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-04-24 22:27:31.727960: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-04-24 22:27:31.728100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6353 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
[!] Load failed...
WARNING:tensorflow:From evaluate_parsing_JPPNet-s2.py:139: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating: To construct input pipelines, use the tf.data module.
2019-04-24 22:27:36.451954: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.58GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-04-24 22:27:36.470456: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.94GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-04-24 22:27:36.552996: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.58GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-04-24 22:27:36.566645: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.94GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-04-24 22:27:36.635374: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.58GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-04-24 22:27:36.660576: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.94GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
step 0 ./datasets/examples/images/114317_456748.jpg
```
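Buried in the middle of that log is the real clue: `[!] Load failed...`, after which the script keeps running anyway. As a defensive pattern (a sketch of my own, not the repo's actual loader code), one can check for a checkpoint before restoring and abort loudly instead of silently running with random weights:

```python
import os
import sys


def require_checkpoint(ckpt_dir):
    """Abort if no TF1-style checkpoint is present in ckpt_dir.

    tf.train.Saver writes a 'checkpoint' state file alongside the
    .index/.data shards, so its absence is a cheap proxy for
    'there are no weights to restore'.
    """
    state_file = os.path.join(ckpt_dir, "checkpoint")
    if not os.path.isfile(state_file):
        sys.exit("[!] No checkpoint found in %r; refusing to run with "
                 "randomly initialized weights." % ckpt_dir)
    return state_file
```

Calling a guard like this before `saver.restore(...)` turns the silent garbage-output failure into an immediate, explainable error.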


alexlao commented Apr 26, 2019

Resolved: the pretrained weights are supposed to be placed in checkpoint/JPPNet-s2. That explains the `[!] Load failed...` line in the log above; the checkpoint was never restored, so the model ran with uninitialized weights.
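For anyone else hitting this, a minimal sketch of the fix, run from the repository root (only the directory path `checkpoint/JPPNet-s2` comes from this thread; the file names inside the Google Drive download are illustrative and may differ):

```shell
# The evaluation scripts look for the pretrained weights here,
# relative to the repository root:
mkdir -p checkpoint/JPPNet-s2

# Move the downloaded checkpoint files into that directory
# (file names are illustrative; adjust to match your download):
#   mv ~/Downloads/JPPNet-s2/* checkpoint/JPPNet-s2/

# Sanity check: an empty directory here reproduces "[!] Load failed..."
ls checkpoint/JPPNet-s2
```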

@alexlao alexlao closed this as completed Apr 26, 2019