
Noobaa command to list bucket/account giving trimmed output in pipe #8111

Closed
madhuthorat opened this issue Jun 5, 2024 · 23 comments · Fixed by #8117

Comments

@madhuthorat (Collaborator)

Environment info

  • NooBaa Version: 5.15.0/1/2/3
  • Platform: other: RHEL9

Actual behavior

  1. Bucket/account list output is trimmed when piped to another command.

Expected behavior

  1. The full output should be delivered through the pipe.

Steps to reproduce

More information - Screenshots / Logs / Other output

@romayalon (Contributor)

@madhuthorat could you attach the command you are running and the output? Thanks.

@PravinRanjan10 (Collaborator)

Hi @romayalon, we are trying to read the full noobaa-cli output through a pipe, but after receiving a few bytes the read returns 0 and EOF. That means the full output is not coming through the pipe.

```
},
{
"_id": "6656d6d3585e0a1deb367e9b",
"name": "bucket-bmw",
"owner_account": "6656d6bcf4781cd093acb9e5",
"system_owner": "account-bmw",
"bucket_owner": "account-bmw",
"versioning": "DISABLED",
"creation_date": "2024-05-29T07:18:43.986Z",
"path": "/mnt/gpfs0/account-bmw/bucket-bmw",
"should_create_underlying_storage": false,
"fs_backend": "GPFS"
},
{
"_id": "6656d6d3585e0a1deb367e9b",
"name": "bucket-bmw",
"owner_account": "6656d6bcf4781cd093acb9e5",
"system_owner": "accou
bytes recieved: 0: EOF
```

@romayalon (Contributor)

@PravinRanjan10 @madhuthorat Could you add the exact command you are running?
Please follow the issue instructions; omitting details creates redundant ping-pongs.
@naveenpaul1 @guymguym this looks to me like the same issue as #7894.
Don't we need to take the same changes we made for the health script in #7898 and apply them to noobaa-cli?
Maybe this is also a repro of #7900.
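The flush-on-exit hazard being referenced can be sketched in Node.js. This is a minimal illustration under the assumption that the truncation has the same cause as the earlier health-script fix, not NooBaa source: when stdout is a pipe, writes are asynchronous, so calling `process.exit()` right after a large write can drop buffered bytes, while passing a callback to `process.stdout.write()` defers the exit until the data has been flushed.

```javascript
// Illustrative sketch (not NooBaa code): safe vs. unsafe CLI output on exit.
// Build a large JSON payload, similar in shape to `noobaa-cli bucket list` output.
const payload = JSON.stringify(
  { response: { reply: Array.from({ length: 5000 }, (_, i) => ({ name: "bkt" + i })) } },
  null,
  2
);

// Unsafe: when stdout is a pipe, this can truncate the output, because
// process.exit() does not wait for the asynchronous write to drain:
//   console.log(payload);
//   process.exit(0);

// Safe: exit only from the write callback, after the data has been flushed.
function printAndExit(str) {
  process.stdout.write(str + "\n", () => process.exit(0));
}
```

Calling `printAndExit(payload)` from a CLI entry point would deliver the complete JSON to a downstream pipe reader before the process exits.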

@PravinRanjan10 (Collaborator)

@romayalon Here is the command we are running:

```
env LC_ALL=C /usr/local/bin/noobaa-cli bucket 2>/dev/null list --wide
```

@madhuthorat (Collaborator, Author)

> @PravinRanjan10 @madhuthorat Could you add the exact command you are running? Please try following the issue instructions, this creates redundant ping pongs. @naveenpaul1 @guymguym this seems to me the same as #7894 don't we need to take the same changes we did for the health script #7898 and apply them to noobaa-cli? maybe also this is a repro of #7900

@romayalon the behavior looks the same as was seen with the health command before the #7894 fix.

@romayalon (Contributor)

@PravinRanjan10 @madhuthorat How many buckets/accounts do you have in the system?
I want to try to reproduce it.

@PravinRanjan10 (Collaborator)

> @PravinRanjan10 @madhuthorat How many buckets/accounts you have in the system? I want to try reproduce it

Approximately 5k accounts/buckets.

@madhuthorat (Collaborator, Author)

@romayalon as the fix sounds similar to #7894, can we have it in 5.15.4? Currently we are using a workaround which is a little cumbersome.

@romayalon (Contributor)

@madhuthorat yes, see #8120 backport PR to 5.15.4.

@madhuthorat (Collaborator, Author)

> @madhuthorat yes, see #8120 backport PR to 5.15.4.

Thank you @romayalon

@PravinRanjan10 (Collaborator) commented Jun 12, 2024

Re-opening as it's not working as expected. Below is some output:

I have 5k buckets

```
noobaa-cli bucket list 2>/dev/null | jq
parse error: Unfinished JSON term at EOF at line 4594, column 0
```


Some other output:


```
  {
    "name": "bkt4110"
  },
  {
    "name": "bkt1969"
  },
  {
    "name": "bkt3442"
  },
  {

bytes recieved: 0: EOF
```

@guymguym (Member)

@PravinRanjan10 Thanks for the quick verification!
Can you please share with us (off GitHub) the output file of `noobaa-cli bucket list &>noobaa-cli-bucket-list.log` (we need to capture the output of both stdout and stderr)? Thanks.

@PravinRanjan10 (Collaborator)

> @PravinRanjan10 Thanks for the quick verification! Can you please share with us (off github) the output file of noobaa-cli bucket list &>noobaa-cli-bucket-list.log (we need to capture the output of both stdout and stderr...) Thanks

Output of `noobaa-cli bucket list &>noobaa-cli-bucket-list.log`:

noobaa-cli-bucket-list.log

@PravinRanjan10 (Collaborator) commented Jun 13, 2024

Output with `noobaa-cli bucket list | jq`:

```
[root@node]# noobaa-cli bucket list |jq
load_nsfs_nc_config.setting config.NSFS_NC_CONF_DIR /mnt/cesSharedRoot/ces/s3-config
nsfs: config_dir_path=/mnt/cesSharedRoot/ces/s3-config config.json= {
  ENDPOINT_FORKS: 2,
  ENDPOINT_PORT: 6001,
  ENDPOINT_SSL_PORT: 6443,
  UV_THREADPOOL_SIZE: 16,
  GPFS_DL_PATH: '/usr/lpp/mmfs/lib/libgpfs.so',
  NOOBAA_LOG_LEVEL: 'all',
  NSFS_NC_STORAGE_BACKEND: 'GPFS',
  NSFS_NC_CONFIG_DIR_BACKEND: 'GPFS',
  NSFS_DIR_CACHE_MAX_DIR_SIZE: 536870912,
  NSFS_DIR_CACHE_MAX_TOTAL_SIZE: 1073741824
}
2024-06-13 05:53:17.527005 [PID-1794896/TID-1794896] FS::GPFS GPFS_DL_PATH=/usr/lpp/mmfs/lib/libgpfs.so
2024-06-13 05:53:17.527117 [PID-1794896/TID-1794896] FS::GPFS found GPFS lib file GPFS_DL_PATH=/usr/lpp/mmfs/lib/libgpfs.so
2024-06-13 05:53:17.530533 [PID-1794896/TID-1794896] [L1] FS::set_debug_level 5
Jun-13 5:53:18.002 [/1794896]   [LOG] CONSOLE:: detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 0
2024-06-13 05:53:18.048957 [PID-1794896/TID-1794896] [L1] FS::FSWorker::Begin: Stat _path=/mnt/cesSharedRoot/ces/s3-config
2024-06-13 05:53:18.049386 [PID-1794896/TID-1794905] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config _uid=0 _gid=0 _backend=GPFS
(node:1794896) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.

Please migrate your code to use AWS SDK for JavaScript (v3).
For more information, check the migration guide at https://a.co/7PzMCcy
(Use `node --trace-warnings ...` to show where the warning was created)
2024-06-13 05:53:18.050086 [PID-1794896/TID-1794905] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config _uid=0 _gid=0 geteuid()=0 getegid()=0Jun-13 5:53:18.050 [/1794896]   [LOG] CONSOLE:: read_rand_seed: reading 32 bytes from /dev/urandom ...
 getuid()=0 getgid()=0
2024-06-13 05:53:18.051139 [PID-1794896/TID-1794905] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config  took: 0.266377 ms
Jun-13 5:53:18.062 [/1794896]   [LOG] CONSOLE:: read_rand_seed: got 32 bytes from /dev/urandom, total 32 ...
Jun-13 5:53:18.063 [/1794896]   [LOG] CONSOLE:: read_rand_seed: closing fd ...
2024-06-13 05:53:18.063688 [PID-1794896/TID-1794896] [L1] FS::Stat::OnOK: _path=/mnt/cesSharedRoot/ces/s3-config _stat_res.st_ino=54534 _stat_res.st_size=8192
Jun-13 5:53:18.063 [/1794896]    [L1] core.cmd.manage_nsfs:: nsfs.check_and_create_config_dirs: config dir exists: /mnt/cesSharedRoot/ces/s3-config
2024-06-13 05:53:18.064243 [PID-1794896/TID-1794896] [L1] FS::FSWorker::Begin: Stat _path=/mnt/cesSharedRoot/ces/s3-config/buckets
2024-06-13 05:53:18.064315 [PID-1794896/TID-1794908] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/buckets Jun-13 5:53:18.064 [/1794896]   [LOG] CONSOLE:: init_rand_seed: seeding with 32 bytes
_uid=0 _gid=0 _backend=GPFS
2024-06-13 05:53:18.064761 [PID-1794896/TID-1794908] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/buckets _uid=0 _gid=0 geteuid()=0 getegid()=0 getuid()=0 getgid()=0
2024-06-13 05:53:18.066812 [PID-1794896/TID-1794908] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/buckets  took: 0.193503 ms
2024-06-13 05:53:18.067332 [PID-1794896/TID-1794896] [L1] FS::Stat::OnOK: _path=/mnt/cesSharedRoot/ces/s3-config/buckets _stat_res.st_ino=14338 _stat_res.st_size=262144
Jun-13 5:53:18.067 [/1794896]    [L1] core.cmd.manage_nsfs:: nsfs.check_and_create_config_dirs: config dir exists: /mnt/cesSharedRoot/ces/s3-config/buckets
2024-06-13 05:53:18.067733 [PID-1794896/TID-1794896] [L1] FS::FSWorker::Begin: Stat _path=/mnt/cesSharedRoot/ces/s3-config/accounts
2024-06-13 05:53:18.067782 [PID-1794896/TID-1794909] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/accounts _uid=0 _gid=0 _backend=GPFS
2024-06-13 05:53:18.067844 [PID-1794896/TID-1794909] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/accounts _uid=0 _gid=0 geteuid()=0 getegid()=0 getuid()=0 getgid()=0
2024-06-13 05:53:18.067998 [PID-1794896/TID-1794909] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/accounts  took: 0.089916 ms
2024-06-13 05:53:18.068046 [PID-1794896/TID-1794896] [L1] FS::Stat::OnOK: _path=/mnt/cesSharedRoot/ces/s3-config/accounts _stat_res.st_ino=14339 _stat_res.st_size=16384
Jun-13 5:53:18.068 [/1794896]    [L1] core.cmd.manage_nsfs:: nsfs.check_and_create_config_dirs: config dir exists: /mnt/cesSharedRoot/ces/s3-config/accounts
2024-06-13 05:53:18.068318 [PID-1794896/TID-1794896] [L1] FS::FSWorker::Begin: Stat _path=/mnt/cesSharedRoot/ces/s3-config/access_keys
2024-06-13 05:53:18.068362 [PID-1794896/TID-1794910] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/access_keys _uid=0 _gid=0 _backend=GPFS
2024-06-13 05:53:18.068425 [PID-1794896/TID-1794910] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/access_keys _uid=0 _gid=0 geteuid()=0 getegid()=0 getuid()=0 getgid()=0
2024-06-13 05:53:18.068545 [PID-1794896/TID-1794910] [L1] FS::FSWorker::Execute: Stat _path=/mnt/cesSharedRoot/ces/s3-config/access_keys  took: 0.070583 ms
2024-06-13 05:53:18.068589 [PID-1794896/TID-1794896] [L1] FS::Stat::OnOK: _path=/mnt/cesSharedRoot/ces/s3-config/access_keys _stat_res.st_ino=14340 _stat_res.st_size=65536
Jun-13 5:53:18.068 [/1794896]    [L1] core.cmd.manage_nsfs:: nsfs.check_and_create_config_dirs: config dir exists: /mnt/cesSharedRoot/ces/s3-config/access_keys
2024-06-13 05:53:18.068834 [PID-1794896/TID-1794896] [L1] FS::FSWorker::Begin: Stat _path=/var/run/noobaa-nsfs/wal
2024-06-13 05:53:18.068879 [PID-1794896/TID-1794911] [L1] FS::FSWorker::Execute: Stat _path=/var/run/noobaa-nsfs/wal _uid=0 _gid=0 _backend=GPFS
2024-06-13 05:53:18.068927 [PID-1794896/TID-1794911] [L1] FS::FSWorker::Execute: Stat _path=/var/run/noobaa-nsfs/wal _uid=0 _gid=0 geteuid()=0 getegid()=0 getuid()=0 getgid()=0
2024-06-13 05:53:18.069014 [PID-1794896/TID-1794911] [L1] FS::FSWorker::Execute: Stat _path=/var/run/noobaa-nsfs/wal  took: 0.038081 ms
2024-06-13 05:53:18.069059 [PID-1794896/TID-1794896] [L1] FS::FSWorker::OnError: Stat _path=/var/run/noobaa-nsfs/wal  error.Message()=Invalid argument
Jun-13 5:53:18.069 [/1794896]    [L1] core.cmd.manage_nsfs:: nsfs.check_and_create_config_dirs: could not create pre requisite path /var/run/noobaa-nsfs/wal
2024-06-13 05:53:18.069747 [PID-1794896/TID-1794896] [L1] FS::FSWorker::Begin: Readdir _path=/mnt/cesSharedRoot/ces/s3-config/buckets
2024-06-13 05:53:18.069859 [PID-1794896/TID-1794912] [L1] FS::FSWorker::Execute: Readdir _path=/mnt/cesSharedRoot/ces/s3-config/buckets _uid=0 _gid=0 _backend=GPFS
2024-06-13 05:53:18.070098 [PID-1794896/TID-1794912] [L1] FS::FSWorker::Execute: Readdir _path=/mnt/cesSharedRoot/ces/s3-config/buckets _uid=0 _gid=0 geteuid()=0 getegid()=0 getuid()=0 getgid()=0
2024-06-13 05:53:18.073272 [PID-1794896/TID-1794912] [L1] FS::FSWorker::Execute: Readdir _path=/mnt/cesSharedRoot/ces/s3-config/buckets  took: 2.68073 ms
2024-06-13 05:53:18.073537 [PID-1794896/TID-1794896] [L1] FS::FSWorker::OnOK: Readdir _path=/mnt/cesSharedRoot/ces/s3-config/buckets

parse error: Unfinished JSON term at EOF at line 4594, column 0
```

@guymguym (Member)

Hi @PravinRanjan10
I checked the output file you provided and after stripping off the log prints I can parse it with jq successfully.
Which version of jq are you using?
Does this work?

```
noobaa-cli bucket list 2>/dev/null >bucket-list-output
jq <bucket-list-output
```

@PravinRanjan10 (Collaborator)

> Hi @PravinRanjan10 I checked the output file you provided and after stripping off the log prints I can parse it with jq successfully. Which version of jq are you using? Does this work?
>
> ```
> noobaa-cli bucket list 2>/dev/null >bucket-list-output
> jq <bucket-list-output
> ```

@guymguym jq version:

```
jq-1.6-15.el9.x86_64
```

Yes, if the number of buckets is smaller (around <=2.5k), it works fine with the pipe and jq.

@guymguym (Member)

@PravinRanjan10 can you test without a pipe like I suggested above?

@romayalon (Contributor)

@PravinRanjan10 @guymguym
I tried it this way: `cat noobaa-cli-bucket-list.log | jq`
and it worked; I don't see any issue.

@PravinRanjan10 (Collaborator)

Actually, the problem occurs only with a pipe. In our code we collect the output of `noobaa-cli bucket list` and parse it. The problem is that as the bucket list grows, we have to read that output through a stdout pipe before parsing.

@guymguym (Member)

@romayalon I was trying to reproduce with the script below, but it doesn't reproduce (not even with 100,000).
So we need to recreate this 5000-buckets case on a dev env and investigate further.

```
node -e '
  json = (name, count) => JSON.stringify({ response: { reply: Array(count).fill().map((x,i)=>({name:name+"-"+i})) } }, null, 2);
  write = (str) => { process.stdout.write(str + "\n", () => process.exit(0)) };
  write(json("buckets", 5000));
' | jq ".response.reply[-3:][].name"

"buckets-4997"
"buckets-4998"
"buckets-4999"
```

@romayalon (Contributor)

@PravinRanjan10 I created 6000 buckets and couldn't reproduce your issue; can we have access to your machine?

@romayalon (Contributor)

Updating that I had a call with @PravinRanjan10, and after upgrading to the latest RPM we could no longer see the issue.
Pravin is currently running some more tests to verify that we can close this one.

@romayalon (Contributor)

Closing the issue per @PravinRanjan10 confirmation.
