The configuration file `file.yml`:
```yaml
cluster:
  user: 'ubuntu'
  head: "STORAGE01"
  clients: ["STORAGE01"]
  osds: ["STORAGE01"]
  mons:
    STORAGE01:
      a: "127.0.0.1:6789"
  osds_per_node: 1
  fs: 'xfs'
  mkfs_opts: '-f -i size=2048'
  mount_opts: '-o inode64,noatime,logbsize=256k'
  conf_file: '/home/ubuntu/cbt/ceph.conf.1osd'
  iterations: 1
  use_existing: False
  clusterid: "ceph"
  tmp_dir: "/tmp/cbt"
  pool_profiles:
    rbd:
      pg_size: 4096
      pgp_size: 4096
      replication: 'rbd'
benchmarks:
  radosbench:
    op_size: [ 4194304, 524288, 4096 ]
    write_only: False
    time: 300
    concurrent_ops: [ 1 ]
    concurrent_procs: 1
    use_existing: True
    pool_profile: rbd
```
The `ceph.conf.1osd` file:
```ini
[global]
osd pool default size = 1
auth cluster required = none
auth service required = none
auth client required = none
keyring = /etc/ceph/ceph.client.admin.keyring
osd pg bits = 8
osd pgp bits = 8
log to syslog = false
log file = /tmp/cbt/ceph/log/$name.log
public network = 172.30.89.0/24
cluster network = 172.30.89.0/24
rbd cache = true
osd scrub load threshold = 0.01
osd scrub min interval = 137438953472
osd scrub max interval = 137438953472
osd deep scrub interval = 137438953472
osd max scrubs = 16
filestore merge threshold = 40
filestore split multiple = 8
osd op threads = 8
mon pg warn max object skew = 100000
mon pg warn min per osd = 0
mon pg warn max per osd = 32768

[mon]
mon data = /tmp/cbt/ceph/mon.$id

[mon.a]
host = STORAGE01
mon addr = 127.0.0.1:6789

[osd.0]
host = STORAGE01
osd data = /tmp/cbt/mnt/osd-device-0-data
osd journal = /dev/disk/by-partlabel/osd-device-0-journal
```
Command:

```shell
python3 cbt.py --archive=/home/ubuntu/cbt/archive --conf=./ceph.conf.1osd ./file.yml
```
Error:

```
Exception: checked_Popen args=scp ubuntu@STORAGE01:/tmp/cbt/ceph/client.admin/keyring /tmp/cbt/ceph/client.admin/keyring.tmp continue_if_error=False rtncode=1
stdout:
stderr:
/tmp/cbt/ceph/client.admin/keyring.tmp: No such file or directory
```
The `client.admin` folder does not exist on the target machine, and CBT does not try to create it before copying the keyring.
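A minimal sketch of the missing step, assuming the fix is to pre-create the local destination directory before invoking `scp` (the helper names below are illustrative, not CBT's actual API):

```python
import os
import subprocess

def ensure_parent_dir(local_path):
    """Create the destination directory if it is missing -- the step the
    traceback above suggests is skipped before scp runs."""
    os.makedirs(os.path.dirname(local_path), exist_ok=True)

def fetch_remote_file(user, host, remote_path, local_path):
    """Hypothetical helper: pre-create the local directory, then copy
    the remote file with scp (same command shape as in the error)."""
    ensure_parent_dir(local_path)
    subprocess.check_call(
        ["scp", "%s@%s:%s" % (user, host, remote_path), local_path]
    )
```

With `ensure_parent_dir("/tmp/cbt/ceph/client.admin/keyring.tmp")` run first, the `No such file or directory` error from `scp` should not occur.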