
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400. #924

Open
yongshengma opened this issue Feb 23, 2024 · 7 comments

Comments

@yongshengma

Hello,

I'm installing a self-hosted engine on an oVirt Node NG 4.5.5 host. I chose nfs when specifying the storage:

Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:

However, it reports an error and then shows the storage prompt again:

```
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
```

Any idea please? Thanks.

Yongsheng

@yongshengma (Author) commented Feb 25, 2024

This issue still exists. It shows up when hosted-engine --deploy --4 reaches
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain].

I'm pretty sure both DNS and the NFS share work correctly. DNS is on the same private network as the oVirt host and engine; the NFS share is on a separate network dedicated to storage.

All the steps look fine until the prompt Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:. The NFS share was mounted correctly when I checked with df -h:

```
192.168.20.67:/mnt/data/engine  7.3T  1006G  6.3T  14%  /rhev/data-center/mnt/192.168.20.67:_mnt_data_engine
```
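One thing df -h does not show is ownership. VDSM expects the export root to be owned by vdsm:kvm (uid/gid 36:36), and wrong ownership on the NFS side can break storage-domain operations even when the mount itself succeeds. A minimal extra check against my mount path:

```sh
# Print the numeric uid/gid of the mounted export root; VDSM expects 36:36.
ls -ldn /rhev/data-center/mnt/192.168.20.67:_mnt_data_engine
```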

But the deploy got stuck on this error:

```
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
```

BTW, I run the deploy as root.

Yongsheng

@agibson684

I got this issue too. I hope it can be fixed soon; or is there a workaround?

@yongshengma (Author)
One more detail about this issue.

When running dnf install ovirt-engine-appliance -y, I always get a GPG error:

```
oVirt upstream for CentOS Stream 8 - oVirt 4.5    2.8 MB/s | 2.9 kB    00:00
GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5 (0xFE590CB7) is already installed
The GPG keys listed for the "oVirt upstream for CentOS Stream 8 - oVirt 4.5" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository.
Failing package is: ovirt-engine-appliance-4.5-20231201120252.1.el8.x86_64
GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: GPG check FAILED
```

So --nogpgcheck has to be appended.
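In other words:

```sh
# Workaround: skip the GPG check when installing the appliance RPM.
dnf install ovirt-engine-appliance -y --nogpgcheck
```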

@gocallag (Contributor)

The GPG error is a different, unrelated issue IMO; it only applies to CentOS Stream 8. I would suggest creating a new issue for it.

@gocallag (Contributor) commented Mar 11, 2024

> This issue still exists. It shows up when hosted-engine --deploy --4 reaches [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]. […]

I was able to re-create this HTTP response code 400 issue with my QNAP acting as a NAS. I got the same error with a standalone engine trying to add the same NFS mount, but I was able to work around it by setting 'squash all users' on the QNAP side and making sure the user I was squashing to had access to the share on the QNAP side. I still have the issue with the self-hosted engine and will explore it when I get a chance.
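For anyone on a plain Linux NFS server rather than a QNAP, the rough equivalent of that setting would be an all_squash export mapped to vdsm:kvm. A sketch, assuming the export path from this thread; the exact options are my guess at what matches the QNAP behaviour:

```sh
# Assumed /etc/exports entry squashing all client users to vdsm:kvm (36:36):
#   /mnt/data/engine  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

chown 36:36 /mnt/data/engine   # the export root must be writable by vdsm:kvm
exportfs -ra                   # re-export everything in /etc/exports
```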

@CyberPunkXXX

I have tried it and still keep getting the error even with 'squash all users' set. Any fix?

@gocallag (Contributor) commented Mar 23, 2024

> I have tried it and still keep getting the error even with 'squash all users' set. Any fix?

What does the vdsm.log say? My issue was definitely a problem with my NFS backend. The vdsm.log on your oVirt node will tell you what the problem is (in vague terms). I certainly agree that the error UX leaves a lot to be desired.
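Assuming a default install, something along these lines on the node should surface the underlying failure around the time of the error:

```sh
# VDSM logs to /var/log/vdsm/vdsm.log by default; pull the recent errors.
grep -iE 'error|traceback' /var/log/vdsm/vdsm.log | tail -n 50
```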
