build: build-rpm.sh failing #283

Merged: 3 commits into eclipse-bluechi:main on May 19, 2023

Conversation

@dougsland (Contributor) commented May 7, 2023

Currently, build-scripts/build-rpm.sh is failing with a mkdir error at the end.

GH-Issue: #282

Signed-off-by: Douglas Schilling Landgraf <dougsland@redhat.com>

@dougsland changed the title from "build: make build-rpm.sh return correctly" to "build: build-rpm.sh failing" on May 7, 2023
@engelmi (Member) left a review comment:

Hi @dougsland, thanks for your contribution!
Could you also update the Packaging section in README.developer.md to mention that the artifact directory will default to /artifacts? Otherwise, the PR looks good to me.
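(For context, here is a minimal sketch of the artifact-directory handling under discussion, reconstructed from the `set -x` trace posted later in this thread. The override-variable name and the exact ordering are assumptions, not the script's actual contents:)

```bash
#!/usr/bin/env bash
# Sketch only: reconstructed from the trace in this thread, not the real script.

# Allow callers to override the artifact directory (variable name assumed);
# otherwise default to a date-stamped directory under the current tree.
if [[ -z "${ARTIFACTS_DIRECTORY:-}" ]]; then
    ARTIFACTS_DIRECTORY="$(pwd)/artifacts/rpms/$(date +%m-%d-%Y)"
fi

# mkdir -p succeeds whether or not the directory already exists,
# so the explicit -d check is belt-and-braces.
[[ -d "${ARTIFACTS_DIRECTORY}" ]] || mkdir -p "${ARTIFACTS_DIRECTORY}"

# Collect every built RPM into the artifact directory.
find rpmbuild -iname '*rpm' | xargs mv -t "${ARTIFACTS_DIRECTORY}"
```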

@dougsland (Contributor, Author):

Thanks @engelmi @mwperina, I will prepare a new patch during the day.


Currently, build-scripts/build-rpm.sh is failing with a mkdir error at the end.

GH-Issue: #282
Signed-off-by: Douglas Schilling Landgraf <dougsland@redhat.com>
@dougsland (Contributor, Author):

@mwperina @engelmi could you please re-review? Thanks!

@dougsland (Contributor, Author):

/retest

@dougsland (Contributor, Author) commented May 14, 2023

Hmm, I cannot see this failure during my builds:

Recommends: hirte-debugsource(x86-64) = 0.3.0-0.202305140825.gite7e186d.fc38
Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/rpmbuild/BUILDROOT/hirte-0.3.0-0.202305140825.gite7e186d.fc38.x86_64
Wrote: rpmbuild/x86_64/hirte-selinux-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Wrote: rpmbuild/x86_64/hirte-agent-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Wrote: rpmbuild/x86_64/hirte-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Wrote: rpmbuild/x86_64/hirte-ctl-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Wrote: rpmbuild/x86_64/hirte-ctl-debuginfo-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Wrote: rpmbuild/x86_64/hirte-agent-debuginfo-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Wrote: rpmbuild/x86_64/hirte-debuginfo-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Wrote: rpmbuild/x86_64/hirte-debugsource-0.3.0-0.202305140825.gite7e186d.fc38.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.Zel4H8
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd hirte-0.3.0
+ /usr/bin/rm -rf /root/rpmbuild/BUILDROOT/hirte-0.3.0-0.202305140825.gite7e186d.fc38.x86_64
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(rmbuild): /bin/sh -e /var/tmp/rpm-tmp.17r55R
+ umask 022
+ cd /root/rpmbuild/BUILD
+ rm -rf hirte-0.3.0 hirte-0.3.0.gemspec
+ RPM_EC=0
++ jobs -p
+ exit 0
++ pwd
++ date +%m-%d-%Y
+ ARTIFACTS_DIRECTORY=/root/rpm/hirte/artifacts/rpms/05-14-2023
+ [[ -z '' ]]
+ [[ -d /root/rpm/hirte/artifacts/rpms/05-14-2023 ]]
+ mkdir -p /root/rpm/hirte/artifacts/rpms/05-14-2023
+ find rpmbuild -iname '*rpm'
+ xargs mv -t /root/rpm/hirte/artifacts/rpms/05-14-2023
[root@dell730 hirte]# echo $?
0

@engelmi @mwperina @pypingou are you familiar with this one?

@pypingou (Member):

Where do you see a failure?

@dougsland (Contributor, Author):

> Where do you see a failure?

Looks like this one:
https://github.com/containers/hirte/actions/runs/4908515875/jobs/8773101650#step:8:1331

@pypingou (Member):

So the error I'm seeing in the tests here is in the "Show tmt log output in case of failure" step in https://github.com/containers/hirte/actions/runs/4959963305/jobs/8910675905?pr=283.
It seems to be a timeout error in one of the proxy service tests:


10:29:45                 out: test_start_proxy_service.py:61: 
10:29:45                 out: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
10:29:45                 out: ../../../hirte_test/test.py:104: in run
10:29:45                 out:     exec(ctrl_container, node_container)
10:29:45                 out: test_start_proxy_service.py:30: in exec
10:29:45                 out:     result, _ = ctrl.exec_run(f"hirtectl start {node_foo_name} requesting.service")
10:29:45                 out: ../../../hirte_test/container.py:67: in exec_run
10:29:45                 out:     result, output = self.container.exec_run(command)
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/podman/domain/containers.py:194: in exec_run
10:29:45                 out:     start_resp = self.client.post(
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/podman/api/client.py:306: in post
10:29:45                 out:     return self._request(
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/podman/api/client.py:404: in _request
10:29:45                 out:     self.request(
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/requests/sessions.py:587: in request
10:29:45                 out:     resp = self.send(prep, **send_kwargs)
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/requests/sessions.py:745: in send
10:29:45                 out:     r.content
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/requests/models.py:899: in content
10:29:45                 out:     self._content = b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b""
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/requests/models.py:816: in generate
10:29:45                 out:     yield from self.raw.stream(chunk_size, decode_content=True)
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/urllib3/response.py:628: in stream
10:29:45                 out:     data = self.read(amt=amt, decode_content=decode_content)
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/urllib3/response.py:567: in read
10:29:45                 out:     data = self._fp_read(amt) if not fp_closed else b""
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/urllib3/response.py:533: in _fp_read
10:29:45                 out:     return self._fp.read(amt) if amt is not None else self._fp.read()
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/http/client.py:466: in read
10:29:45                 out:     s = self.fp.read(amt)
10:29:45                 out: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
10:29:45                 out: 
10:29:45                 out: self = <socket.SocketIO object at 0x7f821fc5e590>
10:29:45                 out: b = <memory at 0x7f821fee7880>
10:29:45                 out: 
10:29:45                 out:     def readinto(self, b):
10:29:45                 out:         """Read up to len(b) bytes into the writable buffer *b* and return
10:29:45                 out:         the number of bytes read.  If the socket is non-blocking and no bytes
10:29:45                 out:         are available, None is returned.
10:29:45                 out:     
10:29:45                 out:         If *b* is non-empty, a 0 return value indicates that the connection
10:29:45                 out:         was shutdown at the other end.
10:29:45                 out:         """
10:29:45                 out:         self._checkClosed()
10:29:45                 out:         self._checkReadable()
10:29:45                 out:         if self._timeout_occurred:
10:29:45                 out:             raise OSError("cannot read from timed out object")
10:29:45                 out:         while True:
10:29:45                 out:             try:
10:29:45                 out: >               return self._sock.recv_into(b)
10:29:45                 out: E               Failed: Timeout >10.0s
10:29:45                 out: 
10:29:45                 out: /opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/socket.py:705: Failed
10:29:45                 out: =========================== short test summary info ============================
10:29:45                 out: FAILED test_start_proxy_service.py::test_start_proxy_service - Failed: Timeout >10.0s

@pypingou (Member):

/retest

@engelmi (Member) commented May 17, 2023

> Where do you see a failure?
>
> Looks like this one: https://github.com/containers/hirte/actions/runs/4908515875/jobs/8773101650#step:8:1331

@dougsland It seems like the proxy service is crashing hirte - this was a bug fixed in #278. Could you update this branch? This should solve the issue.
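(For reference, one common way to update a PR branch like this is a fetch-and-rebase; a sketch, assuming a remote named `upstream` pointing at the main repository:)

```bash
# Bring the local branch up to date with upstream main.
git fetch upstream
git rebase upstream/main

# Push the rebased branch back to the fork; --force-with-lease refuses
# to overwrite remote work the local clone has not seen.
git push --force-with-lease origin HEAD
```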

@dougsland (Contributor, Author):

> @dougsland It seems like the proxy service is crashing hirte - this was a bug fixed in #278. Could you update this branch? This should solve the issue.

Sure thing, I noticed it last night. Done. Thanks!

@dougsland (Contributor, Author):

Is this a new error, @engelmi, or did my rebase not work well?

14:42:57                 out: test_proxy_service_fails_on_execstart.py:27: AssertionError
14:42:57                 out: =========================== short test summary info ============================
14:42:57                 out: FAILED test_proxy_service_fails_on_execstart.py::test_proxy_service_fails_on_execstart - AssertionError: assert False

@engelmi (Member) commented May 18, 2023

> Is this a new error, @engelmi, or did my rebase not work well?

Your rebase worked :)
One of the 13 new tests was failing - the same happened in another PR. Waiting a (longer) while and re-running it resolved the issue. Unfortunately, I can't tell why it was failing - the integration tests probably need more refinement in the future.
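(As an aside, one quick way to gauge such flakiness locally is to re-run the single failing test in a loop; a sketch only: the test file name is taken from the failure above, and the plain pytest invocation is an assumption about the suite's setup:)

```bash
# Re-run the flaky integration test several times and count failures.
failures=0
for i in $(seq 1 10); do
    python -m pytest test_proxy_service_fails_on_execstart.py || failures=$((failures + 1))
done
echo "failed ${failures}/10 runs"
```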

@engelmi (Member) left a review comment:

LGTM

Simplify the bash script, thanks to mperina.

Signed-off-by: Douglas Schilling Landgraf <dougsland@redhat.com>
@mwperina (Member) left a review comment:

+1

@engelmi merged commit 142e053 into eclipse-bluechi:main on May 19, 2023