
Install pytest based tests #1525

Draft · wants to merge 24 commits into main from wip/pytest-installed
Conversation

@swick (Contributor) commented Dec 4, 2024

I personally don't really understand the value of installed tests, so this is more of a question whether it makes sense to do it. The plan is to remove the C-based tests soon, which would mean we'd have no more installed tests.

Only the last commit is relevant.

/cc @smcv

swick added 6 commits December 3, 2024 23:34
Without libumockdev-preload.so the tests all fail because we ensure that
it is preloaded. However, we did not make sure that umockdev is actually
available.

This commit ensures umockdev is available when we run pytest-based tests, imports umockdev, and adds type annotations for it.
Otherwise the sound validator will reject what the tests consider a
valid sound and fail.
These functions are useful for calling into the portals and other D-Bus interfaces, and for waiting for the right conditions.

Currently, tests use a PortalMock instance instead, which only allows interacting with the mock and the "main" portal of the test. The new API makes it easy to interact with any D-Bus service.

The next commit will convert the tests to make use of this new API
instead of PortalMock.
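The "waiting for the right conditions" part of the helper API described above might look roughly like the following sketch. The `wait_for` name and signature are assumptions for illustration, not the PR's actual helpers:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns truthy or the timeout expires.

    Returns True if the condition became true in time, False otherwise.
    This is a generic sketch of a test helper; the real helpers in the
    PR operate on D-Bus connections directly.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```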
The goal here is to prepare the tests for a new fixture design.

The `portals` fixture is responsible for starting everything to have a
working xdg-desktop-portal instance on the session bus. For now it just
relies on PortalMock for it but that will change in the future.

The `dbus_con` and `dbus_con_sys` fixtures return the mocked session and system D-Bus connections. They currently also rely on PortalMock to get the connections.

The new test helper API is based on dbus connections instead of the
PortalMock, so those fixtures make it possible to use the new API.
Now that we have the test fixtures and the new test API in place, we can
move our existing tests over to it.
@swick swick force-pushed the wip/pytest-installed branch from 25ccd86 to a99bb49 Compare December 4, 2024 19:11
swick added 17 commits December 4, 2024 21:36
This commit moves all the mocking to conftest.py and all the test
helpers to __init__.py. The PortalMock class is removed in favor of
fixtures which allow adjusting the behavior more easily.

Each test now defines all the dbusmock templates that it requires instead of special-casing a "main" portal.

The xdg-desktop-portal, xdg-document-portal and xdg-permission-store binaries under test are passed in via environment variables.
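The lookup could be as simple as the following sketch; the environment variable name and fallback path here are hypothetical, not necessarily the ones the harness uses:

```python
import os

def binary_under_test(var, fallback):
    """Return the path to a portal binary, preferring the environment.

    `var` and `fallback` are illustrative; the real harness may use
    different variable names and defaults.
    """
    return os.environ.get(var, fallback)

xdp = binary_under_test("XDG_DESKTOP_PORTAL", "/usr/libexec/xdg-desktop-portal")
```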

It creates the portal config at runtime and looks up the ASAN
suppression from the source directory.

This means we have no test build dependency and can run the tests from
the source directory.
This ensures that our typing information is correct. Unfortunately, we cannot enable mypy checks in pre-commit because pre-commit passes the list of changed Python files to mypy, which then ignores our exclude rule.
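For context, a commonly used workaround for this pre-commit behaviour (not applied in this PR) is to set `pass_filenames: false` on the hook, so mypy runs over the whole tree and its own `exclude` rule applies. A sketch, with an illustrative repo and rev:

```
# .pre-commit-config.yaml (sketch)
repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.2
    hooks:
      - id: mypy
        pass_filenames: false
```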
The camera tests require a backend for access and lockdown, so add them
as well.
This will be useful for the notification portal and the permission store
tests because they require complex typed arguments and async calls.
The pytest harness already knows how to set up all the portal bits and
pieces so we can drop the shell script and the dbus service activation
configuration.
@swick swick force-pushed the wip/pytest-installed branch from 5362611 to dd4ec29 Compare December 4, 2024 21:38
@swick (Contributor, Author) commented Dec 4, 2024

The part that's quite awkward is that pytest-tap isn't in the Ubuntu repos, which means I have to install it into the system using pip for the installed tests to work.

The alternative is to just not use TAP output. Again, I don't know enough about those tests to make a reasonable decision here.

@smcv (Collaborator) commented Dec 5, 2024

I personally don't really understand the value of installed tests, so this is more of a question whether it makes sense to do it.

Most of the value of as-installed tests is that distributions can re-run the test against a new version of a dependency (like GLib or pytest), without recompiling the dependent software (in this case xdg-desktop-portal), to simulate what will happen on end-user systems in apt upgrade or equivalent.

This avoids spending time (a lot of time in some cases) on recompiling the dependent software. More importantly, it avoids hiding problems - if a library like GLib breaks its ABI, recompiling the dependent software under test against the new version will usually work around the ABI break by building new binaries that target the new ABI, whereas re-running previously-compiled tests (which expect the old ABI) will detect the ABI break so that it can be investigated and fixed.

Debian and Ubuntu routinely do this, via the autopkgtest framework.

A secondary reason to use as-installed tests is that they are often running in a less "fake" environment than build-time tests. For example, during as-installed tests, we can assume that x-d-p is a fully working D-Bus-activatable service, so if we let the message bus start it via service activation, that will detect any brokenness that might exist in its .service file. Similarly, we implicitly check for its ability to load the "real" bwrap from the PATH and its ability to find the "real" xdg-desktop-portal-validate-* executables in /usr/libexec; and unlike build-time tests being run by a distro autobuilder, the as-installed tests will usually not be running in a chroot or a heavily-constrained container, so we won't usually need workarounds like #1498.

Debian and Ubuntu currently run as-installed tests in either privileged lxc, some sort of lxd (privileged, I think), or a qemu VM, depending on CPU architecture, with Podman as a possible future replacement for lxc, and it is possible to flag tests as isolation-machine (meaning "needs machine-level testbed isolation"; we do this for xdg-desktop-portal) so that they will automatically be skipped on the architectures that are not using qemu. This makes the test environment a much more realistic reflection of what will happen on end-user systems than the heavily-constrained environment of an autobuilder.
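For reference, the isolation-machine restriction mentioned here is declared in the package's debian/tests/control. A minimal sketch, with illustrative test and dependency names:

```
Tests: installed-tests
Depends: xdg-desktop-portal, python3-pytest, python3-dbusmock
Restrictions: isolation-machine
```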

I think Fedora also has some amount of "as-installed" testing, although I don't know the finer details. I believe Fedora uses mock (a restricted chroot/container vaguely similar to Debian's schroot) for autobuilders and build-time testing, but then uses qemu or some sort of more permissive container technology for "as-installed" testing.

@swick (Contributor, Author) commented Dec 5, 2024

Thanks for the thorough explanation.

The first reason would indicate that we should install the pytest based tests as well.

The second reason cannot be realized with the pytest based tests because they tightly control startup of all components themselves. The framework is flexible enough that maybe one could add a special mode for installed tests. The current C portal tests also tightly control the startup so that isn't a regression. Some of the other C based tests however rely on a less "fake" environment.

Given that I want to drop all the C tests, do you have concerns about that?

@smcv (Collaborator) commented Dec 5, 2024

The part that's quite awkward is that pytest-tap isn't in the ubuntu repos
...
The alternative is to just not use TAP output.

TAP output is a nice-to-have, but I don't consider it to be important.

We have https://github.com/python-tap/tappy in Debian/Ubuntu (as python3-tap), but I don't think we have its pytest plugin.

@smcv (Collaborator) commented Dec 5, 2024

The second reason cannot be realized with the pytest based tests because they tightly control startup of all components themselves. The framework is flexible enough that maybe one could add a special mode for installed tests. The current C portal tests also tightly control the startup so that isn't a regression.

I think that's reasonable: I can see that the tests for x-d-p probably do need to exercise quite tight control over it so that it will select mock portals (which are testable) instead of the real implementations (which are not).
