Robottelo Contributing Guidelines and Best Practices

Introduction

Welcome to the Robottelo project! This document provides essential information for contributing to the project, including best practices for writing and organizing tests, managing helper functions, code structure, and guidelines for pull requests.

Before contributing, please ensure your development environment is set up correctly by following the Robottelo Setup Guide.

The goal is to ensure contributors can efficiently locate existing helpers, add new ones in the right place, and maintain consistency across the project.

General Best Practices

  • API Calls First: Prefer API calls over UI and CLI interactions for test setup unless a specific endpoint setup is needed; the UI should be the last resort.
  • Target Satellite Fixtures: Use target_sat fixtures in tests to leverage the wide range of helper methods provided by the Satellite object (see the sketch after this list).
  • Non-Reusable Helpers: Place non-reusable helpers close to the test module itself.
  • Avoid One-Liners: If a helper would be a one-liner, inline the code at its call site instead of defining a function.
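
As a minimal sketch of the API-first pattern using target_sat (the entity call follows robottelo's nailgun-backed API; the test name and assertion are illustrative):

    def test_positive_org_create(target_sat):
        # Arrange/Act through the API rather than the UI or CLI
        org = target_sat.api.Organization().create()
        # Assert on the API response; a UI or CLI test would act there instead
        assert org.id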

Test Case Guidelines

Test Case IDs

  • Retain test case IDs unless the test's logic or purpose significantly changes.
  • Generate a new test ID if the fundamentals of the test change.

Test Structure

  • Robottelo follows the pytest framework.
  • Arrange-Act-Assert: Structure tests by performing setup (Arrange) in fixtures, actions (Act), and assertions (Assert). For CLI and UI modules, use APIs in the Arrange phase and the UI/CLI for the Act phase, as in the sketch below.
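
A sketch of the Arrange-Act-Assert pattern for a CLI test, assuming the standard target_sat fixture and nailgun-style API entities (the fixture and test names are illustrative):

    import pytest

    @pytest.fixture
    def function_org(target_sat):
        # Arrange: build test state through the API
        return target_sat.api.Organization().create()

    def test_positive_org_info(function_org, target_sat):
        # Act: exercise the interface under test (CLI here)
        result = target_sat.cli.Org.info({'id': function_org.id})
        # Assert: verify the expected outcome
        assert result['name'] == function_org.name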

Test Generation Mechanism

  • Understand the test generation mechanism through the supportability configuration. [Link to the documentation TBD].

Test Writing Conventions

Each test's name must start with test_ so it is properly collected by pytest. The test must include a docstring containing critical information and several keywords for documentation and tracking; a combined example follows the keyword lists below. See the Test Case Naming Conventions wiki page for more details.

Required Docstring Content

  • Short Description: Briefly describe what the test does.
  • :id: Unique identifier for the test (e.g., using uuidgen or python -c 'import uuid; print(uuid.uuid4())').
  • :parametrized: {yes|no} Specify if the test is parametrized (using @pytest.mark.parametrize). This is important for tools like Polarion.
  • :setup: Describe steps needed to prepare the testing environment, often handled by fixtures.
  • :steps: Describe the steps the test performs.
  • :expectedresults: Describe the expected outcomes of the test.

Optional Docstring Keywords

  • :BlockedBy: SAT-12345 Use to skip the test until a specific Jira issue is resolved.
  • :Verifies: SAT-67890 Use to indicate that the test covers a particular Jira issue.
  • :customerscenario: true Use if the test covers a bug linked to a customer case.

Note

SAT-12345 and SAT-67890 are examples. Please make sure that the keyword value references the correct Jira project and issue number.

Inherited or Test-Specific Keywords

  • :CaseAutomation: {Automated|NotAutomated|ManualOnly} Typically set to Automated.
  • :CaseComponent: <component_name> E.g., Repositories, ActivationKeys.
  • :team: <team_name> E.g., Phoenix-subscriptions, Rocket.
  • :CaseImportance: {Critical|High|Medium|Low} Based on the importance or severity of the issue being tested.
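
Putting the required, optional, and inherited keywords together, a docstring might look like the sketch below (the UUID, Jira ID, and metadata values are illustrative):

    def test_positive_sync_custom_repo(target_sat, module_org):
        """Sync a custom repository and verify the sync succeeds.

        :id: 3c1f8e2a-5b6d-4f0e-9a7c-1d2e3f4a5b6c

        :parametrized: no

        :setup:
            1. Create an organization and a custom product (via fixtures).

        :steps:
            1. Create a custom repository.
            2. Sync the repository.

        :expectedresults: The sync task completes successfully.

        :Verifies: SAT-67890

        :customerscenario: true

        :CaseComponent: Repositories

        :team: Rocket

        :CaseImportance: High
        """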

Validating Docstrings

  • To check the validity of docstrings, use the command:

    make test-docstrings

Writing and Organizing Code

Helper Functions

  • Isolation and Encapsulation: Create helper functions to isolate shared logic and encapsulate complex operations.

  • Implementing Helper Functions Needed by Fixtures: Place reusable functions in the utils module. If the function is not reusable, keep it within the fixture's module.

  • Preferred User Interface for Cross-Interface Helpers: API helpers are preferred. Use CLI or UI helpers only if necessary.

  • Utility Helpers Placement: If utility helpers do not fit into existing modules, add them to robottelo/utils/__init__.py.

  • Upgrade Scenario Helpers: Follow the same rules as for other helpers. Place them where they logically fit within the structure.

  • Handling Duplicate Helper Methods: Consider merging duplicate helper methods with optional parameters if feasible. If there are distinct CLI/API requirements, keep them separate.

  • Organizing Helpers:

    • Use api_factory.py, cli_factory.py, and ui_factory.py for helpers related to API, CLI, and UI operations, respectively.
    • For helpers that operate across all interfaces (API/CLI/UI), prefer API-based helpers.
    • Place helpers in mixin classes for functionality extending the Satellite, Capsule, and Host classes (see the sketch after this list).
    • Add general-purpose helpers that don't operate on hosts to the utils package.
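
For instance, a host-centric helper could live in a mixin inherited by the Satellite class; the class and method below are hypothetical:

    # robottelo/host_helpers/satellite_mixins.py (hypothetical addition)
    class ContentInfoMixin:
        """Mixed into Satellite, so `self` is the Satellite host object."""

        def is_repo_synced(self, repo_id):
            # Read the repository through this host's own API binding
            repo = self.api.Repository(id=repo_id).read()
            return bool(repo.content_counts)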

Using Fixtures

  • Fixture Placement: Place fixtures in the pytest_fixtures directory. Core and component-specific fixtures should be organized under appropriate subdirectories.
  • Reusability: Reuse existing fixtures wherever possible. Before adding a new fixture, check for similar or existing ones.
  • Scope and Dependency: Choose fixtures when you need result caching via fixture scope, dependencies on other fixtures, or setup/teardown behavior (see the example below).
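
A sketch of a module-scoped fixture that caches its result and cleans up after itself (the activation key entity is illustrative; module_org is an existing robottelo fixture):

    import pytest

    @pytest.fixture(scope='module')
    def module_activation_key(target_sat, module_org):
        # Created once per module (scope-based caching), reusing module_org
        ak = target_sat.api.ActivationKey(organization=module_org).create()
        yield ak
        # Teardown runs after the last test in the module
        ak.delete()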

When should I prefer/not prefer fixtures over helper functions?

When to Prefer Fixtures

  1. Setup and Teardown: Use fixtures when you need to set up some state before tests run and clean it up afterward.
  2. Reusability: Fixtures are ideal when the same setup is required across multiple test cases.
  3. Consistency: They help ensure that tests run in a consistent environment.
  4. Performance: Fixtures can improve performance by avoiding repetitive setup code in each test.

When Not to Prefer Fixtures

  1. Simplicity: If the setup is simple and only used in a single test, a helper function might be more straightforward (see the sketch after this list).
  2. Readability: Helper functions can sometimes make tests easier to read and understand, especially for new contributors.
  3. Flexibility: Helper functions can be more flexible for tests that require slightly different setups.
  4. Isolation: If tests need to be isolated and independent, helper functions might be preferable to avoid shared state.
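
When the setup is trivial and single-use, a module-local helper can read better than a fixture; a hypothetical sketch:

    # Lives next to the one test that needs it, not in robottelo/utils
    def _create_named_org(target_sat, name):
        return target_sat.api.Organization(name=name).create()

    def test_positive_org_named(target_sat):
        org = _create_named_org(target_sat, 'example-org')
        assert org.name == 'example-org'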

Additional Considerations

  • Maintenance: Consider the maintenance overhead. Fixtures can become complex and harder to manage if not used carefully.
  • Debugging: Helper functions can make debugging easier since the setup code is closer to the test code.
  • Global Fixtures: Use the pytest_fixtures package for global fixtures, choosing core for framework-level or component for component-specific fixtures.

Using Constants

  • Use constants from robottelo.constants to avoid hardcoding values in tests and helpers, as in the sketch below.
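
For example, referencing a shared constant instead of a hardcoded literal (DEFAULT_ORG comes from robottelo.constants; the search call follows nailgun's query syntax):

    from robottelo.constants import DEFAULT_ORG

    def test_positive_default_org_present(target_sat):
        # Use the shared constant rather than the literal organization name
        orgs = target_sat.api.Organization().search(
            query={'search': f'name="{DEFAULT_ORG}"'}
        )
        assert orgs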

Packages and Modules

  • pytest_fixtures: Contains globally accessible fixtures for setup and teardown, organized by component and core functionality.

    • broker.py: Contains target_sat fixtures for consistent satellite instances.
    • contenthosts.py: Houses content host creation fixtures.
    • sat_cap_factory.py: Contains satellite/capsule creation fixtures.
    • xdist.py: Manages pytest-xdist test distribution for Robottelo.
  • robottelo: Contains modules for helper functions:

    • host_helpers/: Houses APIFactory, CLIFactory, UIFactory, and mixin classes.
    • utils/: General utilities that do not operate on hosts directly.
    • hosts.py: Contains the main classes for ContentHost, Capsule, and Satellite.

Finalizers

  • Use finalizers in pytest to clean up after tests. Proper resource cleanup is crucial to avoid side effects in other tests; a sketch follows below.
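
A sketch using request.addfinalizer so cleanup runs even when the test fails (the product entity is illustrative):

    import pytest

    @pytest.fixture
    def function_product(request, target_sat, module_org):
        product = target_sat.api.Product(organization=module_org).create()

        def _cleanup():
            # Runs after the test, pass or fail, to avoid leaking state
            product.delete()

        request.addfinalizer(_cleanup)
        return product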

Logging Guidelines

  • Extensive Logging: Log as much as possible to aid in debugging and understanding test flow; a usage sketch follows this list.
  • Log Levels:
    • DEBUG: For detailed diagnostic information.
    • INFO: General progress information.
    • WARNING: Indicators of potential issues.
    • ERROR: Errors that prevent further execution.
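
A usage sketch, assuming the shared logger exported by robottelo.logging (the function and entity are illustrative):

    from robottelo.logging import logger

    def sync_repo_with_logging(repo):
        logger.info('Starting sync for repository %s', repo.name)
        try:
            repo.sync()
        except Exception:
            logger.error('Sync failed for repository %s', repo.name)
            raise
        logger.debug('Post-sync content counts: %s', repo.read().content_counts)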

Pull Request Guidelines

  • Detailed Descriptions: Always fill in the PR description using the provided template. Include as much context as possible to facilitate the review process.
  • Use the PR Template: Follow the project's PR template to ensure all necessary information is included.

For more information on how your pull request will be reviewed, as well as the approval and merging process, please refer to the Reviewers Guide. Understanding the review workflow can help you align your submissions to ensure a smoother integration.

Label Overview

The Robottelo repository uses a variety of labels to help categorize and manage pull requests (PRs) and issues efficiently. The complete list of labels is available here. Below are some of the most important labels contributors should be aware of:

  • CherryPick and No-CherryPick: The CherryPick label indicates that the PR requires a cherry-pick (backport) to previous branches; when it is present, maintainers will create the cherry-picks for the specified branches. The contributor must add branch-specific labels to indicate which branches need the backport. If backporting the patch is not desired, assign No-CherryPick along with Stream to indicate that the change applies only to Satellite Stream.

  • Branch Labels (6.14.z, 6.15.z, 6.16.z, etc.): These labels specify the target branch for the cherry-pick, corresponding with the Satellite version. Contributors should apply these labels based on the branches to which they wish to backport their changes.

  • AutoMerge_Cherry_Picked: This label is applied to indicate that the backport PR(s) will be automatically merged if all checks pass successfully.

Using these labels properly helps maintainers understand your PR's purpose and handle the backporting smoothly. Before creating a new PR, always check for existing labels and apply the right ones to make the review and integration easier.

Pull Request Testing

After submitting a Pull Request (PR), the automated testing process known as PR Testing (PRT) is initiated. This process ensures that the PR undergoes rigorous testing before being merged.

To initiate PRT for your pull request:

  1. PRT Comment Initiation: Add a comment in the PR containing the trigger statement "trigger": "test-robottelo". This statement triggers the Jenkins job to run automated tests.
  2. Specify Test Details: You may include specific test configurations or test suites to be executed in the comment (see the example after this list).
  3. Automated Testing: Once triggered, Jenkins will run the specified tests as part of the CI/CD pipeline, and results will be reported directly on the PR page via GitHub's status API.
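
A PRT trigger comment might look like the following (the pytest path is illustrative; confirm the exact comment syntax on the PRT wiki page):

    trigger: test-robottelo
    pytest: tests/foreman/api/test_activationkey.py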

For detailed instructions on how the PRT process works, refer to the PRT wiki page.

FAQs

Where should a property/method describing Satellite/Capsule/Host be added?

  • Add properties/methods to the Satellite, Capsule, or ContentHost classes.

Where should operations on Satellite/Capsule/Host be implemented?

  • Add to the appropriate mixin classes in robottelo/host_helpers/*_mixins.py.

When to prefer host object methods over utils functions?

  • Prefer host object methods when they depend on existing host methods or attributes.

By adhering to these guidelines and best practices, you'll contribute to maintaining high code quality and consistency in the Robottelo project. Thank you for your contributions!