This repository has been archived by the owner on Jul 3, 2024. It is now read-only.

Our testing policy

EnergyCube edited this page Sep 3, 2023 · 1 revision

Introduction

This document outlines the testing policy for the set of extensions developed by AstrodevLabs to add features to Visual Studio Code.

Testing Approach

Our testing approach aims to ensure that the extensions and the Language Server Protocol (LSP) implementation are tested extensively for functionality, performance, and reliability. The key aspects of our testing approach are:

  1. Functional Testing: Verify that the extensions and the LSP implementation meet the specified requirements and perform their intended functions correctly.

  2. Performance Testing: Evaluate the responsiveness, memory consumption, and computational efficiency of the extensions and LSP under varying workloads.

  3. Reliability Testing: Identify and rectify any potential defects, errors, or crashes that may impact the stability and reliability of the extensions and LSP.

Testing Methodologies

To achieve comprehensive testing, we will employ the following testing methodologies:

  1. Unit Testing: Write unit tests to validate individual components, functions, and modules of the extensions, which are written in TypeScript. Use a testing framework (TBD: Jest? Mocha?) to implement and run these unit tests for the extensions, and the native Rust test framework for the core.

  2. Integration Testing: Conduct integration testing to ensure seamless integration between TypeScript and the underlying Rust codebase. Verify that the TypeScript extensions can use the Rust core effectively.

  3. End-to-End Testing: Develop end-to-end tests that cover the complete workflows and scenarios of the extensions within Visual Studio Code. Provide a set of representative test cases that simulate real-world usage of the extensions.

  4. Performance Testing: Run performance tests under various load conditions to evaluate the extensions' response times, resource utilization, and scalability. (Tools TBD)

  5. Usability Testing: Conduct usability testing with a diverse user base (through pre-releases or, if not available, by manually testing all supported conditions) to validate the extensions' ease of use, user experience, and adherence to user-centered design principles.

Test Environment

To ensure consistent and reproducible test results, we define a dedicated test environment with the following specifications:

  • Operating System: Windows >= 7 (same as Visual Studio Code), Ubuntu, Fedora (TBD: Arch?)
  • Visual Studio Code version: latest stable release
  • TypeScript version: latest stable release
  • Rust version: TBD
  • Required dependencies and libraries to run and test the extensions

Test Automation

Automated testing is key to maintaining the quality and efficiency of the testing process. The automation framework will enable quicker test execution, broader test coverage, and continuous integration capabilities. We will utilize the following tools and frameworks for test automation:

Continuous Integration: Integrate the testing process into the CI/CD pipeline using GitHub Actions. Configure test runs to trigger automatically upon code changes or scheduled builds.
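As an illustration, a GitHub Actions workflow for this setup could look roughly like the sketch below. The file path, job names, and OS matrix are assumptions, not the project's actual configuration:

```yaml
# .github/workflows/test.yml -- illustrative sketch only; names and
# matrix entries are assumptions, not the project's actual workflow.
name: tests
on: [push, pull_request]   # trigger on code changes, per the policy above
jobs:
  test:
    strategy:
      matrix:
        os: [windows-latest, ubuntu-latest]   # mirror the test environment list
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: lts/*
      - run: npm ci          # install extension dependencies
      - run: npm test        # TypeScript unit/integration tests
      - run: cargo test      # native Rust tests for the core
```

A scheduled `cron` trigger could be added under `on:` to cover the "scheduled builds" case.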

Test Documentation

Maintain comprehensive documentation to support testing activities, including:

  • Test Plan: Detailed plan outlining the testing objectives, scope, test coverage, and test schedule.
  • Test Cases: Documented test cases covering functional scenarios, edge cases, and integration tests.
  • Test Results: Capture and record test results, including pass/fail status, logs, and screenshots for regression analysis and defect tracking.

Test Reporting and Defect Tracking

Utilize a centralized defect tracking system (TBD) to log, track, and prioritize identified defects.

Test Quality Requirements

  • For an extension to be considered sufficiently tested, it must have at least 80% code coverage, with thorough testing of its most critical parts.
  • If any test fails, the extension is considered unstable and cannot be delivered to users.
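The 80% floor can be enforced automatically by the test runner. Assuming Jest is ultimately chosen (the framework is still TBD above), a coverage gate could be configured like this sketch:

```typescript
// jest.config.ts -- sketch assuming Jest is the chosen framework (TBD).
import type { Config } from "jest";

const config: Config = {
  preset: "ts-jest",        // assumes ts-jest for TypeScript sources
  collectCoverage: true,
  coverageThreshold: {
    // Fail the test run if global coverage drops below the 80% policy floor.
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};

export default config;
```

With this in place, a coverage regression fails CI the same way a failing test does, which directly implements the "cannot be delivered" rule.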