Page restructure and additional info load
jp-tosca committed Nov 8, 2023
1 parent 998cfe6 commit b73c77a
Showing 12 changed files with 170 additions and 37 deletions.
2 changes: 0 additions & 2 deletions doc/sphinx-guides/source/qa/checklist-qa-pr.rst

This file was deleted.

2 changes: 0 additions & 2 deletions doc/sphinx-guides/source/qa/checklist-qa-release.rst

This file was deleted.

2 changes: 0 additions & 2 deletions doc/sphinx-guides/source/qa/deploying-jenkins.rst

This file was deleted.

5 changes: 0 additions & 5 deletions doc/sphinx-guides/source/qa/index.rst
@@ -10,12 +10,7 @@ QA Guide
performance-tests
manual-testing
test-automation-integration
deploying-jenkins
other-approaches
tips-tricks
workflow-qa-pr
checklist-qa-pr
checklist-qa-release



12 changes: 6 additions & 6 deletions doc/sphinx-guides/source/qa/manual-testing.rst
@@ -1,4 +1,4 @@
Manual testing approach
Manual Testing Approach
=======================

.. contents:: |toctitle|
@@ -10,13 +10,13 @@ We use a risk-based, manual testing approach to achieve the most benefit with li

If it seems possible, through user error or some other occurrence, that such a serious failure could occur, we try to make it happen in the test environment. If the code has a UI component, we also do a limited amount of browser compatibility testing using the Chrome, Firefox, and Safari browsers. We do not currently do UX or accessibility testing on a regular basis, though both have been done product-wide by the Design group and by the community.

Examining a pull request for test cases:
Examining a Pull Request for Test Cases:
----------------------------------------
What does it do? What problem does it solve?
What Problem Does It Solve?
++++++++++++++++++++++++++++++++++++++++++++
Read the top part of the pull request for a description, notes for reviewers, and usually a how-to-test section. Does it make sense? If not, read the underlying ticket it closes and any release notes or documentation. Knowing in general what it does helps you think about how to approach it.

How is it configured?
How Is It Configured?
+++++++++++++++++++++
Most pull requests do not have any special configuration and are enabled on deployment, but some do. Configuration is part of testing: an admin will need to follow these instructions, so try them out. Plus, that is the only way you will get the feature working in order to test it!

@@ -32,7 +32,7 @@ Check permissions. Is this feature limited to a specific set of users? Can it be

Think about risk. Is the feature or function part of a critical area such as permissions? Does the functionality modify data? You may do more testing when the risk is higher.

Smoke test
Smoke Test
-----------

1. Go to the homepage on https://dataverse-internal.iq.harvard.edu. Scroll to the bottom to ensure the build number is the one you intend to test from Jenkins.
@@ -41,4 +41,4 @@ Smoke test
4. Create a dataset: I use the same username; I fill in the required fields (I do not use a template).
5. Upload 3 different types of files: I use a tabular file, 50by1000.dta, an image file, and a text file.
6. Publish the dataset.
7. Download a file, done.
7. Download a file.
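
A quick scripted complement to the manual smoke test, as a sketch (it assumes the standard Dataverse native API is enabled on the test server):

.. code-block:: bash

   # Confirm the server is up and which build is deployed.
   curl -s https://dataverse-internal.iq.harvard.edu/api/info/version

   # Confirm the homepage renders (expect HTTP 200).
   curl -s -o /dev/null -w "%{http_code}\n" https://dataverse-internal.iq.harvard.edu/
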
125 changes: 124 additions & 1 deletion doc/sphinx-guides/source/qa/other-approaches.rst
@@ -1,2 +1,125 @@
Other approaches to deploying and testing
=========================================
=========================================

.. contents:: |toctitle|
:local:

This workflow is fine for a single person testing one PR at a time, but it would be awkward or impossible for multiple people testing different PRs at the same time. A developer who is testing would likely just deploy to their dev environment. That might be OK, but it is not clear that such an environment is configured fully enough to offer a real-world testing scenario. An alternative might be to spin up an EC2 instance from the branch on AWS, potentially using sample data. Since that can take some time, another option is to spin up a few persistent AWS instances with sample data this way, one per tester, and just deploy new builds there when you want to test. You could even configure a Jenkins project for each, if desired, to maintain consistency in how they are built.

Tips and Tricks
---------------

- Start testing simply, with the most obvious test. You don’t need to know all your tests upfront. As you gain comfort and understanding of how it works, try more tests until you are done. If it is a complex feature, jot down your tests in an outline format: some beforehand as a guide, and some afterward as they occur to you. Save the doc in a testing folder (I have one on Google Drive); it may help with future testing.
- When in doubt, ask someone. If you are confused about how something is working, it may be something you have missed, or it could be a documentation issue, or it could be a bug! Talk to the code reviewer and the contributor/developer for their opinion and advice.
- Always tail the server.log file while testing. Open a terminal window to the test instance and run ``tail -F server.log``. This gives you a real-time sense of what the server is doing when you act and makes it easier to identify any stack trace on failure (see the sketch after this list).
- When overloaded, do the simple pull requests first to reduce the queue. It gives you a mental boost to complete something and reduces the perception of the amount of work still to be done.
- When testing a bug fix, try reproducing the bug on the demo before testing the fix; that way, you know you are taking the correct steps to verify that the fix worked.
- When testing an optional feature that requires configuration, do a smoke test without the feature configured and then with it configured. That way you know that folks using the standard config are unaffected by the option if they choose not to configure it.
- Back up your DB before applying an irreversible DB update if you are using a persistent/reusable platform. That way, if the update fails and you need to carry on testing something else, you can use the backup.
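
For the server log tip above, a minimal sketch (the log path assumes a standard Payara install; adjust for your domain):

.. code-block:: bash

   # Follow the server log in real time; -F keeps following across log rotation.
   tail -F /usr/local/payara5/glassfish/domains/domain1/logs/server.log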

Workflow for Completing QA on a PR
-----------------------------------

1. Assign the PR you are working on to yourself.

2. What does it do?

Read the description at the top of the PR, any release notes, documentation, and the original issue.

3. Does it address the issue it closes?

The PR should address the issue entirely unless otherwise noted.

4. How do you test it?

Look at the “how to test” section at the top of the pull request. Does it make sense? This likely won’t be the only testing you perform. You can develop further tests from the original issue or problem description, the description of functionality, the documentation, configuration, and release notes. Also consider trying to reveal bugs by breaking it: try bad or missing data, very large values or volumes of data, and exceed any limit or boundary.

5. Does it have or need documentation?

Small changes or fixes usually don’t have documentation, but new features, extensions of a feature, or new configuration options should.

6. Does it have or need release notes?

Same as for documentation: release notes are a heads-up to an admin about something of note, especially upgrade instructions, as needed.

7. Does it use a DB update (Flyway script)?

Good to know, since the script may collide with an existing one by version number, or it could be a one-way transform of your DB, so back up your test DB first. The script also runs during deployment, so be on the lookout for any issues there. (A quick pre-deployment collision check is sketched below.)
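
A quick collision check before deploying, as a sketch (the path is where Flyway migrations normally live in a Dataverse checkout):

.. code-block:: bash

   # List existing migration versions; two scripts with the same V-number collide.
   ls src/main/resources/db/migration/ | sort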

8. Validate the documentation.

Build the doc using Jenkins: does it build without errors? Read it through for sense. Use it for test cases and to understand the feature. (A local-build alternative is sketched below.)
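
If you want to check the guides locally rather than via Jenkins, a sketch assuming the repo’s standard Sphinx layout:

.. code-block:: bash

   cd doc/sphinx-guides
   pip install -r requirements.txt     # assumes the usual Sphinx requirements file
   sphinx-build -b html source build   # any build errors are reported on the console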

9. Build and deploy the pull request.

Normally this is done using Jenkins and automatically deployed to the QA test machine.

10. Configure if required

If configuration is required to operate the feature, and everyone installing or upgrading will use it, configure it now, since all testing will use it.

11. Smoke test the branch.

Standard, minimal test of core functionality.

12. Regression test related or potentially affected features

If the config is optional and you are testing with it turned off, do some spot checks / regression tests of related or potentially affected areas.

13. Configure if optional

What is the default, enabled or disabled? Is that clearly indicated? Test both.
By config here we mean enabling the functionality, as opposed to choosing a particular config option. Some complex features have config options in addition to enabling; those will also need to be tested. (A sketch of toggling a DB-based setting follows.)
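
For DB-based settings, toggling between the two states can be as simple as this sketch (``:ExampleSetting`` is a hypothetical placeholder, and it assumes the admin API is reachable on localhost):

.. code-block:: bash

   # Enable the (hypothetical) setting, test, then remove it and retest.
   curl -X PUT -d true http://localhost:8080/api/admin/settings/:ExampleSetting
   curl -X DELETE http://localhost:8080/api/admin/settings/:ExampleSetting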

14. Test all the new or changed functionality.

The heart of the PR: what is this PR adding or fixing? Is it all there and working?

15. Regression test related or potentially affected features.

Sometimes new work modifies or extends other functionality, or functionality that is shared with other aspects of the system, e.g. export or import. Check the underlying functionality that was also modified, but in a spot-check or briefer manner.

16. Report any issues found within the PR

It can be easy to lose track of what you have found, the steps to reproduce it, and any errors or stack traces from the server log. Add these as a numbered list in a comment on the PR; that makes them easier to work on and to check off when fixed. Attach large amounts of text, such as server log excerpts, as meaningfully named files.

17. Retest all fixes, spot check feature functionality, smoke test

This is similar to your initial testing, only narrower.

18. Test Upgrade Instructions, if required

Some features build upon the existing architecture but require modifications, such as adding a new column to the DB or changing or adding data. It is crucial that this works properly for our 100+ installations. This testing should be performed, at the least, on the prior version with basic data objects (collection, dataset, files) and any other data that will be updated by this feature. Using the sample data from the prior version, or deploying to dataverse-internal and upgrading there, would be a good test. Remember to back up your DB before doing a transformative upgrade so that you can repeat it later if you find a bug (see the sketch below).
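
A minimal backup-and-restore cycle for repeatable upgrade testing, as a sketch (the database name and user are assumptions; adjust to your environment):

.. code-block:: bash

   # Before the upgrade: snapshot the DB so the transform can be repeated.
   pg_dump -U dataverse -Fc dvndb > dvndb-pre-upgrade.dump

   # If you find a bug and need to rerun the upgrade from scratch:
   pg_restore -U dataverse -d dvndb --clean dvndb-pre-upgrade.dump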

19. Make sure the integration tests in the PR have been completed and passed.

They are run with each commit to the PR and take approximately 42 minutes to run.

20. Merge PR

Click merge to include this PR in the common develop branch.

21. Delete merged branch

Just a housekeeping move if the PR is from IQSS. Click the delete branch button where the merge button had been. There is no deletion for outside contributions.


Checklist for Completing QA on a PR
------------------------------------

1. Build the docs
2. Smoke test the PR
3. Test the new functionality
4. Regression test
5. Test any upgrade instructions

Checklist for QA on Release
---------------------------

1. Review Consolidated Release Notes, in particular upgrade instructions.
2. Conduct performance testing and compare with the previous release.
3. Perform clean install and smoke test.
4. Potentially follow the upgrade instructions. Though they have been performed incrementally for each PR, the full sequence may need checking.

4 changes: 2 additions & 2 deletions doc/sphinx-guides/source/qa/overview.rst
@@ -13,13 +13,13 @@ Workflow
--------
The basic workflow is that bugs or feature requests are submitted to GitHub by the community or by team members as issues. These issues are prioritized and added to a two-week sprint that is reflected on the GitHub Kanban board. As developers work on these issues, a GitHub branch is produced, code is contributed, and a pull request is made to merge these new changes back into the common develop branch and ultimately release them as part of the product. Before a pull request is merged, it must be reviewed by a member of the development team from a coding perspective and must pass automated integration tests before moving to QA. There it is tested manually, exercising the UI (using three common browser types) and any business logic it implements. Depending on whether the code modifies existing code or is completely new, a smoke test of core functionality and some basic regression testing of modified or related code are performed. Any documentation provided is used to understand the feature, and any assertions are tested. Once this passes and any bugs found are corrected, the automated integration tests are confirmed to be passing, the PR is merged into develop, the PR is closed, and the branch is deleted. At this point, the PR moves automatically from the QA column into the Done column, and the process repeats with the next PR until it is decided to make a release.

Release cadence and sprints
Release Cadence and Sprints
---------------------------
A release likely spans multiple two-week sprints. Each sprint represents the priorities for that time and is sized so that the team can reasonably complete most of the work on time. This is a goal to help with planning; it is not a strict requirement. Some issues from the previous sprint may remain and will likely be included in the next sprint, but occasionally they may be deprioritized and deferred to another time.

The decision to make a release can be based on the time since the last release, an important feature needed by the community, a contractual deadline, or some other logical reason to package the completed work into a named release and post it to the releases section on GitHub.

Performance testing and deployment
Performance Testing and Deployment
----------------------------------
The final testing activity before producing a release is performance testing. This could be done throughout the release cycle, but since it is time-consuming it is done once, near the end. Using a load-generating tool named Locust, it loads the statistically most-visited pages, which according to Google Analytics are 50% homepage and 50% some type of dataset page. Since dataset page weight also varies by the number of files, a selection of about 10 datasets with varying file counts is used. The pages are called randomly, as a guest user, with increasing levels of user load, from 1 user to 250 users. Typical daily loads in production are around the 50-user level. Though each simulated user does have a modest amount of random think time between calls, from 5-20 seconds (I believe), it is not a real-world load, so direct comparisons to production are not reliable. Instead, we compare performance to prior versions of the product; based on how those performed in production, we have some idea whether this version will perform similarly or whether there is some undetected issue that appears under load, such as inefficient or too many DB queries per page.
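
For reference, a headless Locust invocation along these lines, as a sketch (the locustfile name, ramp rate, and host are assumptions; the real test setup lives with the performance cluster):

.. code-block:: bash

   # Ramp up to 250 simulated users, 5 per second, against the test host.
   locust -f locustfile.py --headless -u 250 -r 5 --run-time 30m \
          --host https://dataverse-internal.iq.harvard.edu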

8 changes: 4 additions & 4 deletions doc/sphinx-guides/source/qa/performance-tests.rst
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
Performance testing
Performance Testing
===================

.. contents:: |toctitle|
@@ -12,11 +12,11 @@ Access
------
Access to performance cluster instances requires ssh keys; see Leonid. The cluster itself is normally not running, to reduce costs. To turn on the cluster, log on to the demo server and run the perfenv scripts from the centos default user dir. Access to the demo server requires an ssh key; see Leonid.

Special notes
-------------
Special Notes ⚠️
-----------------
Please note that the performance database is also used occasionally by Julian and the Curation team to generate prod reports, so a courtesy check with Julian would be good before taking over the env.

Executing the performance script
Executing the Performance Script
--------------------------------
To execute the performance test script, you need to install a local copy of the database-helper-scripts project (https://github.com/IQSS/dataverse-helper-scripts), written by Raman. I have since produced a stripped-down script that calls just the DB and ds and works with python3.

32 changes: 31 additions & 1 deletion doc/sphinx-guides/source/qa/test-automation-integration.rst
@@ -2,4 +2,34 @@ Test automation and integration test
====================================

.. contents:: |toctitle|
:local:
:local:

This test suite is added to and maintained by the development team. It is generally advisable for code contributors to add integration tests when adding new functionality. The approach here is one of code coverage: exercise as many of the code base’s code paths as possible, every time, to catch bugs.

This type of approach is often used to give contributing developers confidence that their code didn’t introduce any obvious, major issues, and it is run on each commit. Since it is a broad set of tests, it is not guaranteed that any specific, conceivable test is run, but it does add a lot of confidence that the code base is functioning, due to its reach and consistency.
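
Contributors can also run a single integration test class locally with Maven, for example (a sketch; it assumes a running, configured Dataverse instance for the API tests to hit, and ``DatasetsIT`` stands in for whichever test class is relevant):

.. code-block:: bash

   # Run one REST Assured integration test class against a running instance.
   mvn test -Dtest=DatasetsIT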

Building and Deploying a Pull Request from Jenkins to Dataverse-Internal:
-------------------------------------------------------------------------

1. Log on to GitHub, go to Projects > Dataverse to see the Kanban board, and select a pull request to test from the QA queue.

2. From the pull request page, click the copy icon next to the pull request branch name.

3. Log on to jenkins.dataverse.org, select the IQSS_Dataverse_Internal project, and configure the repository URL and branch specifier to match the ones from the pull request. For example:

- 8372-gdcc-xoai-library (the IQSS repo is implied)
| **Repository URL:** https://github.com/IQSS/dataverse.git
| **Branch specifier:** \*/8372-gdcc-xoai-library
- GlobalDataverseCommunityConsortium:GDCC/DC-3B
| **Repository URL:** https://github.com/GlobalDataverseCommunityConsortium/dataverse.git
| **Branch specifier:** \*/GDCC/DC-3B
4. Click Build Now and note the build number in progress.

5. Once complete, go to https://dataverse-internal.iq.harvard.edu and check that the deployment succeeded, and that the homepage displays the latest build number.

6. If for some reason it didn’t deploy, check the server.log file. It may just be a caching issue, so try undeploying, deleting the cache, restarting, and redeploying on the server, as in the command sketch below.
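
The redeploy sequence from the step above, spelled out (the version number is just this step’s example; the cache path and domain restart are assumptions based on a standard Payara layout):

.. code-block:: bash

   su - dataverse
   /usr/local/payara5/bin/asadmin list-applications
   /usr/local/payara5/bin/asadmin undeploy dataverse-5.11.1
   /usr/local/payara5/bin/asadmin stop-domain
   rm -rf /usr/local/payara5/glassfish/domains/domain1/generated   # assumed cache location
   /usr/local/payara5/bin/asadmin start-domain
   /usr/local/payara5/bin/asadmin deploy /tmp/dataverse-5.11.1.war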

7. If that didn’t work, you may have run into a Flyway DB script collision error, but that should be indicated in server.log.

8. Assuming the above steps worked, and they should 99% of the time, test away! Note: be sure to run ``tail -F server.log`` in a terminal window while you are doing any testing. This way you can spot problems that may not appear in the UI and have easier access to any stack traces for easier reporting.
6 changes: 3 additions & 3 deletions doc/sphinx-guides/source/qa/testing-infrastructure.rst
@@ -1,16 +1,16 @@
Infrastructure for testing
Infrastructure for Testing
==========================

.. contents:: |toctitle|
:local:


Dataverse internal
Dataverse Internal
-------------------
To build and test a PR, we use a build named IQSS_Dataverse_Internal on jenkins.dataverse.org, which deploys the .war file to an AWS instance named dataverse-internal.iq.harvard.edu.
Login to Jenkins requires a username and password; check with Don Sizemore. Login to the dataverse-internal server requires a key; see Leonid.

Guides server
Guides Server
-------------
There is also a guides build project named guides.dataverse.org. Any test builds of the guides are deployed to a named directory on guides.dataverse.org and can be found and tested by going to the existing guides, removing the part of the URL that contains the version, and browsing the resulting directory listing for the latest change.
Login to the guides server requires a key; see Don Sizemore.
2 changes: 0 additions & 2 deletions doc/sphinx-guides/source/qa/tips-tricks.rst

This file was deleted.

7 changes: 0 additions & 7 deletions doc/sphinx-guides/source/qa/workflow-qa-pr.rst

This file was deleted.
