Release 2.2rc1 #639
Merged
Conversation
* Don't skip immediately if a matching tumor or normal cannot be found (see the sketch below).
* First, check whether the tumor is in the run and its workflow group is 'research'; if there are no matching research normals, check for a matching clinical normal.
* Second, check whether the normal is in the run and its workflow is 'clinical'; if there are no matching clinical tumors, check for a matching research tumor.
* Get a unique list of jobs in the event that a clinical normal and a research tumor are on the same run.
* Added a filter for library ids in get_workflows_by_subject_id. We don't want to wait for libraries that might not be involved in this pairing.
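A minimal sketch of the fallback pairing described above. The metadata record shape, attribute names and the `resolve_pairs` helper are illustrative assumptions, not the portal's actual service layer.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class LibraryMeta:
    library_id: str
    subject_id: str
    phenotype: str  # "tumor" or "normal"
    workflow: str   # "research" or "clinical"

def resolve_pairs(in_run: List[LibraryMeta], by_subject: List[LibraryMeta]) -> List[Tuple[LibraryMeta, LibraryMeta]]:
    """Pair tumor/normal libraries, falling back across workflow groups.

    - research tumor in the run with no research normal -> try a clinical normal
    - clinical normal in the run with no clinical tumor  -> try a research tumor
    Pairs are collected in a set so a clinical normal and a research tumor on the
    same run yield a single, de-duplicated job.
    """
    pairs = set()
    for lib in in_run:
        if lib.phenotype == "tumor" and lib.workflow == "research":
            normals = [m for m in by_subject if m.phenotype == "normal" and m.workflow == "research"]
            if not normals:  # no research normals -> fall back to clinical normals
                normals = [m for m in by_subject if m.phenotype == "normal" and m.workflow == "clinical"]
            pairs.update((lib, n) for n in normals)
        elif lib.phenotype == "normal" and lib.workflow == "clinical":
            tumors = [m for m in by_subject if m.phenotype == "tumor" and m.workflow == "clinical"]
            if not tumors:  # no clinical tumors -> fall back to research tumors
                tumors = [m for m in by_subject if m.phenotype == "tumor" and m.workflow == "research"]
            pairs.update((t, lib) for t in tumors)
    return sorted(pairs, key=lambda p: (p[0].library_id, p[1].library_id))
```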
* Bumped Node to v18
* Improved README
Upgraded to yarn v3 berry
* Added Prism dynamic faker config to control the deterministic return of the nextPageToken field
* Added v2 endpoints
* Synced the latest endpoint definitions from libica
* Fixed the Prism container image platform
Improved ICA endpoint mocking
* Added scanners - ggshield, trufflehog
Reinforced DevSecOps
Bumped dependencies
…d-type-tn-pairing
Updates by file:
* Added abstract workflow type DRAGEN_WGTS_QC.
* Added labmetadata rule method must_be_wgts.
* Handle WTS data and add an enable_rna option: true for WTS data, false for WGS data (see the sketch below).
* Set the batcher runstep parameter to the DRAGEN_WGTS_QC abstract workflow type.
* Replaced the must_be_wgs labmetadata rule with must_be_wgts.
* Updated the warning to use the DRAGEN_WGTS_QC abstract workflow type.
DRAGEN WTS now happens after QC is complete. Instead of performing dragen_wts after bclconvert, we first run dragen_wgts_qc (which now includes dragen_wts samples).
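A hedged sketch of the rule and option wiring described above; the names mirror the commit message but the bodies are illustrative only.

```python
# Illustrative only: a metadata rule accepting both WGS and WTS library types,
# and an enable_rna flag derived from the type (True for WTS, False for WGS).
WGTS_TYPES = {"WGS", "WTS"}

class LabMetadataRuleError(ValueError):
    """Raised when a library fails a labmetadata rule check."""

def must_be_wgts(meta_type: str) -> str:
    """Replacement for the old must_be_wgs rule: accept WGS or WTS types."""
    if meta_type.upper() not in WGTS_TYPES:
        raise LabMetadataRuleError(f"'{meta_type}' is not a WGS/WTS library type")
    return meta_type

def enable_rna(meta_type: str) -> bool:
    """True for WTS (RNA) data, False for WGS (DNA) data."""
    return must_be_wgts(meta_type).upper() == "WTS"
```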
## workflow.py
* Added dragen_wts_qc and dragen_wgts_qc to the from_value class method (sketched below).
## dragen_wgs_qc.py
* Moved the labmetadata import to below the try / except statement.
## orchestrator.py
* Split out WGS_QC and WTS_QC notification handling.
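A minimal sketch of a `from_value` class method on a workflow-type enum, assuming string values like the ones added in this release; the actual enum in workflow.py may differ.

```python
from enum import Enum

class WorkflowType(Enum):
    DRAGEN_WGS_QC = "dragen_wgs_qc"
    DRAGEN_WTS_QC = "dragen_wts_qc"
    DRAGEN_WGTS_QC = "dragen_wgts_qc"  # abstract type covering both QC variants

    @classmethod
    def from_value(cls, value: str) -> "WorkflowType":
        """Resolve an enum member from its string value, e.g. 'dragen_wts_qc'."""
        for member in cls:
            if member.value == value.lower():
                return member
        raise ValueError(f"No WorkflowType with value '{value}'")
```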
* Fixed tumour / tumor typo
* Added an assertion that the job list is zero for a clinical tumor and research normal
* Updated logging: it should be pairing a research tumor with a clinical normal rather than the other way around
* Arriba overrides are now in their own function in the WTS lambda file (see the sketch below).
* Added a test to ensure that the overrides dict is as expected.
* Use enums for ICA Resource and Size.
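A sketch of splitting the Arriba resource overrides into their own function with a guarding test. The enum names, override keys and step id are hypothetical stand-ins, not ICA's actual schema.

```python
from enum import Enum

class IcaResource(Enum):  # hypothetical enum; stands in for the real ICA Resource enum
    HIMEM = "himem"

class IcaSize(Enum):      # hypothetical enum; stands in for the real ICA Size enum
    LARGE = "large"

def get_arriba_overrides() -> dict:
    """Build the Arriba step resource overrides in one place."""
    return {
        "#arriba_fusion_step": {  # hypothetical step id for illustration
            "requirements": {
                "resources": {
                    "type": IcaResource.HIMEM.value,
                    "size": IcaSize.LARGE.value,
                }
            }
        }
    }

def test_get_arriba_overrides():
    overrides = get_arriba_overrides()
    assert overrides["#arriba_fusion_step"]["requirements"]["resources"] == {
        "type": "himem",
        "size": "large",
    }
```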
DRAGEN WTS now runs after dragen_wgts_qc. Use logic similar to the t/n runs (which work off a metadata list) rather than a batcher (as used in the QC runs). A condensed sketch of the new flow follows this message. Updates by file:
## orchestration/dragen_wts_step
### perform function
* Updated the perform function to check whether all dragen_wgts_qc workflows have finished for a given sequencing run.
* Invoke prepare_dragen_wts_jobs with a metadata list instead of a batcher object.
### prepare dragen wts jobs function
* Use similar (but simpler) logic to the preparation of t/n workflows.
* Collect fastq list rows by the library id input (grouped by library-id / subject-id), and create the job dict with just the subject id, library id and fastq list rows object.
* Rather than use a labmetadata rule, we use a queryset (see the metadata_srv changes for more info).
## orchestration/tumor_normal_step
* Moved the handle rerun function to liborca.
* Also fixed the return in its docstring.
## tools/liborca
* Collected the handle rerun function from the tumor normal step.
## services/metadata_srv
* Created a queryset for collecting WTS metadata by the WTS QC runs.
## pipeline/lambdas/dragen_wts
* Handle the new job input.
* Removed all references to sequence_run and batch_run.
* Assume the subject id is in the event dict.
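A condensed sketch of the shape of the new flow (wait for all WGTS QC workflows, then build one job per library from a metadata list). The service-layer objects are injected as hypothetical stubs; the real perform function and its collaborators carry more detail.

```python
# Illustrative shape of the new dragen_wts step; workflow_srv, metadata_srv,
# fastq_srv and wts_lambda are stand-ins for the real service-layer modules.
def perform(this_workflow, workflow_srv, metadata_srv, fastq_srv, wts_lambda):
    # 1. Only proceed once every DRAGEN_WGTS_QC workflow for this sequencing
    #    run has finished.
    running_qc = workflow_srv.get_running_wgts_qc_by_sequence_run(this_workflow.sequence_run)
    if running_qc:
        return {"message": "Waiting for DRAGEN_WGTS_QC workflows to complete"}

    # 2. Collect WTS metadata from the finished QC runs via a queryset
    #    (see the metadata_srv change), instead of a batcher object.
    meta_list = metadata_srv.get_wts_metadata_by_wts_qc_runs(this_workflow.sequence_run)
    return prepare_dragen_wts_jobs(meta_list, fastq_srv, wts_lambda)

def prepare_dragen_wts_jobs(meta_list, fastq_srv, wts_lambda):
    jobs = []
    for meta in meta_list:
        # Fastq list rows are collected per library id (grouped by library/subject).
        fastq_list_rows = fastq_srv.get_fastq_list_rows_by_library_id(meta.library_id)
        jobs.append({
            "subject_id": meta.subject_id,   # subject id is now assumed present in the event
            "library_id": meta.library_id,
            "fastq_list_rows": fastq_list_rows,
        })
    return [wts_lambda.invoke(job) for job in jobs]
```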
…c-pipeline Resolved merge conflicts from arriba mem update
Bumped dependencies
Added LabMetadata processing for 2018 and 2017 sheets
Bumped dependencies
* Added sash_step module hook to the orchestrator
* Improved the lookup of the oncoanalyser output directory in liborca
* Removed a duplicate service layer method for workflow by portal_run_id
* Refactored portal_run_id parsing from a path element into liborca (see the sketch below)
* Refactored the TestConstant portal_run_id(s) stub
* Improved related unit tests
Implemented SASH automation
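A hedged sketch of pulling a portal_run_id out of an output-path element, as the liborca refactor centralises; both the path layout and the id format (date prefix plus hex suffix) are assumptions for illustration.

```python
import re
from typing import Optional

# Assumed format: 8-digit date followed by 8 hex characters, e.g. 20231026abcdef12
PORTAL_RUN_ID_PATTERN = re.compile(r"^\d{8}[0-9a-f]{8}$")

def parse_portal_run_id_from_path(path: str) -> Optional[str]:
    """Return the first path element that looks like a portal_run_id, else None."""
    for element in path.strip("/").split("/"):
        if PORTAL_RUN_ID_PATTERN.match(element):
            return element
    return None

# Hypothetical output path for illustration only
assert parse_portal_run_id_from_path(
    "s3://bucket/analysis_data/SBJ00001/sash/20231026abcdef12/"
) == "20231026abcdef12"
```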
* Replaced with WTS QC BAM for Holmes fingerprint extraction
Removed transcriptome BAM fingerprinting from orchestrator
…Failures
* This feature allows a FIFO queue's Lambda trigger to partially process the messages in a batch while still keeping exception-raised events in the queue. For example, with 10 messages in total where 6 process successfully and 4 raise exceptions, the 6 are cleared from the queue and the 4 are left in it, instead of retrying the whole batch. A minimal handler sketch follows this message. See docs:
  * https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting
  * https://repost.aws/knowledge-center/lambda-sqs-report-batch-item-failures
* co-co: @reisingerf
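A minimal handler sketch showing the partial-batch response format from the AWS docs linked above: failed message ids are reported back in `batchItemFailures` so only those messages stay on the queue. `process_message` is a stand-in for the real per-record handling.

```python
import json

def process_message(message: dict) -> None:
    """Stand-in for the real per-record handler; raises on failure."""
    _ = json.loads(message["body"])

def sqs_handler(event: dict, context=None) -> dict:
    """SQS Lambda handler with ReportBatchItemFailures enabled on the trigger."""
    batch_item_failures = []
    for message in event.get("Records", []):
        try:
            process_message(message)
        except Exception:
            # Only this message is retried; successfully processed messages are
            # removed from the queue instead of retrying the whole batch.
            batch_item_failures.append({"itemIdentifier": message["messageId"]})
    # Must be returned as a proper Lambda response (dict); otherwise the whole
    # batch is treated as a failure.
    return {"batchItemFailures": batch_item_failures}
```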
* Prohibit initialising DRAGEN_WGTS_QC from the SecondaryAnalysisHelper constructor (sketched below)
* Use the WGTS label for the aggregated QC batch notification
* Improved unit tests and exception handling
Improved QC implementation and reinforced unit testing
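A sketch of the constructor guard described above; the real SecondaryAnalysisHelper carries more configuration than shown, and the abstract-type check here is illustrative only.

```python
class SecondaryAnalysisHelper:
    # DRAGEN_WGTS_QC is an abstract grouping type, not a launchable workflow.
    _ABSTRACT_TYPES = {"dragen_wgts_qc"}

    def __init__(self, workflow_type: str):
        if workflow_type.lower() in self._ABSTRACT_TYPES:
            raise ValueError(
                f"'{workflow_type}' is an abstract workflow type and cannot be "
                f"used to initialise SecondaryAnalysisHelper"
            )
        self.workflow_type = workflow_type
```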
Improved SQS Lambda handler partial batch responses - ReportBatchItemFailures
* If the SQS handler is configured with `ReportBatchItemFailures`, then it has to return a proper Lambda response format such as a dict or list. Otherwise, it gets treated as a Lambda failure and put into the DLQ. Read the "Success and failure conditions" clause of the doc: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting
* Related #632
* And reinforced with unit test guarding
Improved ICA ENS and AWS Batch SQS Lambda handlers to respond with a dict
Fixed QC workflow trigger using the wrong workflow type
* Simplified star alignment metadata lookup with ICA CWL through a metadata_srv layer method
* Improved metadata_srv documentation
* Reinforced unit test cases
Improved star alignment step implementation
* Reinforced the business logic lookup condition for running workflows by sequence run: the `end` time must be null and `end_status` must be `Running` for a workflow to be evaluated as ongoing (see the sketch below).
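A hedged Django-ORM sketch of that condition, assuming a Workflow model with sequence_run, end and end_status fields; the actual module path and field names in the service layer may differ.

```python
# Hypothetical model import; the portal's actual module path may differ.
from data_portal.models import Workflow

def get_running_by_sequence_run(sequence_run):
    """A workflow counts as ongoing only while `end` is null AND `end_status` is 'Running'."""
    return Workflow.objects.filter(
        sequence_run=sequence_run,
        end__isnull=True,
        end_status="Running",
    )
```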
Disabled ReportBatchItemFailures response for SQS Standard queue
Improved workflow service layer running workflow lookup
Bumped dependencies
reisingerf approved these changes on Oct 26, 2023
Let's gooo!
Merging...
Release 2.2rc1 for Staging