diff --git a/README.md b/README.md index cf151281e..606380948 100644 --- a/README.md +++ b/README.md @@ -13,62 +13,9 @@ NHSNLink is an open-source reference implementation for CDC’s National Healthcare Safety Network (NHSN) reporting. It is an application that aggregates, transforms, evaluates, validates and submits patient-level clinical data for patients matching NHSN surveillance requirements. It is based on a event driven micro service architecture using C#, Java, Kafka and other technologies. NHSNLink is designed to handle large-scale data processing efficiently. It leverages streaming technologies that can be configured to continuously query and evaluate patient data throughout the reporting cycle, rather than waiting until the end to initiate this process. -## Link Cloud Services - -### Tenant - -The Tenant service is the entry point for configuring a tenant into Link Cloud. The service is responsible for maintaining and generating events for the scheduled measure reporting periods that the tenant is configured for. These events contain the initial information needed for Link Cloud to query resources and perform measure evaluations based on a specific reporting period. - -### Census - -The Census service is primarily responsible for maintaining a tenants admit and discharge patient information needed to determine when a patient is ready for reporting. To accomplish this, the Census service has functionality in place to request an updated FHIR List of recently admitted patients. The frequency that the request is made is based on a Tenant configuration made in the Census service. - -### Query Dispatch - -The Query Dispatch service is primarily responsible for applying a lag period prior to making FHIR resource query requests against a facility endpoint. The current implementation of the Query Dispatch service handles how long Link Cloud should wait before querying for a patient’s FHIR resources after being discharged. To ensure that the encounter related data for the patient has been settled (Medications have been closed, Labs have had their results finalized, etc), tenants are able to customize how long they would like the lag from discharge to querying to be. - -### Data Acquisition - -The Data Acquistion service is responsible for connecting and querying a tenant's endpoint for FHIR resources that are needed to evaluate patients for a measure. For Epic installations, Link Cloud is utilizing the STU3 Patient List ([Link Here](https://fhir.epic.com/Specifications?api=879)) resource to inform which patients are currently admitted in the facility. While this is the current solution to acquiring the patient census, there are other means of patient acquisition being investigated (ADT V2, Bulk FHIR) to provide universal support across multiple EHR vendors. - -### Normalization - -FHIR resources queried from EHR endpoints can vary from location to location. There will be occasions where data for specific resources may need to be adjusted to ensure that Link Cloud properly evaluates a patient against dQM’s. The Normalization service is a component in Link Cloud to help make those adjustments in an automated way. The service operates in between the resource acquisition and evaluation steps to ensure that the tenant data is in a readied state for measure evaluation. 
- -### Measure Eval - -The Measure Eval service is a Java based application that is primarily responsible for evaluating bundles of acquired patient resources against the measures that Link Cloud tenants are configured to evaluate with. The service utilizes the CQF framework ([Link Here](https://github.com/cqframework/cqf-ruler)) to perform the measure evaluations. - -### Report - -The Report service is responsible for persisting the Measure Reports and FHIR resources that the Measure Eval service generates after evaluating a patient against a measure. When a tenant's reporting period end date has been met, the Report Service performs various workflows to determine if all of the patient MeasureReports are accounted for that period prior to initiating the submission process. - -### Submission -The Submission service is responsible for packaging a tenant's reporting content and submitting them to a configured destination. Currently, the service only writes the submission content to its local file store. The submission package for a reporting period includes the following files: - -| File | Description | Multiple Files? | -| ---- | ---- | ---- | -| Aggregate | A [MeasureReport](https://hl7.org/fhir/R4/measurereport.html) resource that contains references to each patient evaluation for a specific measure | Yes, one per measure | -| Patient List | A [List](https://hl7.org/fhir/R4/list.html) resource of all patients that were admitted into the facility during the reporting period | No | -| Device | A [Device](https://hl7.org/fhir/R4/device.html) resource that details the version of Link Cloud that was used | No | -| Organization | An [Organization](https://hl7.org/fhir/R4/organization.html) resource for the submitting facility | No | -| Other Resources | A [Bundle](https://hl7.org/fhir/R4/bundle.html) resource that contains all of the shared resources (Location, Medication, etc) that are referenced in the patient Measure Reports | No | -| Patient | A [Bundle](https://hl7.org/fhir/R4/bundle.html) resource that contains the MeasureReports and related resources for a patient | Yes, one per evaluated patient | - -An example of the submission package can be found at `\link-cloud\Submission Example`. - -### Account - -The Account service is responsible for maintaining roles and permissions for Link Cloud users. - -### Audit - -The Audit service is responsible for persisting auditable events that are generated by the Link Cloud services. - -### Notification - -The Notification service is responsible for emailing configured users when a notifiable event occurs when the Link Cloud services attempt to perform their work. +## Documentation +Documentation on Link's implementation and the functionality it supports can be found [here](docs/README.md). ## Helpful Tools - SQL Server Management Studio: [Link Here](https://learn.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver16) @@ -127,7 +74,7 @@ The initial building of the services will take a few minutes to run. 5. Open a web browser and access Kafka UI. By default, the page can be accessed at `http://localhost:9095`. 
Click the `Topics` tab and ensure that Kafka topics exist: - +Kafka UI showing Link topics If there aren't any topics populated (shown in the image above), attempt to rerun the following command: `docker compose up kafka_init -d` @@ -256,7 +203,7 @@ Test-Hospital At the end of the reporting period, the Report service will make additional requests to query and evaluate patients that are currently admitted in the facility prior to submitting. After each of those admitted patients are evaluated, the Report service will then produce a `SubmitReport` event to inform the Submission service that a report is complete. To access the submission package open Docker Desktop and click the `link-submission` container. Select the `files` tab and navigate to the `app\submissions` folder. There, you'll be able to download the submission results for the reporting period: - +Docker Desktop UI showing submissions folder ## Reporting Event Workflow > [!NOTE] @@ -266,13 +213,13 @@ Detailed below are the steps Link Cloud takes to generate a report for a tenant ### Report Scheduling - +UML diagram showing events for report scheduling At the beginning of a new reporting period, the Tenant service produces a `ReportScheduled` event. The Query Dispatch and Report services consume and persist the reporting information in the event into their databases. The Report service sets an internal cron job (based on the EndDate of the consumed event) to execute the work needed to complete the report. ### Census Acquisition and Discharge - +UML diagram showing events for census of patients During the reporting period, the Census service is configured to continually request a new list of patients admitted in a facility by producing the `CensusAcquisitionScheduled` event. The Data Acquisition service consumed this event and queries the facility's List endpoint. After receiving a response back from the EHR endpoints, the Data Acquisition service then produces a `PatientIDsAcquired` event that contains a list of all patients that are currently admitted in the facility. @@ -287,7 +234,7 @@ The QueryDispatch service consumes the patient events and appends the tenants' r ### Resource Acquisition and Evaluation - +UML diagram showing events for data acquisition and evaluation A `DataAcquisitionRequested` event is generated for patients that have either been discharged or are still admitted when the reporting period end date is met. This event is the trigger that causes the resource acquisition, normalization and evaluation phases for a patient. diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 000000000..5fdfd2383 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,26 @@ + +## Overview + +This page and its references include documentation for Link's services and the functionality those services supports. 
+ +## Functionality + +* [Census Management](functionality/census_management.md) +* Submission + * [Folder Structure](functionality/submission_folder.md) + +## Service Specifications + +* [Account](service_specs/account.md) +* [Admin UI](service_specs/admin_ui.md) +* [Audit](service_specs/audit.md) +* [Backend For Frontend (BFF)](service_specs/bff.md) +* [Census](service_specs/census.md) +* [Data Acquisition](service_specs/data_acquisition.md) +* [Measure Evaluation](service_specs/measure_eval.md) +* [Normalization](service_specs/normalization.md) +* [Notification](service_specs/notification.md) +* [Query Dispatch](service_specs/query_dispatch.md) +* [Report](service_specs/report.md) +* [Submission](service_specs/submission.md) +* [Tenant](service_specs/tenant.md) \ No newline at end of file diff --git a/docs/functionality/census_management.md b/docs/functionality/census_management.md new file mode 100644 index 000000000..e34a833bf --- /dev/null +++ b/docs/functionality/census_management.md @@ -0,0 +1,45 @@ +[← Back Home](../README.md) + +## Overview + +Link includes a **Census Management** module designed to acquire, evaluate, and maintain a real-time census of patients actively or recently treated within a hospital system. This module supports the submission of required patient data to governing bodies (such as NHSN) per established reporting criteria. + +The system routinely gathers census data by leveraging multiple methods compatible with hospital EHRs. This document provides an overview of each acquisition method, the data elements tracked, and future considerations for expanding data persistence. + +## Census Data Acquisition Methods + +### 1. FHIR Standard - List Resource + +The **FHIR List Resource** method is one of the primary approaches for acquiring patient lists from hospital EHR systems. In systems like Epic, patient lists are generated through proprietary queries within the EHR, associated with the FHIR List resource for access through FHIR integrations. + +- **Endpoint**: `GET /List/` +- **Functionality**: Queries the EHR for patient lists identified by a `listId`. +- **Tenant Configurability**: The `listId` is configurable per tenant within Link, allowing each institution to define the patient population that constitutes their census. +- **Applicability**: This method supports census management for any EHR that utilizes FHIR Lists representing relevant patient groups. + +### 2. FHIR Standard - Bulk FHIR Implementation Guide + +The **Bulk FHIR** method allows Link to acquire patient data via batch processing, useful for large patient groups in systems that support flexible querying. + +- **Endpoint**: Bulk FHIR `$export` request with `groupId` +- **Process**: + - Link initiates a `$export` request for patient data by `groupId`. + - The export process is monitored and polled routinely until completion. + - Upon completion, patient resources are retrieved, and the FHIR ID of each patient is extracted and stored. +- **Tenant Configurability**: Each tenant configures a unique `groupId` corresponding to their desired patient group. + +### 3. ADT Feeds (Under Exploration) + +Link is evaluating the feasibility of using **ADT feeds**—specifically for admission, discharge, and cancellation events—to dynamically manage census data. This approach would offer real-time census updates and potentially enhance the accuracy and responsiveness of the census. 
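The Bulk FHIR method above is essentially a kick-off/poll/download loop. Below is a minimal C# sketch of that flow using `HttpClient`; the `AcquirePatientIdsAsync` helper, the `baseUrl`/`groupId` parameters, and the 30-second polling interval are illustrative assumptions (authentication, retries, and error handling are omitted), not the Data Acquisition service's actual implementation.

```csharp
// Illustrative sketch only: kick off a Bulk FHIR $export for a patient group,
// poll the status endpoint, then extract each Patient's FHIR ID from the NDJSON output.
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class BulkFhirCensusSketch
{
    public static async Task<IReadOnlyList<string>> AcquirePatientIdsAsync(
        HttpClient http, string baseUrl, string groupId)
    {
        // 1. Kick off the export for the tenant-configured group (Patient resources only).
        var kickOff = new HttpRequestMessage(HttpMethod.Get,
            $"{baseUrl}/Group/{groupId}/$export?_type=Patient");
        kickOff.Headers.Add("Prefer", "respond-async");
        kickOff.Headers.Add("Accept", "application/fhir+json");
        var kickOffResponse = await http.SendAsync(kickOff);
        var statusUrl = kickOffResponse.Content.Headers.ContentLocation
                        ?? kickOffResponse.Headers.Location;

        // 2. Poll the status endpoint until the server reports completion (202 -> 200).
        HttpResponseMessage status;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(30)); // polling interval is an assumption
            status = await http.GetAsync(statusUrl);
        } while (status.StatusCode == HttpStatusCode.Accepted);

        // 3. Read the completion manifest and pull each Patient NDJSON file it references.
        var patientIds = new List<string>();
        using var manifest = JsonDocument.Parse(await status.Content.ReadAsStringAsync());
        foreach (var output in manifest.RootElement.GetProperty("output").EnumerateArray())
        {
            var ndjson = await http.GetStringAsync(output.GetProperty("url").GetString());
            foreach (var line in ndjson.Split('\n', StringSplitOptions.RemoveEmptyEntries))
            {
                using var resource = JsonDocument.Parse(line);
                // 4. Persist only the FHIR ID, consistent with the data-persistence policy below.
                patientIds.Add(resource.RootElement.GetProperty("id").GetString()!);
            }
        }
        return patientIds;
    }
}
```

In Link, this flow is driven by each tenant's configured `groupId` and the CRON-based schedule described in the next section.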
+ +## Scheduling and Frequency + +Both the FHIR List and Bulk FHIR methods utilize a configurable scheduling system, allowing each tenant to define query intervals. The scheduling system is based on CRON patterns, ensuring Link can automatically query the EHR at specified times to maintain an updated census. + +## Data Persistence and Tracking + +Currently, Link persists only the **FHIR ID** of each patient. This identifier allows Link to accurately track patients across census updates without storing additional demographic information. + +### Future Considerations + +In the interest of enhancing the user interface, Link is considering storing additional data elements, such as the **patient name** associated with each FHIR ID. This would provide users with meaningful patient identifiers, facilitating easier navigation and record management within the Link UI. \ No newline at end of file diff --git a/docs/functionality/submission_folder.md b/docs/functionality/submission_folder.md new file mode 100644 index 000000000..f5e296d20 --- /dev/null +++ b/docs/functionality/submission_folder.md @@ -0,0 +1,27 @@ +[← Back Home](../README.md) + +## Folder Submission - Link Documentation + +### Naming Convention + +Format: `----` + +Example: `1234-NHSNdQMAcuteCareHospitalInitialPopulation+NHSNGlycemicControlHypoglycemicInitialPopulation-20240401-20240430-20240503T084523` + +### Files + +- `_manifest.json` +- `submitting-device.json` - The submitting Device (always “NHSNLink” for reports generated by NHSNLink) +- `submitting-org.json` - The submitting Organization +- `census.json` +- `query-plan.yml` +- `aggregate-NHSNdQMAcuteCareHospitalInitialPopulation.json` - a MeasureReport +- `aggregate-NHSNGlycemicControlHypoglycemicInitialPopulation.json` - a MeasureReport +- `patient-X.json` - X is the patient’s Patient.id and is a Bundle of: + - individual MeasureReport for ACH + - individual MeasureReport for HYPO + - Patient/X + - Condition/X, etc. 
+- `shared-resources.json` - a Bundle of shared resources, such as Medication and Location +- `validation-results.json` - an OperationOutcome of raw validation results for all data in the submission +- `validation-report.html` - categorized validation results for all data in the submission in a renderable HTML \ No newline at end of file diff --git a/Documentation/Images/docker_submission.png b/docs/images/docker_submission.png similarity index 100% rename from Documentation/Images/docker_submission.png rename to docs/images/docker_submission.png diff --git a/Documentation/Images/readme_event_acquisition_evaluation.png b/docs/images/readme_event_acquisition_evaluation.png similarity index 100% rename from Documentation/Images/readme_event_acquisition_evaluation.png rename to docs/images/readme_event_acquisition_evaluation.png diff --git a/Documentation/Images/readme_event_census_discharge.png b/docs/images/readme_event_census_discharge.png similarity index 100% rename from Documentation/Images/readme_event_census_discharge.png rename to docs/images/readme_event_census_discharge.png diff --git a/Documentation/Images/readme_event_report_scheduling.png b/docs/images/readme_event_report_scheduling.png similarity index 100% rename from Documentation/Images/readme_event_report_scheduling.png rename to docs/images/readme_event_report_scheduling.png diff --git a/Documentation/Images/readme_kafkaui.png b/docs/images/readme_kafkaui.png similarity index 100% rename from Documentation/Images/readme_kafkaui.png rename to docs/images/readme_kafkaui.png diff --git a/docs/service_specs/account.md b/docs/service_specs/account.md new file mode 100644 index 000000000..293d53ca1 --- /dev/null +++ b/docs/service_specs/account.md @@ -0,0 +1,53 @@ +[← Back Home](../README.md) + +## Account Overview + +The Account service is responsible for maintaining roles and permissions for Link Cloud users. + +- **Technology**: .NET Core +- **Image Name**: link-account +- **Port**: 8080 +- **Database**: MSSQL (previously Postgres) + +## Environment Variables + +| Name | Value | Secret? | +|------------------------------------------|---------------------------------|---------| +| ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Config + +### Kafka Connection + +| Name | Value | Secret? | +|--------------------------------------|---------------------------|----------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | Account | No | + +### Redis + +| Name | Value | Secret? | +|--------------------------|---------------------------|--------| +| ConnectionStrings__Redis | `` | Yes | +| Cache__Enabled | true/false | No | + +### Database Settings (MSSQL) + +| Name | Value | Secret? | +|---------------------------------------|-----------------------|----------| +| ConnectionStrings__DatabaseConnection | `` | Yes | + +### Tenant API Settings + +| Name | Value | Secret? 
| +|-----------------------------------------------|-------------------------------------|---------| +| TenantApiSettings__TenantServiceBaseEndpoint | `/api` | No | + +## Consumed Events + +- **NONE** + +## Produced Events + +- **Event**: `AuditableEventOccurred` \ No newline at end of file diff --git a/docs/service_specs/admin_ui.md b/docs/service_specs/admin_ui.md new file mode 100644 index 000000000..47bcf4b08 --- /dev/null +++ b/docs/service_specs/admin_ui.md @@ -0,0 +1,28 @@ +[← Back Home](../README.md) + +## Admin UI + +> ⚠️ **Note:** This service is currently called "demo app" and is planned to be renamed. + +## Admin UI Overview + +- **Technology**: JavaScript (TypeScript) & Angular +- **Image Name**: link-admin-ui +- **Port**: 80 +- **Database**: NONE +- **Scale**: 0-5 + +## Volumes + +| Volume | Mount Path | Sub-path | +|-------------------------------|------------------------------------------------------|-------------------------| +| Azure Storage Account | `/usr/share/nginx/html/assets/app.config.local.json` | `app.config.local.json` | + +## app.config.local.json + +```json +{ + "baseApiUrl": "/api", + "idpIssuer": "https://oauth.nhsnlink.org/realms/NHSNLink", + "idpClientId": "link-botw" +} \ No newline at end of file diff --git a/docs/service_specs/audit.md b/docs/service_specs/audit.md new file mode 100644 index 000000000..3707e77d3 --- /dev/null +++ b/docs/service_specs/audit.md @@ -0,0 +1,35 @@ +[← Back Home](../README.md) + +## Audit Overview + +The Audit service is responsible for persisting auditable events that are generated by the Link Cloud services. + +- **Technology**: .NET Core +- **Image Name**: link-audit +- **Port**: 8080 +- **Database**: MSSQL + +## Environment Variables + +| Name | Value | Secret? | +|---------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Config + +### Kafka Connection + +| Name | Value | Secret? | +|---------------------------------------------------|--------------------------|---------| +| Link__Audit__KafkaConnection__BootstrapServers__0 | `` | No | +| Link__Audit__KafkaConnection__GroupId | audit-events | No | +| Link__Audit__KafkaConnection__ClientId | audit-events | No | + +## Consumed Events + +- **AuditableEventOccurred** + +## Produced Events + +- **NONE** diff --git a/docs/service_specs/bff.md b/docs/service_specs/bff.md new file mode 100644 index 000000000..69506de92 --- /dev/null +++ b/docs/service_specs/bff.md @@ -0,0 +1,42 @@ +[← Back Home](../README.md) + +> ⚠️ **Note:** This service is planned to be renamed to "admin-bff". + +## Backend for Frontend (BFF) Overview + +- **Technology**: .NET Core +- **Image Name**: link-bff +- **Port**: 8080 +- **Database**: NONE +- **Scale**: 0-3 + +## Environment Variables + +| Name | Value | Secret? | +|---------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Service Endpoints + +| Name | Value | Secret? 
| +|----------------------------------------------|--------------------------------|----------| +| GatewayConfig__KafkaBootstrapServers__0 | `` | No | +| GatewayConfig__AuditServiceApiUrl | ` (without /api)` | No | +| GatewayConfig__NotificationServiceApiUrl | ` (without /api)` | No | +| GatewayConfig__TenantServiceApiUrl | ` (without /api)` | No | +| GatewayConfig__CensusServiceApiUrl | ` (without /api)` | No | +| GatewayConfig__ReportServiceApiUrl | ` (without /api)` | No | +| GatewayConfig__MeasureServiceApiUrl | ` (without /api)` | No | + +### Identity Provider + +| Name | Value | Secret? | +|----------------------------------------------|--------------------------------|----------| +| IdentityProviderConfig__Issuer | ?? | No | +| IdentityProviderConfig__Audience | ?? | No | +| IdentityProviderConfig__NameClaimType | email | No | +| IdentityProviderConfig__RoleClaimType | roles | No | +| IdentityProviderConfig__ValidTypes | `[ "at+jwt", "JWT" ]` | No | diff --git a/docs/service_specs/census.md b/docs/service_specs/census.md new file mode 100644 index 000000000..0fc9d6d47 --- /dev/null +++ b/docs/service_specs/census.md @@ -0,0 +1,50 @@ +[← Back Home](../README.md) + +## Census Overview + +The Census service is primarily responsible for maintaining a tenants admit and discharge patient information needed to determine when a patient is ready for reporting. To accomplish this, the Census service has functionality in place to request an updated FHIR List of recently admitted patients. The frequency that the request is made is based on a Tenant configuration made in the Census service. + +- **Technology**: .NET Core +- **Image Name**: link-census +- **Port**: 8080 +- **Database**: MSSQL (previously Mongo) +- **Scale**: 0-3 + +## Environment Variables + +| Name | Value | Secret? | +|--------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka Connection + +| Name | Value | Secret? | +|-----------------------------------------|--------------------------|----------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | census-events | No | +| KafkaConnection__ClientId | census-events | No | + +### Tenant API Settings + +| Name | Value | Secret? | +|----------------------------------------------|------------------------|---------| +| TenantApiSettings__TenantServiceBaseEndpoint | `/api` | No | + +### Database Settings (MSSQL) + +| Name | Value | Secret? | +|---------------------------|----------------------|---------| +| MongoDB__ConnectionString | `` | Yes | +| MongoDb__DatabaseName | `` | No | +| MongoDb__CollectionName | `census` | No | + +## Consumed Events + +- **Event**: `PatientIDsAcquired` + +## Produced Events + +- **Event**: `PatientCensusScheduled` \ No newline at end of file diff --git a/docs/service_specs/data_acquisition.md b/docs/service_specs/data_acquisition.md new file mode 100644 index 000000000..b6e3fa3c5 --- /dev/null +++ b/docs/service_specs/data_acquisition.md @@ -0,0 +1,37 @@ +[← Back Home](../README.md) + +## Data Acquisition Overview + +The Data Acquisition service is responsible for connecting and querying a tenant's endpoint for FHIR resources that are needed to evaluate patients for a measure. 
For Epic installations, Link Cloud is utilizing the [Epic FHIR STU3 Patient List](https://fhir.epic.com/Specifications?api=879) resource to inform which patients are currently admitted in the facility. While this is the current solution to acquiring the patient census, there are other means of patient acquisition being investigated (ADT V2, Bulk FHIR) to provide universal support across multiple EHR vendors. + +- **Technology**: .NET Core +- **Image Name**: link-dataacquisition +- **Port**: 8080 +- **Database**: MSSQL (previously Mongo) + +## Environment Variables + +| Name | Value | Secret? | +|---------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka Connection + +| Name | Value | Secret? | +|------------------------------------------|---------------------------|----------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | data-acquisition-events | No | + +## Consumed Events + +- **PatientEvent** +- **PatientBulkAcquisitionScheduled** + +## Produced Events + +- **PatientIdsAcquired** +- **PatientAcquired** +- **NotificationRequested** \ No newline at end of file diff --git a/docs/service_specs/measure_eval.md b/docs/service_specs/measure_eval.md new file mode 100644 index 000000000..448701410 --- /dev/null +++ b/docs/service_specs/measure_eval.md @@ -0,0 +1,44 @@ +[← Back Home](../README.md) + +## Measure Eval Overview + +The Measure Eval service is a Java based application that is primarily responsible for evaluating bundles of acquired patient resources against the measures that Link Cloud tenants are configured to evaluate with. The service utilizes the [CQF framework](https://github.com/cqframework/cqf-ruler) to perform the measure evaluations. + +- **Technology**: .NET Core 8 +- **Image Name**: link-measureeval +- **Port**: 8080 +- **Database**: Mongo + +## Environment Variables + +| Name | Value | Secret? | +|---------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka Connection + +| Name | Value | Secret? | +|------------------------------------------|---------------------------|---------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | measure-events | No | + +### Measure Evaluation Config + +| Name | Value | Secret? | +|--------------------------------------------|-------------------------------------------------|---------| +| MeasureEvalConfig__TerminologyServiceUrl | `https://cqf-ruler.nhsnlink.org/fhir` | No | +| MeasureEvalConfig__EvaluationServiceUrl | `https://cqf-ruler.nhsnlink.org/fhir` | No | + +## Consumed Events + +- **PatientDataNormalized** + +## Produced Events + +- **MeasureEvaluated** +- **NotificationRequested** + +> **Note**: This service is being re-designed as a Java application to use CQFramework libraries directly rather than relying on a separate CQF-Ruler installation. 
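For illustration, the sketch below shows how an individual-patient evaluation can be requested from a FHIR server that exposes the standard R4 `Measure/$evaluate-measure` operation, such as the CQF-Ruler instance referenced by `MeasureEvalConfig__EvaluationServiceUrl`. The class, method, and parameter names are assumptions, and this is not necessarily how the Measure Eval service invokes evaluations internally.

```csharp
// Illustrative only: request an individual MeasureReport from a FHIR server that
// implements the standard R4 Measure/$evaluate-measure operation (e.g., CQF-Ruler).
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class EvaluateMeasureSketch
{
    public static async Task<string> EvaluateAsync(
        HttpClient http, string evaluationServiceUrl, string measureId,
        string patientId, string periodStart, string periodEnd)
    {
        // e.g. {evaluationServiceUrl} = https://cqf-ruler.nhsnlink.org/fhir
        //      {measureId}            = NHSNGlycemicControlHypoglycemicInitialPopulation
        var url = $"{evaluationServiceUrl}/Measure/{measureId}/$evaluate-measure" +
                  $"?periodStart={periodStart}&periodEnd={periodEnd}" +
                  $"&subject=Patient/{Uri.EscapeDataString(patientId)}";

        // The response body is a MeasureReport (JSON); parsing and persistence are up to the caller.
        return await http.GetStringAsync(url);
    }
}
```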
\ No newline at end of file diff --git a/docs/service_specs/normalization.md b/docs/service_specs/normalization.md new file mode 100644 index 000000000..9d81a2e6c --- /dev/null +++ b/docs/service_specs/normalization.md @@ -0,0 +1,43 @@ +[← Back Home](../README.md) + +## Normalization Overview + +FHIR resources queried from EHR endpoints can vary from location to location. There will be occasions where data for specific resources may need to be adjusted to ensure that Link Cloud properly evaluates a patient against dQM’s. The Normalization service is a component in Link Cloud to help make those adjustments in an automated way. The service operates in between the resource acquisition and evaluation steps to ensure that the tenant data is in a readied state for measure evaluation. + +- **Technology**: .NET Core +- **Image Name**: link-normalization +- **Port**: 8080 +- **Database**: MSSQL (previously Mongo) +- **Scale**: 0-3 + +## Environment Variables + +| Name | Value | Secret? | +|---------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka Connection + +| Name | Value | Secret? | +|------------------------------------------|---------------------------|---------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | normalization-events | No | +| KafkaConnection__ClientId | normalization-events | No | + +### Database Settings (MSSQL) + +| Name | Value | Secret? | +|---------------------------------------|-------------------------------|----------| +| ConnectionStrings__DatabaseConnection | `` | Yes | + +## Consumed Events + +- **PatientDataAcquired** + +## Produced Events + +- **PatientNormalized** +- **NotificationRequested** diff --git a/docs/service_specs/notification.md b/docs/service_specs/notification.md new file mode 100644 index 000000000..f46107652 --- /dev/null +++ b/docs/service_specs/notification.md @@ -0,0 +1,60 @@ +[← Back Home](../README.md) + +## Notification Overview + +The Notification service is responsible for emailing configured users when a notifiable event occurs when the Link Cloud services attempt to perform their work. + +- **Technology**: .NET Core +- **Image Name**: link-notification +- **Port**: 8080 +- **Database**: MSSQL +- **Scale**: 0-3 + +## Environment Variables + +| Name | Value | Secret? | +|----------------------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Service Endpoints + +| Name | Value | Secret? | +|-------------------------------------------------------------|-------------------------------|---------| +| Link__Notification__ServiceRegistry__TenantServiceApiUrl | `` | No | + +### Kafka + +| Name | Value | Secret? | +|-------------------------------------------------------------|-------------------------------|---------| +| Link__Notification__KafkaConnection__BootstrapServers__0 | `` | No | +| Link__Notification__KafkaConnection__GroupId | notification-events | No | +| Link__Notification__KafkaConnection__ClientId | notification-events | No | + +### SMTP + +| Name | Value | Secret? 
| +|-------------------------------------------------------------|-------------------------------|---------| +| Link__Notification__SmtpConnection__Host | | No | +| Link__Notification__SmtpConnection__Port | | No | +| Link__Notification__SmtpConnection__EmailFrom | | No | +| Link__Notification__SmtpConnection__UseBasicAuth | false or true | No | +| Link__Notification__SmtpConnection__Username | | No | +| Link__Notification__SmtpConnection__Password | | Yes | +| Link__Notification__SmtpConnection__UseOAuth2 | false or true | No | + +### Additional Settings + +| Name | Value | Secret? | +|-------------------------------------------------------------|-------------------------------|---------| +| Link__Notification__EnableSwagger | true (DEV and TEST) | No | + +## Consumed Events + +- **NotificationRequested** + +## Produced Events + +- **AuditableEventOccurred** diff --git a/docs/service_specs/query_dispatch.md b/docs/service_specs/query_dispatch.md new file mode 100644 index 000000000..02ae0a70d --- /dev/null +++ b/docs/service_specs/query_dispatch.md @@ -0,0 +1,51 @@ +[← Back Home](../README.md) + +## Query Dispatch Overview + +The Query Dispatch service is primarily responsible for applying a lag period prior to making FHIR resource query requests against a facility endpoint. The current implementation of the Query Dispatch service handles how long Link Cloud should wait before querying for a patient’s FHIR resources after being discharged. To ensure that the encounter related data for the patient has been settled (Medications have been closed, Labs have had their results finalized, etc), tenants are able to customize how long they would like the lag from discharge to querying to be. + +- **Technology**: .NET Core +- **Image Name**: link-querydispatch +- **Port**: 8080 +- **Database**: MSSQL (previously Mongo) +- **Scale**: 0-3 + +## Environment Variables + +| Name | Value | Secret? | +|---------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka Connection + +| Name | Value | Secret? | +|------------------------------------------|---------------------------|---------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | query-dispatch-events | No | +| KafkaConnection__ClientId | query-dispatch-events | No | + +### Database Settings (MSSQL) + +| Name | Value | Secret? | +|---------------------------|----------------------|----------| +| MongoDB__ConnectionString | `` | Yes | +| MongoDB__DatabaseName | `` | No | +| MongoDB__CollectionName | `` | No | + +### Additional Settings + +| Name | Value | Secret? | +|---------------|---------------------------------|---------| +| EnableSwagger | true (DEV and TEST) | No | + +## Consumed Events + +- **ReportScheduled** +- **PatientEvent** + +## Produced Events + +- **DataAcquisitionRequested** diff --git a/docs/service_specs/report.md b/docs/service_specs/report.md new file mode 100644 index 000000000..428a9ba2b --- /dev/null +++ b/docs/service_specs/report.md @@ -0,0 +1,54 @@ +[← Back Home](../README.md) + +## Report Overview + +The Report service is responsible for persisting the Measure Reports and FHIR resources that the Measure Eval service generates after evaluating a patient against a measure. 
When a tenant's reporting period end date has been met, the Report Service performs various workflows to determine if all of the patient MeasureReports are accounted for that period prior to initiating the submission process. + +- **Technology**: .NET Core +- **Image Name**: link-report +- **Port**: 8080 +- **Database**: MSSQL (previously Mongo) + +## Environment Variables + +| Name | Value | Secret? | +|---------------------------------------------|-------------------------------|---------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka + +| Name | Value | Secret? | +|-----------------------------------------------|--------------------------------|---------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | report-events | No | +| KafkaConnection__ClientId | report-events | No | + +### Database + +| Name | Value | Secret? | +|-----------------------------------------------|--------------------------------|---------| +| MongoDB__ConnectionString | `` | Yes | +| MongoDB__DatabaseName | `` | No | +| MongoDB__CollectionName | report | No | + +### Service Endpoints + +| Name | Value | Secret? | +|-----------------------------------------------|--------------------------------|---------| +| TenantApiSettings__TenantServiceBaseEndpoint | `/api` | No | + +## Consumed Events + +- **ReportScheduled** +- **MeasureEvaluated** +- **PatientsToQuery** +- **ReportSubmitted** + +## Produced Events + +- **SubmitReport** +- **DataAcquisitionRequested** +- **NotificationRequested** diff --git a/docs/service_specs/submission.md b/docs/service_specs/submission.md new file mode 100644 index 000000000..4cd19e81f --- /dev/null +++ b/docs/service_specs/submission.md @@ -0,0 +1,66 @@ +[← Back Home](../README.md) + +## Submission Overview + +The Submission service is responsible for packaging a tenant's reporting content and submitting them to a configured destination. Currently, the service only writes the submission content to its local file store. The submission package for a reporting period includes the following files: + +| File | Description | Multiple Files? | +| ---- | ---- | ---- | +| Aggregate | A [MeasureReport](https://hl7.org/fhir/R4/measurereport.html) resource that contains references to each patient evaluation for a specific measure | Yes, one per measure | +| Patient List | A [List](https://hl7.org/fhir/R4/list.html) resource of all patients that were admitted into the facility during the reporting period | No | +| Device | A [Device](https://hl7.org/fhir/R4/device.html) resource that details the version of Link Cloud that was used | No | +| Organization | An [Organization](https://hl7.org/fhir/R4/organization.html) resource for the submitting facility | No | +| Other Resources | A [Bundle](https://hl7.org/fhir/R4/bundle.html) resource that contains all of the shared resources (Location, Medication, etc) that are referenced in the patient Measure Reports | No | +| Patient | A [Bundle](https://hl7.org/fhir/R4/bundle.html) resource that contains the MeasureReports and related resources for a patient | Yes, one per evaluated patient | + +An example of the submission package can be found at `\link-cloud\Submission Example`. 
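Because the package is written to the service's local file store as plain FHIR JSON, it can be inspected with ordinary tooling. The sketch below is illustrative only (`SummarizeResourceTypes` and the folder argument are assumptions); it tallies the resource type of each JSON file in a downloaded submission folder, which should correspond to the file types listed in the table above.

```csharp
// Illustrative only: walk a locally stored submission folder and tally the
// FHIR resourceType of each JSON file (MeasureReport, List, Device, Organization, Bundle, ...).
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public static class SubmissionPackageSketch
{
    public static IDictionary<string, int> SummarizeResourceTypes(string submissionFolder)
    {
        var counts = new Dictionary<string, int>();
        foreach (var path in Directory.EnumerateFiles(submissionFolder, "*.json"))
        {
            using var doc = JsonDocument.Parse(File.ReadAllText(path));
            // Most package files are FHIR resources; anything else is reported as-is.
            var resourceType = doc.RootElement.TryGetProperty("resourceType", out var rt)
                ? rt.GetString() ?? "unknown"
                : "(not a FHIR resource)";
            counts[resourceType] = counts.TryGetValue(resourceType, out var n) ? n + 1 : 1;
            Console.WriteLine($"{Path.GetFileName(path)}: {resourceType}");
        }
        return counts;
    }
}
```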
+ +- **Technology**: .NET Core +- **Image Name**: link-submission +- **Port**: 8080 +- **Database**: MSSQL (previously Mongo) +- **Volumes**: Azure Storage Account File Share mounted at `/Link/Submission` + +## Environment Variables + +| Name | Value | Secret? | +|----------------------------------------------|------------------------------------------------------|----------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka + +| Name | Value | Secret? | +|----------------------------------------------|------------------------------------------------------|----------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | submission-events | No | +| KafkaConnection__ClientId | submission-events | No | + +### Database +| Name | Value | Secret? | +|----------------------------------------------|------------------------------------------------------|----------| +| MongoDB__ConnectionString | `` | Yes | +| MongoDb__DatabaseName | `` | No | + +### Service Endpoints + +| Name | Value | Secret? | +|----------------------------------------------|------------------------------------------------------|----------| +| SubmissionServiceConfig__ReportServiceUrl | `/api/Report/GetSubmissionBundle` | No | + +### Additional Settings + +| Name | Value | Secret? | +|----------------------------------------------|------------------------------------------------------|----------| +| FileSystemConfig__FilePath | `/data/Submission` | No | +| EnableSwagger | true (DEV and TEST) | No | + +## Consumed Events + +- **SubmitReport** + +## Produced Events + +- **ReportSubmitted** diff --git a/docs/service_specs/tenant.md b/docs/service_specs/tenant.md new file mode 100644 index 000000000..5431dbf87 --- /dev/null +++ b/docs/service_specs/tenant.md @@ -0,0 +1,55 @@ +[← Back Home](../README.md) + +## Tenant Overview + +The Tenant service is the entry point for configuring a tenant into Link Cloud. The service is responsible for maintaining and generating events for the scheduled measure reporting periods that the tenant is configured for. These events contain the initial information needed for Link Cloud to query resources and perform measure evaluations based on a specific reporting period. + +- **Technology**: .NET Core +- **Image Name**: link-tenant +- **Port**: 8080 +- **Database**: MSSQL (previously Mongo) +- **Scale**: 0-3 + +## Environment Variables + +| Name | Value | Secret? | +|----------------------------------------------|--------------------------------|----------| +| Link__Audit__ExternalConfigurationSource | AzureAppConfiguration | No | +| ConnectionStrings__AzureAppConfiguration | `` | Yes | + +## App Settings + +### Kafka + +| Name | Value | Secret? | +|----------------------------------------------|--------------------------------|----------| +| KafkaConnection__BootstrapServers__0 | `` | No | +| KafkaConnection__GroupId | tenant-events | No | + +### Database + +| Name | Value | Secret? | +|----------------------------------------------|--------------------------------|----------| +| MongoDB__ConnectionString | `` | Yes | +| MongoDB__DatabaseName | `` | No | +| MongoDB__CollectionName | tenant | No | + +### Service Endpoints + +| Name | Value | Secret? | +|----------------------------------------------|--------------------------------|----------| +| MeasureServiceRegistry__MeasureServiceApiUrl | `` | No | + +### Additional Settings + +| Name | Value | Secret? 
| +|----------------------------------------------|--------------------------------|----------| +| EnableSwagger | true (DEV and TEST) | No | + +## Consumed Events + +- **NONE** + +## Produced Events + +- **ReportScheduled**
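To make the produced event concrete, below is a hedged sketch of publishing a `ReportScheduled` event with the Confluent.Kafka .NET client. The topic name, message key, and payload fields (`FacilityId`, `ReportTypes`, `StartDate`, `EndDate`) are illustrative assumptions; the actual schema is owned by the Tenant service and consumed by the Query Dispatch and Report services, which schedule their end-of-period work from the period's end date.

```csharp
// Illustrative only: how a ReportScheduled event could be produced with Confluent.Kafka.
// The payload shape and key choice below are assumptions, not Link's actual event schema.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

public static class ReportScheduledSketch
{
    public static async Task ProduceAsync(string bootstrapServers, string facilityId)
    {
        var config = new ProducerConfig { BootstrapServers = bootstrapServers };
        using var producer = new ProducerBuilder<string, string>(config).Build();

        // Hypothetical payload: the reporting period information that downstream
        // services would persist for this facility.
        var payload = JsonSerializer.Serialize(new
        {
            FacilityId = facilityId,
            ReportTypes = new[] { "NHSNdQMAcuteCareHospitalInitialPopulation" },
            StartDate = "2024-04-01T00:00:00Z",
            EndDate = "2024-04-30T23:59:59Z"
        });

        await producer.ProduceAsync("ReportScheduled",
            new Message<string, string> { Key = facilityId, Value = payload });
        producer.Flush(TimeSpan.FromSeconds(10));
    }
}
```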