diff --git a/README.md b/README.md
index b097d77..ebd5e9c 100644
--- a/README.md
+++ b/README.md
@@ -1,59 +1,196 @@
 # Medical Event Data Standard
-The Medical Event Data Standard (MEDS) is a draft data schema for storing streams of medical events, often sourced from either Electronic Health Records or claims records.
+The Medical Event Data Standard (MEDS) is a data schema for storing streams of medical events, often
+sourced from either Electronic Health Records or claims records. Before we define the various schemas that make
+up MEDS, we will define some key terminology that we use in this standard.
-The core of the standard is that we define a ``patient`` data structure that contains a series of time stamped events, that in turn contain measurements of various sorts.
+## Terminology
+ 1. A _patient_ in a MEDS dataset is the primary entity being described by the sequences of care observations
+    in the underlying dataset. In most cases, _patients_ will, naturally, be individuals, and the sequences
+    of care observations will cover all known observations about those individuals in a source health
+    dataset. However, in some cases, data may be organized so that we cannot describe all the data for an
+    individual reliably in a dataset, but instead can only describe subsequences of an individual's data,
+    such as in datasets that only link an individual's data observations together if they are within the same
+    hospital admission, regardless of how many admissions that individual has in the dataset (such as the
+    [eICU](https://eicu-crd.mit.edu/) dataset). In these cases, a _patient_ in the MEDS dataset may refer to
+    a hospital admission rather than an individual.
+ 2. A _code_ is the categorical descriptor of what is being observed in any given observation of a patient.
+    In particular, in almost all structured, longitudinal datasets, a measurement can be described as
+    consisting of a tuple containing a `patient_id` (who this measurement is about); a `time` (when this
+    measurement happened); some categorical qualifier describing what was measured, which we will call a
+    `code`; a value of a given type, such as a `numerical_value`, a `text_value`, or a `categorical_value`;
+    and possibly one or more additional measurement properties that describe the measurement in a
+    non-standardized manner.
-The Python type signature for the schema is as follows:
+## Core MEDS Data Organization
+
+MEDS consists of five main data components/schemas:
+ 1. A _data schema_. This schema describes the underlying medical data, organized as sequences of patient
+    observations, in the dataset.
+ 2. A _patient subsequence label schema_. This schema describes labels that may be predicted about a patient
+    at a given time in the patient record.
+ 3. A _code metadata schema_. This schema contains metadata describing the codes used to categorize the
+    observed measurements in the dataset.
+ 4. A _dataset metadata schema_. This schema contains metadata about the MEDS dataset itself, such as when it
+    was produced, using what version of what code, etc.
+ 5. A _patient split schema_. This schema contains metadata about how patients in the MEDS dataset are
+    assigned to different subpopulations, most commonly used to dictate ML splits.
+
+### Organization on Disk
+Given a MEDS dataset stored in the `$MEDS_ROOT` directory, data of the various schemas outlined above can be
+found in the following subfolders:
+ - `$MEDS_ROOT/data/`: This directory will contain data in the _data schema_, organized as a
+   series of possibly nested sharded dataframes stored in `parquet` files. In particular, the file glob
+   `glob("$MEDS_ROOT/data/**/*.parquet")` will capture all sharded data files of the raw MEDS data, all
+   organized into _data schema_ files, sharded by patient and sorted, for each patient, by
+   time.
+ - `$MEDS_ROOT/metadata/codes.parquet`: This file contains per-code metadata in the _code metadata schema_
+   about the MEDS dataset. As this file describes all codes observed in the full MEDS dataset, it is _not_
+   sharded. Note that some pre-processing operations may, at times, produce sharded code metadata files, but
+   these will always appear in subdirectories of `$MEDS_ROOT/metadata/` rather than at the top level, and
+   should generally not be used for overall metadata operations.
+ - `$MEDS_ROOT/metadata/dataset.json`: This file contains metadata in the _dataset metadata schema_ about
+   the dataset and its production process.
+ - `$MEDS_ROOT/metadata/patient_splits.parquet`: This file contains information in the _patient split
+   schema_ about what splits different patients are in.
+
+Task label dataframes are stored in the _label schema_, in a file path that depends on both a
+`$TASK_ROOT` directory where task label dataframes are stored and a `$TASK_NAME` parameter that separates
+different tasks from one another. In particular, the file glob `glob("$TASK_ROOT/$TASK_NAME/**/*.parquet")` will
+retrieve a sharded set of dataframes in the _label schema_ where the sharding matches up precisely with
+the sharding used in the raw `$MEDS_ROOT/data/**/*.parquet` files (e.g., the file
+`$TASK_ROOT/$TASK_NAME/$SHARD_NAME.parquet` will cover the labels for the same set of patients as are
+contained in the raw data file at `$MEDS_ROOT/data/$SHARD_NAME.parquet`). Note that (1) `$TASK_ROOT` may be a subdir
+of `$MEDS_ROOT` (e.g., often `$TASK_ROOT` will be set to `$MEDS_ROOT/tasks`), (2) `$TASK_NAME` may have `/`s
+in it, thereby rendering the task label directory a deep, nested subdir of `$TASK_ROOT`, and (3) in some
+cases, there may be no task labels for a shard of the raw data, if no patient in that shard qualifies for that
+task, in which case it may be true that either `$TASK_ROOT/$TASK_NAME/$SHARD_NAME.parquet` is empty or that it
+does not exist.
+
+### Schemas
+
+#### The data schema.
+MEDS data must satisfy two important properties:
+ 1. Data about a single patient cannot be split across parquet files. If a patient is in a dataset it must be
+    in one and only one parquet file.
+ 2. Data about a single patient must be contiguous within a particular parquet file and sorted by time.
+
+The data schema has four mandatory fields:
+ 1. `patient_id`: The ID of the patient this event is about.
+ 2. `time`: The time of the event. This field is nullable for static events.
+ 3. `code`: The code of the event.
+ 4. `numeric_value`: The numeric value of the event. This field is nullable for non-numeric events.
+
+In addition, it can contain any number of custom properties to further enrich observations. The Python
+function below generates a pyarrow schema for a given set of custom properties; it, like the other snippets
+in this README, assumes `import pyarrow as pa`.
 ```python
+def data_schema(custom_properties=[]):
+    return pa.schema(
+        [
+            ("patient_id", pa.int64()),
+            ("time", pa.timestamp("us")), # Static events will have a null timestamp
+            ("code", pa.string()),
+            ("numeric_value", pa.float32()),
+        ] + custom_properties
+    )
+```
+
+#### The label schema.
+Labels describe target values that may be predicted about a patient as of a given prediction time.
+Models, when predicting a label, are allowed to use all data about a patient up to and including the
+prediction time. Exclusive prediction times are not currently supported, but if you have a use case for them
+please add a GitHub issue.
+
+```python
+label_schema = pa.schema(
+    [
+        ("patient_id", pa.int64()),
+        ("prediction_time", pa.timestamp("us")),
+        ("boolean_value", pa.bool_()),
+        ("integer_value", pa.int64()),
+        ("float_value", pa.float64()),
+        ("categorical_value", pa.string()),
+    ]
+)
-Patient = TypedDict('Patient', {
-    'patient_id': int,
-    'events': List[Event],
-})
-
-Event = TypedDict('Event',{
-    'time': NotRequired[datetime.datetime], # Static events will have a null timestamp here
-    'code': str,
-    'text_value': NotRequired[str],
-    'numeric_value': NotRequired[float],
-    'datetime_value': NotRequired[datetime.datetime],
-    'metadata': NotRequired[Mapping[str, Any]],
-})
+Label = TypedDict("Label", {
+    "patient_id": int,
+    "prediction_time": datetime.datetime,
+    "boolean_value": Optional[bool],
+    "integer_value": Optional[int],
+    "float_value": Optional[float],
+    "categorical_value": Optional[str],
+}, total=False)
 ```
-We also provide ETLs to convert common data formats to this schema: https://github.com/Medical-Event-Data-Standard/meds_etl
+#### The patient split schema.
-An example patient following this schema
+Three sentinel split names are defined for convenience and shared processing:
+ 1. A training split, named `train`, used for ML model training.
+ 2. A tuning split, named `tuning`, used for hyperparameter tuning. This is sometimes also called a
+    "validation" split or a "dev" split. In many cases, standardizing on a tuning split is not necessary and
+    models should feel free to merge this split with the training split if desired.
+ 3. A held-out split, named `held_out`, used for final model evaluation. In many cases, this is also called a
+    "test" split. When performing benchmarking, this split should not be used at all for model selection,
+    training, or for any purposes up to final validation.
-```python
+Additional split names can be used by the user as desired.
-patient_data = {
-    "patient_id": 123,
-    "events": [
-        # Store static events like gender with a null timestamp
-        {
-            "time": None,
-            "code": "Gender/F",
-        },
+```python
+train_split = "train"
+tuning_split = "tuning"
+held_out_split = "held_out"
-        # It's recommended to record birth using the birth_code
-        {
-            "time": datetime.datetime(1995, 8, 20),
-            "code": meds.birth_code,
+patient_split_schema = pa.schema(
+    [
+        ("patient_id", pa.int64()),
+        ("split", pa.string()),
+    ]
+)
+```
+
+#### The dataset metadata schema.
+
+```python
+dataset_metadata_schema = {
+    "type": "object",
+    "properties": {
+        "dataset_name": {"type": "string"},
+        "dataset_version": {"type": "string"},
+        "etl_name": {"type": "string"},
+        "etl_version": {"type": "string"},
+        "meds_version": {"type": "string"},
     },
+}
-
-    # Arbitrary events with sophisticated data can also be added
+# Python type for the above schema
+
+DatasetMetadata = TypedDict(
+    "DatasetMetadata",
     {
-        "time": datetime.datetime(2020, 1, 1, 12, 0, 0),
-        "code": "some_code",
-        "text_value": "Example",
-        "numeric_value": 10.0,
-        "datetime_value": datetime.datetime(2020, 1, 1, 12, 0, 0),
-        "properties": None
+        "dataset_name": NotRequired[str],
+        "dataset_version": NotRequired[str],
+        "etl_name": NotRequired[str],
+        "etl_version": NotRequired[str],
+        "meds_version": NotRequired[str],
     },
-    ]
-}
+    total=False,
+)
+```
+
+#### The code metadata schema.
+
+```python
+def code_metadata_schema(custom_per_code_properties=[]):
+    return pa.schema(
+        [
+            ("code", pa.string()),
+            ("description", pa.string()),
+            ("parent_codes", pa.list_(pa.string())),
+        ] + custom_per_code_properties
+    )
+
+# Python type for the above schema
+CodeMetadata = TypedDict("CodeMetadata", {"code": str, "description": str, "parent_codes": List[str]}, total=False)
 ```
diff --git a/src/meds/__init__.py b/src/meds/__init__.py
index 2c76ef6..3d8d36c 100644
--- a/src/meds/__init__.py
+++ b/src/meds/__init__.py
@@ -1,26 +1,26 @@
 from meds._version import __version__ # noqa
-from .schema import (patient_schema, Event, Patient, label, Label,
-                     code_metadata_entry, code_metadata, dataset_metadata,
-                     CodeMetadataEntry, CodeMetadata, DatasetMetadata, birth_code,
-                     death_code)
+from .schema import (
+    data_schema, label_schema, Label, train_split, tuning_split, held_out_split, patient_split_schema,
+    code_metadata_schema, dataset_metadata_schema, CodeMetadata, DatasetMetadata, birth_code, death_code
+)
 
 # List all objects that we want to export
 _exported_objects = {
-    'patient_schema': patient_schema,
-    'Event': Event,
-    'Patient': Patient,
-    'label': label,
+    'data_schema': data_schema,
+    'label_schema': label_schema,
     'Label': Label,
-    'code_metadata_entry': code_metadata_entry,
-    'code_metadata': code_metadata,
-    'dataset_metadata': dataset_metadata,
-    'CodeMetadataEntry': CodeMetadataEntry,
+    'train_split': train_split,
+    'tuning_split': tuning_split,
+    'held_out_split': held_out_split,
+    'patient_split_schema': patient_split_schema,
+    'code_metadata_schema': code_metadata_schema,
+    'dataset_metadata_schema': dataset_metadata_schema,
     'CodeMetadata': CodeMetadata,
     'DatasetMetadata': DatasetMetadata,
     'birth_code': birth_code,
-    'death_code': death_code
+    'death_code': death_code,
 }
 
 __all__ = list(_exported_objects.keys())
diff --git a/src/meds/schema.py b/src/meds/schema.py
index edbc9e1..5b263f4 100644
--- a/src/meds/schema.py
+++ b/src/meds/schema.py
@@ -1,73 +1,61 @@
+"""The core schemas for the MEDS format.
+
+Please see the README for more information, including expected file organization on disk, more details on what
+each schema should capture, etc.
+"""
 import datetime
 from typing import Any, List, Mapping, Optional
 
 import pyarrow as pa
 from typing_extensions import NotRequired, TypedDict
-# Medical Event Data Standard consists of three main components:
-# 1. A patient data schema
-# 2. A label schema
-# 3. A dataset metadata schema.
-#
-# Patient data and labels are specified using pyarrow. Dataset metadata is specified using JSON.
-
-# We also provide TypedDict Python type signatures for these schemas.
 ############################################################
-# The patient data schema.
+# The data schema.
+#
+# MEDS data must satisfy two important properties:
+#
+# 1. Data about a single patient cannot be split across parquet files. If a patient is in a dataset it must be in one and only one parquet file.
+# 2. Data about a single patient must be contiguous within a particular parquet file and sorted by time.

+# Both of these restrictions allow for rolling-window processing over patient event streams (see https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.rolling.html),
+# which vastly simplifies many data analysis pipelines.
# We define some codes for particularly important events -birth_code = "SNOMED/184099003" -death_code = "SNOMED/419620001" +birth_code = "MEDS_BIRTH" +death_code = "MEDS_DEATH" -def patient_schema(per_event_properties_schema=pa.null()): - # Return a patient schema with a particular per event metadata subschema - event = pa.struct( +def data_schema(custom_properties=[]): + return pa.schema( [ + ("patient_id", pa.int64()), ("time", pa.timestamp("us")), # Static events will have a null timestamp ("code", pa.string()), - ("text_value", pa.string()), ("numeric_value", pa.float32()), - ("datetime_value", pa.timestamp("us")), - ("properties", per_event_properties_schema), - ] - ) - - patient = pa.schema( - [ - ("patient_id", pa.int64()), - ("events", pa.list_(event)), # Require ordered by time, nulls must be first - ] + ] + custom_properties ) - return patient - - -# Python types for the above schema - -Event = TypedDict( - "Event", - { - "time": NotRequired[datetime.datetime], - "code": str, - "text_value": NotRequired[str], - "numeric_value": NotRequired[float], - "datetime_value": NotRequired[datetime.datetime], - "properties": NotRequired[Any], - }, -) - -Patient = TypedDict("Patient", {"patient_id": int, "events": List[Event]}) +# No python type is provided because Python tools for processing MEDS data will often provide their own types. +# See https://github.com/EthanSteinberg/meds_reader/blob/0.0.6/src/meds_reader/__init__.pyi#L55 for example. ############################################################ -# The label schema. +# The label schema. Models, when predicting this label, are allowed to use all data about a patient up to and +# including the prediction time. Exclusive prediction times are not currently supported, but if you have a use +# case for them please add a GitHub issue. -label = pa.schema( +label_schema = pa.schema( [ ("patient_id", pa.int64()), - ("prediction_time", pa.timestamp("us")), + # The patient who is being labeled. 
+ + ("prediction_time", pa.timestamp("us")), + # The time the prediction is made. + # Machine learning models are allowed to use features that have timestamps less than or equal + # to this timestamp. + + # Possible values for the label. ("boolean_value", pa.bool_()), ("integer_value", pa.int64()), ("float_value", pa.float64()), @@ -84,43 +72,43 @@ def patient_schema(per_event_properties_schema=pa.null()): "integer_value" : Optional[int], "float_value" : Optional[float], "categorical_value" : Optional[str], -}) +}, total=False) + + +############################################################ + +# The patient split schema. + +train_split = "train" # For ML training. +tuning_split = "tuning" # For ML hyperparameter tuning. Also often called "validation" or "dev". +held_out_split = "held_out" # For final ML evaluation. Also often called "test". + +patient_split_schema = pa.schema( + [ + ("patient_id", pa.int64()), + ("split", pa.string()), + ] +) ############################################################ # The dataset metadata schema. # This is a JSON schema. -# This data should be stored in metadata.json within the dataset folder. 
- -code_metadata_entry = { - "type": "object", - "properties": { - "description": {"type": "string"}, - "parent_codes": {"type": "array", "items": {"type": "string"}}, - }, -} -code_metadata = { - "type": "object", - "additionalProperties": code_metadata_entry, -} -dataset_metadata = { +dataset_metadata_schema = { "type": "object", "properties": { "dataset_name": {"type": "string"}, "dataset_version": {"type": "string"}, "etl_name": {"type": "string"}, "etl_version": {"type": "string"}, - "code_metadata": code_metadata, "meds_version": {"type": "string"}, }, } -# Python types for the above schema +# Python type for the above schema -CodeMetadataEntry = TypedDict("CodeMetadataEntry", {"description": str, "parent_codes": List[str]}) -CodeMetadata = Mapping[str, CodeMetadataEntry] DatasetMetadata = TypedDict( "DatasetMetadata", { @@ -128,7 +116,25 @@ def patient_schema(per_event_properties_schema=pa.null()): "dataset_version": NotRequired[str], "etl_name": NotRequired[str], "etl_version": NotRequired[str], - "code_metadata": NotRequired[CodeMetadata], "meds_version": NotRequired[str], }, + total=False, ) + +############################################################ + +# The code metadata schema. +# This is a parquet schema. 
+ +def code_metadata_schema(custom_per_code_properties=[]): + return pa.schema( + [ + ("code", pa.string()), + ("description", pa.string()), + ("parent_codes", pa.list_(pa.string())), + ] + custom_per_code_properties + ) + +# Python type for the above schema + +CodeMetadata = TypedDict("CodeMetadata", {"code": str, "description": str, "parent_codes": List[str]}, total=False) diff --git a/tests/test_schema.py b/tests/test_schema.py index ccdf31b..b945909 100644 --- a/tests/test_schema.py +++ b/tests/test_schema.py @@ -4,30 +4,63 @@ import pyarrow as pa import pytest -from meds import patient_schema, label, dataset_metadata +from meds import ( + data_schema, label_schema, dataset_metadata_schema, patient_split_schema, code_metadata_schema, + train_split, tuning_split, held_out_split +) - -def test_patient_schema(): +def test_data_schema(): """ - Test that mock patient data follows the patient_schema schema. + Test that mock data follows the data schema. """ # Each element in the list is a row in the table - patient_data = [ + raw_data = [ { "patient_id": 123, - "events": [{ # Nested list for events - "time": datetime.datetime(2020, 1, 1, 12, 0, 0), - "code": "some_code", - "text_value": "Example", - "numeric_value": 10.0, - "datetime_value": datetime.datetime(2020, 1, 1, 12, 0, 0), - "properties": None - }] + "time": datetime.datetime(2020, 1, 1, 12, 0, 0), + "code": "some_code", + "text_value": "Example", + "numeric_value": 10.0, } ] - patient_table = pa.Table.from_pylist(patient_data, schema=patient_schema()) - assert patient_table.schema.equals(patient_schema()), "Patient schema does not match" + schema = data_schema([("text_value", pa.string())]) + + table = pa.Table.from_pylist(raw_data, schema=schema) + assert table.schema.equals(schema), "Patient schema does not match" + +def test_code_metadata_schema(): + """ + Test that mock code metadata follows the schema. 
+ """ + # Each element in the list is a row in the table + code_metadata = [ + { + "code": "some_code", + "description": "foo", + "parent_code": ["parent_code"], + } + ] + + schema = code_metadata_schema() + + table = pa.Table.from_pylist(code_metadata, schema=schema) + assert table.schema.equals(schema), "Code metadata schema does not match" + +def test_patient_split_schema(): + """ + Test that mock data follows the data schema. + """ + # Each element in the list is a row in the table + patient_split_data = [ + {"patient_id": 123, "split": train_split}, + {"patient_id": 123, "split": tuning_split}, + {"patient_id": 123, "split": held_out_split}, + {"patient_id": 123, "split": "special"}, + ] + + table = pa.Table.from_pylist(patient_split_data, schema=patient_split_schema) + assert table.schema.equals(patient_split_schema), "Patient split schema does not match" def test_label_schema(): """ @@ -41,8 +74,8 @@ def test_label_schema(): "boolean_value": True } ] - label_table = pa.Table.from_pylist(label_data, schema=label) - assert label_table.schema.equals(label), "Label schema does not match" + label_table = pa.Table.from_pylist(label_data, schema=label_schema) + assert label_table.schema.equals(label_schema), "Label schema does not match" label_data = [ { @@ -51,8 +84,8 @@ def test_label_schema(): "integer_value": 4 } ] - label_table = pa.Table.from_pylist(label_data, schema=label) - assert label_table.schema.equals(label), "Label schema does not match" + label_table = pa.Table.from_pylist(label_data, schema=label_schema) + assert label_table.schema.equals(label_schema), "Label schema does not match" label_data = [ { @@ -61,8 +94,8 @@ def test_label_schema(): "float_value": 0.4 } ] - label_table = pa.Table.from_pylist(label_data, schema=label) - assert label_table.schema.equals(label), "Label schema does not match" + label_table = pa.Table.from_pylist(label_data, schema=label_schema) + assert label_table.schema.equals(label_schema), "Label schema does not match" 
label_data = [ { @@ -71,8 +104,8 @@ def test_label_schema(): "categorical_value": "text" } ] - label_table = pa.Table.from_pylist(label_data, schema=label) - assert label_table.schema.equals(label), "Label schema does not match" + label_table = pa.Table.from_pylist(label_data, schema=label_schema) + assert label_table.schema.equals(label_schema), "Label schema does not match" def test_dataset_metadata_schema(): """ @@ -83,13 +116,7 @@ def test_dataset_metadata_schema(): "dataset_version": "1.0", "etl_name": "Test ETL", "etl_version": "1.0", - "code_metadata": { - "test_code": { - "description": "A test code", - "standard_ontology_codes": ["12345"], - } - }, } - jsonschema.validate(instance=metadata, schema=dataset_metadata) + jsonschema.validate(instance=metadata, schema=dataset_metadata_schema) assert True, "Dataset metadata schema validation failed"