Core service for serverless ML training/serving
- Upon invocation, validate input config
{
"project_id": String ("UUID4, ML project ID"),
"run_id": String ("UUID4, ML experiment ID"),
"code_hash": String ("SHA1"),
"pipeline_config": {
"data": {
"location": {
"source": "DATA/SAMPLE/LOCATION",
},
"prep_config": Object,
},
"model": {
"hyperparameters": {
"param1": Number,
"param2": String,
"param3": Number,
},
"version": "MODEL/VERSION/IN/PACKAGE",
},
},
}
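The validation step above can be sketched as a simple schema check; the `validate_train_config` helper and its error messages are illustrative assumptions, not the service's actual API:

```python
import uuid

# Required top-level fields of the training config, per the spec above.
REQUIRED_TOP_LEVEL = ("project_id", "run_id", "code_hash", "pipeline_config")

def validate_train_config(config: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    for key in REQUIRED_TOP_LEVEL:
        if key not in config:
            errors.append(f"missing field: {key}")
    # project_id and run_id must be UUID4 strings.
    for key in ("project_id", "run_id"):
        try:
            uuid.UUID(config.get(key, ""), version=4)
        except (ValueError, TypeError):
            errors.append(f"{key} must be a UUID4 string")
    # pipeline_config must carry both the data and model sections.
    pipeline = config.get("pipeline_config", {})
    if "data" not in pipeline or "model" not in pipeline:
        errors.append("pipeline_config must contain 'data' and 'model'")
    return errors
```

A failed validation would short-circuit the invocation before any data is read.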
- Instantiate model class
- Read data sample into memory
- Perform model training
- Save model
- Push a notification to the topic
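The training steps above can be sketched as a single handler. `ModelClass`, `load_sample`, `save_artifact`, and `publish` are hypothetical stand-ins (stubbed here) for the service's real model package, storage client, and messaging client:

```python
import json

class ModelClass:
    """Stand-in for the model class resolved from the package version."""
    def __init__(self, version, **hyperparameters):
        self.version = version
        self.hyperparameters = hyperparameters
        self.trained = False

    def fit(self, sample):
        self.trained = True  # stub: real training happens here

def load_sample(source, prep_config):
    return [{"feature": 1.0}]  # stub: would read the sample from `source`

def save_artifact(model, run_id):
    pass  # stub: would persist the trained model under `run_id`

def publish(topic, message):
    pass  # stub: would push the notification to the topic

def handle_train(config: dict) -> ModelClass:
    model_cfg = config["pipeline_config"]["model"]
    data_cfg = config["pipeline_config"]["data"]
    model = ModelClass(version=model_cfg["version"],
                       **model_cfg["hyperparameters"])    # instantiate model class
    sample = load_sample(data_cfg["location"]["source"],
                         data_cfg.get("prep_config", {})) # read data sample into memory
    model.fit(sample)                                     # perform model training
    save_artifact(model, run_id=config["run_id"])         # save model
    publish("training-complete",
            json.dumps({"run_id": config["run_id"]}))     # push a notification
    return model
```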
- Upon invocation, validate input config
{
"project_id": String ("UUID4, ML project ID"),
"run_id": String ("UUID4, pipeline run ID"),
"pipeline_config": {
"train_id": String ("UUID4, ML experiment ID"),
"data": {
"location": {
"source": "DATA/SAMPLE/LOCATION",
"destination": "DATA/SAMPLE/LOCATION",
},
},
},
}
- Read model metadata
- Instantiate model class
- Read dataset
- Run prediction
- Output results
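The serving steps above can be sketched the same way. `read_metadata`, `ModelClass`, `read_dataset`, and `write_results` are hypothetical stand-ins (stubbed here) for the service's real metadata store, model package, and storage clients:

```python
def read_metadata(train_id):
    # stub: would look up the experiment's model metadata by train_id
    return {"version": "1.0", "hyperparameters": {}}

class ModelClass:
    """Stand-in for the model class resolved from the metadata."""
    def __init__(self, version, **hyperparameters):
        self.version = version

    def predict(self, dataset):
        return [0 for _ in dataset]  # stub: real inference happens here

def read_dataset(source):
    return [{"feature": 1.0}, {"feature": 2.0}]  # stub: would read from `source`

def write_results(destination, predictions):
    return predictions  # stub: would write predictions to `destination`

def handle_predict(config: dict):
    pipeline = config["pipeline_config"]
    meta = read_metadata(pipeline["train_id"])       # read model metadata
    model = ModelClass(version=meta["version"],
                       **meta["hyperparameters"])    # instantiate model class
    dataset = read_dataset(
        pipeline["data"]["location"]["source"])      # read dataset
    predictions = model.predict(dataset)             # run prediction
    return write_results(
        pipeline["data"]["location"]["destination"],
        predictions)                                 # output results
```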
The serving service is deployed separately, with the model fixed at deployment time.