diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 4e97bf2..ca5d38c 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
     hooks:
       - id: black-jupyter
   - repo: https://github.com/kynan/nbstripout
-    rev: "0.6.1"
+    rev: "0.7.1"
     hooks:
       - id: nbstripout
   - repo: https://github.com/hadialqattan/pycln
diff --git a/docs/tutorials/first.ipynb b/docs/tutorials/first.ipynb
index 9aa422f..28c3451 100644
--- a/docs/tutorials/first.ipynb
+++ b/docs/tutorials/first.ipynb
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "ad9d5df3",
+   "id": "0",
    "metadata": {},
    "source": [
     "(first)=\n",
@@ -20,7 +20,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "975491fc",
+   "id": "1",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -56,7 +56,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "dc91b632",
+   "id": "2",
    "metadata": {},
    "source": [
     "Now, let's fit this dataset using a mixture of `SHOTerm` terms: one quasi-periodic component and one non-periodic component.\n",
@@ -66,7 +66,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "df8082ee",
+   "id": "3",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -89,7 +89,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "4f6a8dfd",
+   "id": "4",
    "metadata": {},
    "source": [
     "Let's look at the underlying power spectral density of this initial model:"
@@ -98,7 +98,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "e555464e",
+   "id": "5",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -122,7 +122,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "1d88da99",
+   "id": "6",
    "metadata": {},
    "source": [
     "And then we can also plot the prediction that this model makes for the missing data and compare it to the truth:"
@@ -131,7 +131,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "4c549946",
+   "id": "7",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -158,7 +158,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "3b249feb",
+   "id": "8",
    "metadata": {},
    "source": [
     "Ok, that looks pretty terrible, but we can get a better fit by numerically maximizing the likelihood as described in the following section.\n",
@@ -172,7 +172,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "3b28a76a",
+   "id": "9",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -202,7 +202,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "9af1515e",
+   "id": "10",
    "metadata": {},
    "source": [
     "Now let's make the same plots for the maximum likelihood model:"
@@ -211,7 +211,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "63ffb7a6",
+   "id": "11",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -226,7 +226,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "31c1fb90",
+   "id": "12",
    "metadata": {},
    "source": [
     "These predictions are starting to look much better!\n",
@@ -241,7 +241,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "34d4103a",
+   "id": "13",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -270,7 +270,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "45c9aa84",
+   "id": "14",
    "metadata": {},
    "source": [
     "After running our MCMC, we can plot the predictions that the model makes for a handful of samples from the chain.\n",
@@ -280,7 +280,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "86bc6816",
+   "id": "15",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -297,7 +297,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "d443d8ed",
+   "id": "16",
    "metadata": {},
    "source": [
     "Similarly, we can plot the posterior expectation for the power spectral density:"
@@ -306,7 +306,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "6336d052",
+   "id": "17",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -325,7 +325,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "f4b49974",
+   "id": "18",
    "metadata": {},
    "source": [
     "## Posterior inference using PyMC\n",
@@ -336,7 +336,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "7261591c",
+   "id": "19",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -382,7 +382,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "e82ad4fc",
+   "id": "20",
    "metadata": {},
    "source": [
     "Like before, we can plot the posterior estimate of the power spectrum to show that the results are qualitatively similar:"
@@ -391,7 +391,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "85c008c7",
+   "id": "21",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -410,7 +410,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "81e50441",
+   "id": "22",
    "metadata": {},
    "source": [
     "## Posterior inference using numpyro\n",
@@ -421,7 +421,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "c7840a39",
+   "id": "23",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -479,7 +479,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "6f3abdcf",
+   "id": "24",
    "metadata": {},
    "source": [
     "This runtime was similar to the PyMC result from above, and (as we'll see below) the convergence is also similar.\n",
@@ -491,7 +491,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "0f58f1e2",
+   "id": "25",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -510,7 +510,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "2781b23e",
+   "id": "26",
    "metadata": {},
    "source": [
     "## Comparison\n",
@@ -522,7 +522,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "e29ff05b",
+   "id": "27",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -574,7 +574,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "4e7f5ac4",
+   "id": "28",
    "metadata": {},
    "source": [
     "That looks pretty consistent.\n",
@@ -585,7 +585,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "c1b8e18a",
+   "id": "29",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -606,7 +606,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "0969c015",
+   "id": "30",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -627,7 +627,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "337f182b",
+   "id": "31",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -647,7 +647,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "835b4542",
+   "id": "32",
    "metadata": {},
    "source": [
     "Overall these results are consistent, but the $\hat{R}$ values are a bit high for the emcee run, so I'd probably run that for longer.\n",