Commit 8a7ffe6

Fix conflict with master.

Matthew Kerr committed Nov 22, 2022
2 parents 0d797b4 + b43dbf4
Showing 159 changed files with 4,640 additions and 1,439 deletions.
32 changes: 28 additions & 4 deletions CHANGELOG.md
@@ -6,12 +6,36 @@ and this project, at least loosely, adheres to [Semantic Versioning](https://sem

## Unreleased
### Changed
- Can extract single TOAs as length=1 table
- Minimum supported versions updated to numpy 1.18.5, matplotlib 3.2.0
- `introduces_correlated_errors` is now a class attribute of `NoiseComponent`s
### Added
- Can ignore pulse_number column on TOA read or write (to help merging)
- Can add in missing columns when merging unless told not to (see the sketch after this section)
- Can initialize observatories with lat/lon/altitude
- Can output observatories as JSON
- Can extract single TOAs as length=1 table
- SWM=1 models can be used
- SWX models to fit the solar wind over various intervals
- Added a pintk helper function to delete jumped TOAs/remove existing jumps. Fixed indexing issue for single clicks.
- Added PLDMNoise component which allows modeling of stochastic DM variations as red noise with a power law spectrum
- Added Bayesian interface (Timing model and white noise only)
- New tests to improve test coverage
- Documentation: Instructions to checkout development branch
- Clock file for effix
### Fixed
- global clock files now emit a warning instead of an exception if expired and the download fails
- dmxparse outputs to dmxparse.out if save=True
- Excluded noise parameters from the design matrix.
- Split the computation of correlated noise basis matrix and weights into two functions.
- Fixed bug in combining design matrices
- Fixed bug in dmxparse
- Fixed bug in photonphase with polycos
- Made clock file loading log entries a little friendlier
- Typo fixes in documentation
### Removed
- Removed obsolete `ltinterface` module
- Removed old and WIP functions from `gridutils` module
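
A minimal sketch of two of the "Added" items above (illustrative only: the `.tim` file names are placeholders, and exact signatures should be checked against the PINT API docs):

```python
from pint.toa import get_TOAs, merge_TOAs

t1 = get_TOAs("set1.tim")  # placeholder .tim files
t2 = get_TOAs("set2.tim")

# Missing columns (e.g. pulse_number) can be filled in during the merge
# unless explicitly disallowed.
merged = merge_TOAs([t1, t2])

# A single TOA can be extracted as a length=1 table.
single = merged[0]
```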


## [0.9.1] 2022-08-12
### Changed
@@ -36,7 +60,7 @@ and this project, at least loosely, adheres to [Semantic Versioning](https://sem
### Changed
- `model.phase()` now defaults to `abs_phase=True` when TZR* params are in the model
- TOAs no longer need to be grouped by observatory
-- removed explicit download of IERS and leapsecond data (handled now by astropy)
+- removed explicit download of IERS and leap second data (handled now by astropy)
- The default version of TT(BIPM) uses BIPM2021
- ClockFile no longer uses metaclass magic or many subclasses, and have friendly names for use in messages
- `model.setup()` now gets called automatically after removing a parameter as part of `remove_param`
@@ -181,7 +205,7 @@ and this project, at least loosely, adheres to [Semantic Versioning](https://sem
## [0.8] - 2020-12-21
### Fixed
- Fixed an indentation bug in Wideband TOA fitting.
-- The CombinedResidual class has API change on the get_data_error(), child residueal class in save as dictionary.
+- The CombinedResidual class has API change on the get_data_error(), child residual class in save as dictionary.
### Removed
- Removed Python 2.7 support from travis and tox testing suites and from requirements files
- Removed "landscape" code checker since that package is no longer supported by its author
@@ -276,7 +300,7 @@ and this project, at least loosely, adheres to [Semantic Versioning](https://sem
- Add function to compute epoch averaged residuals based on ECORR
- Added model comparison pretty printer
- Added functions to change PEPOCH, DMEPOCH, and binary epoch
-- Aded dmxparse function
+- Added dmxparse function
- Added code to ensure that IERS B table is up to date
- Added fitter.print_summary()
### Changed
2 changes: 1 addition & 1 deletion CONTRIBUTING.rst
@@ -134,7 +134,7 @@ Before you submit a pull request, check that it meets these guidelines:
1. Try to write clear :ref:`pythonic` code, follow our :ref:`CodingStyle`, and think
about how others might use your new code.
2. The pull request should include tests that cover both the expected
-   behaviour and sensible error reporting when given bad input.
+   behavior and sensible error reporting when given bad input.
3. If the pull request adds or changes functionality, the docs should
be updated. Put your new functionality into a function with a
docstring. Check the HTML documentation produced by ``make docs``
9 changes: 8 additions & 1 deletion HISTORY.rst
@@ -1,4 +1,11 @@
History
=======

-TEMPO was originally written in the 1980s, aiming for microsecond-level accuracy. TEMPO2 was written more recently, with an attention to nanosecond-level effects. Both are still in current use. But TEMPO is in FORTRAN, TEMPO2 is in C++, and neither is easy to extend for use in tasks different from plain pulsar timing. Most of TEMPO2 is also a direct conversion of TEMPO, so many bugs from TEMPO were carried over to TEMPO2. PINT was created to be, as far as possible, an independent reimplementation based on existing libraries - notably astropy - and to be a flexible toolkit for working with pulsar timing models and data.
+TEMPO was originally written in the 1980s, aiming for microsecond-level accuracy.
+TEMPO2 was written more recently, with an attention to nanosecond-level effects.
+Both are still in current use. But TEMPO is in FORTRAN, TEMPO2 is in C++, and neither
+is easy to extend for use in tasks different from plain pulsar timing. Most of TEMPO2
+is also a direct conversion of TEMPO, so many bugs from TEMPO were carried over to
+TEMPO2. PINT was created to be, as far as possible, an independent re-implementation
+based on existing libraries - notably astropy - and to be a flexible toolkit for
+working with pulsar timing models and data.
4 changes: 2 additions & 2 deletions README.rst
@@ -92,11 +92,11 @@ the source code, example notebooks, and tests, you can install from source, by
cloning the source repository from GitHub, then install
it, ensuring that all dependencies needed to run PINT are available::

-    $ git checkout https://github.com/nanograv/PINT.git
+    $ git clone https://github.com/nanograv/PINT.git
$ cd PINT
$ pip install .

-Complete installation instructions are availble here_.
+Complete installation instructions are available here_.

.. _here: https://nanograv-pint.readthedocs.io/en/latest/installation.html

4 changes: 2 additions & 2 deletions docs/coding-style.rst
@@ -26,7 +26,7 @@ PINT coding style
- Every public function or class should have a docstring. It
should be in the correct format (numpy guidelines_).
- Do not abbreviate public names (for example, use "Residuals"
not "resids"). If *aboslutely* necessary, make certain that there
not "resids"). If *absolutely* necessary, make certain that there
is One True Abbreviation and that it is used everywhere.
- Raise an exception if the code cannot continue and produce
correct results.
@@ -44,7 +44,7 @@ PINT coding style
available the ``pulsar_mjd`` time format.
- PINT should work with python 2.7, 3.5, 3.6, 3.7, and later. There
is no need to maintain compatibility with older versions, so use
-  approprate modern constructs like sets, iterators, and the string
+  appropriate modern constructs like sets, iterators, and the string
``.format()`` method.
- Use six_ to manage python 2/3 compatibility problems.

2 changes: 1 addition & 1 deletion docs/command-line.rst
@@ -25,7 +25,7 @@ pintk
pintbary
--------

-``pintbary`` does quick barycentering calculations, convering an
+``pintbary`` does quick barycentering calculations, converting an
MJD(UTC) on the command line to TDB with barycentric delays applied. The
position used for barycentering can be read from a par file or from the
command line
2 changes: 1 addition & 1 deletion docs/development-setup.rst
@@ -30,7 +30,7 @@ able to do the following:
- Jump to the definition of a function, class, or method with a keypress.
- Obey ``.editorconfig`` settings.

-A little Googling should reveal how to get all this working in your favourite
+A little Googling should reveal how to get all this working in your favorite
editor, but if you have some helpful links for a particular editor feel free to
add them to the documentation right here.

2 changes: 1 addition & 1 deletion docs/editing-documentation.rst
@@ -87,7 +87,7 @@ You may also want to run, from time to time::

This will try to make sure that all the external links in the documentation
still work, though of course they can't verify that the page that comes up is
-still the indended one and not `something else`_.
+still the intended one and not `something else`_.

.. _Sphinx: http://www.sphinx-doc.org/en/master/
.. _reStructuredText: http://docutils.sourceforge.net/rst.html
204 changes: 204 additions & 0 deletions docs/examples/bayesian-example-NGC6440E.py
@@ -0,0 +1,204 @@
# %% [markdown]
# # PINT Bayesian Interface Examples

# %%
from pint.models import get_model, get_model_and_toas
from pint.bayesian import BayesianTiming
from pint.config import examplefile
from pint.models.priors import Prior
from scipy.stats import uniform

# %%
import numpy as np
import emcee
import nestle
import corner
import io
import matplotlib.pyplot as plt

# %%
# Read the par and tim files
parfile = examplefile("NGC6440E.par.good")
timfile = examplefile("NGC6440E.tim")
model, toas = get_model_and_toas(parfile, timfile)

# %%
# This is optional, but the likelihood function behaves better if
# we have the pulse numbers. Make sure that your timing solution is
# phase connected before doing this.
toas.compute_pulse_numbers(model)

# %%
# Now set the priors.
# I am cheating here by setting the priors around the maximum likelihood estimates.
# This is a bad idea for real datasets and can bias the estimates. I am doing this
# here just to make everything finish faster. In the real world, these priors should
# be informed by, e.g., previous (independent) timing solutions, pulsar search results,
# VLBI localization, etc. Note that unbounded uniform priors don't work here.
for par in model.free_params:
    param = getattr(model, par)
    param_min = float(param.value - 10 * param.uncertainty_value)
    param_span = float(20 * param.uncertainty_value)
    param.prior = Prior(uniform(param_min, param_span))
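
# %%
# (An illustrative check added to the original example:) confirm that every
# free parameter now carries a prior.
for par in model.free_params:
    print(par, getattr(model, par).prior)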

# %%
# Now let us create a BayesianTiming object. This is a wrapper around the
# PINT API that provides lnlikelihood, lnprior, and prior_transform
# functions which can be passed to a sampler of your choice.
bt = BayesianTiming(model, toas, use_pulse_numbers=True)

# %%
print("Number of parameters = ", bt.nparams)
print("Likelihood method = ", bt.likelihood_method)

# %% [markdown]
# ## MCMC sampling using emcee

# %%
nwalkers = 20
sampler = emcee.EnsembleSampler(nwalkers, bt.nparams, bt.lnposterior)

# %%
# Choose the MCMC start points in the vicinity of the maximum likelihood estimate
# available in the `model` object. This helps the MCMC chains converge faster.
# We can also draw these points from the prior, but the MCMC chains will converge
# slower in that case.
maxlike_params = np.array([param.value for param in bt.params], dtype=float)
maxlike_errors = [param.uncertainty_value for param in bt.params]
start_points = (
    np.repeat([maxlike_params], nwalkers).reshape(bt.nparams, nwalkers).T
    + np.random.randn(nwalkers, bt.nparams) * maxlike_errors
)
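
# %%
# Quick sanity check (an addition to the original example): each of the
# nwalkers walkers should get one start point per parameter.
assert start_points.shape == (nwalkers, bt.nparams)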

# %%
# Use longer chain_length for real runs. It is kept small here so that
# the sampling finishes quickly (and because I know the burn in is short
# because of the cheating priors above).
print("Running emcee...")
chain_length = 1000
sampler.run_mcmc(
    start_points,
    chain_length,
    progress=True,
)
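
# %%
# Optional diagnostic (not in the original example): emcee's mean acceptance
# fraction. Values very close to 0 or 1 usually signal an unhealthy chain.
print("Mean acceptance fraction:", np.mean(sampler.acceptance_fraction))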

# %%
# Merge all the chains together after discarding the first 100 samples as 'burn-in'.
# The burn-in should be decided after looking at the chains in the real world.
samples_emcee = sampler.get_chain(flat=True, discard=100)

# %%
# Plot the MCMC chains to make sure that the burn-in has been removed properly.
# Otherwise, go back and discard more points.
for idx, param_chain in enumerate(samples_emcee.T):
    plt.subplot(bt.nparams, 1, idx + 1)
    plt.plot(param_chain, label=bt.param_labels[idx])
    plt.legend()
plt.show()

# %%
# Plot the posterior distribution.
fig = corner.corner(samples_emcee, labels=bt.param_labels)
plt.show()

# %% [markdown]
# ## Nested sampling with nestle

# %% [markdown]
# Nested sampling computes the Bayesian evidence along with posterior samples.
# This allows us to compare two models. Let us compare the model above with
# and without an EFAC.
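
# %% [markdown]
# To make the comparison concrete: if $Z_1$ and $Z_2$ are the evidences of the
# two models, the Bayes factor in favor of model 1 is
# $B_{12} = Z_1 / Z_2 = \exp(\ln Z_1 - \ln Z_2)$, which is what the
# `np.exp(...)` call near the end of this script computes from the `logz` values.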

# %%
# Let us run the model without EFAC first. We can reuse the `bt` object from before.

# Nestle is really simple :)
# method='multi' runs the MultiNest algorithm.
# `npoints` is the number of live points.
# `dlogz` is the target accuracy in the computed Bayesian evidence.
# Increasing `npoints` or decreasing `dlogz` gives more accurate results,
# but at the cost of time.
print("Running nestle...")
result_nestle_1 = nestle.sample(
    bt.lnlikelihood,
    bt.prior_transform,
    bt.nparams,
    method="multi",
    npoints=150,
    dlogz=0.5,
    callback=nestle.print_progress,
)

# %%
# Plot the posterior
# The nested samples come with weights, which must be taken into account
# while plotting.
fig = corner.corner(
    result_nestle_1.samples,
    weights=result_nestle_1.weights,
    labels=bt.param_labels,
    range=[0.999] * bt.nparams,
)
plt.show()

# %% [markdown]
# Let us create a new model with an EFAC applied to all TOAs (all
# TOAs in this dataset are from GBT).

# %%
parfile = str(model) # casting the model to str gives the par file representation.
parfile += "EFAC TEL gbt 1 1" # Add an EFAC to the par file and make it unfrozen.
model2 = get_model(io.StringIO(parfile))

# %%
# Now set the priors.
# Again, don't do this with real data. Use uninformative priors or priors
# motivated by previous experiments. This is done here with the sole purpose
# of making the run finish fast. Let us try this with the prior_info option now.
prior_info = dict()
for par in model2.free_params:
    param = getattr(model2, par)
    param_min = float(param.value - 10 * param.uncertainty_value)
    param_max = float(param.value + 10 * param.uncertainty_value)
    prior_info[par] = {"distr": "uniform", "pmin": param_min, "pmax": param_max}

prior_info["EFAC1"] = {"distr": "normal", "mu": 1, "sigma": 0.1}

# %%
bt2 = BayesianTiming(model2, toas, use_pulse_numbers=True, prior_info=prior_info)
print(bt2.likelihood_method)

# %%
result_nestle_2 = nestle.sample(
    bt2.lnlikelihood,
    bt2.prior_transform,
    bt2.nparams,
    method="multi",
    npoints=150,
    dlogz=0.5,
    callback=nestle.print_progress,
)

# %%
# Plot the posterior.
# The EFAC looks consistent with 1.
fig2 = corner.corner(
    result_nestle_2.samples,
    weights=result_nestle_2.weights,
    labels=bt2.param_labels,
    range=[0.999] * bt2.nparams,
)
plt.show()

# %% [markdown]
# Now let us look at the evidences and compute the Bayes factor.

# %%
print(f"Evidence without EFAC : {result_nestle_1.logz} +/- {result_nestle_1.logzerr}")
print(f"Evidence with EFAC : {result_nestle_2.logz} +/- {result_nestle_2.logzerr}")

bf = np.exp(result_nestle_1.logz - result_nestle_2.logz)
print(f"Bayes factor : {bf} (in favor of no EFAC)")

# %% [markdown]
# The Bayes factor tells us that the EFAC is unnecessary for this dataset.