pmrf.fitting.results.AnestheticResults

class pmrf.fitting.results.AnestheticResults(measured=None, initial_model=None, fitted_model=None, solver_results=None, settings=None)[source]

Bases: BayesianResults

Parameters:
  • measured (Network | dict[str, Network] | None)

  • initial_model (Model | None)

  • fitted_model (Model | None)

  • solver_results (Any)

  • settings (FitSettings | None)

__init__(measured=None, initial_model=None, fitted_model=None, solver_results=None, settings=None)
Parameters:
  • measured (Network | dict[str, Network] | None)

  • initial_model (Model | None)

  • fitted_model (Model | None)

  • solver_results (Any)

  • settings (FitSettings | None)

Return type:

None
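
All of the fields default to None (see the signature above), so an instance can be constructed empty; in normal use a pmrf fitting routine returns it fully populated. A minimal sketch, assuming the import path shown at the top of this page and no additional initialisation requirements:

  from pmrf.fitting.results import AnestheticResults

  res = AnestheticResults()               # every field defaults to None
  print(res.measured, res.fitted_model)   # both None until a fit populates them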

Methods

decode_solver_results(group)

encode_solver_results(group)

load_hdf(path)

prior_samples()

prior_weights()

samples()

save_hdf(path[, metadata])

weights()

Attributes

fitted_model

initial_model

measured

nested_samples

settings

solver_results

class NestedSamples(*args, **kwargs)

Bases: Samples

Storage and plotting tools for Nested Sampling samples.

We extend the Samples class with the additional methods:

  • self.live_points(logL)

  • self.set_beta(beta)

  • self.prior()

  • self.posterior_points(beta)

  • self.prior_points()

  • self.stats()

  • self.logZ()

  • self.D_KL()

  • self.d()

  • self.recompute()

  • self.gui()

  • self.importance_sample()

Parameters:
  • data (np.array) – Coordinates of samples. shape = (nsamples, ndims).

  • columns (list(str)) – reference names of parameters

  • logL (np.array) – loglikelihoods of samples.

  • logL_birth (np.array or int) – birth loglikelihoods, or number of live points.

  • labels (dict) – optional mapping from column names to plot labels

  • label (str) – Legend label. default: basename of root

  • beta (float) – thermodynamic inverse temperature. default: 1.

  • logzero (float) – The threshold for log(0) values assigned to rejected sample points. Anything equal to or below this value is set to -np.inf. default: -1e30
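
As an illustrative sketch of the constructor documented above, a toy NestedSamples object can be built directly from arrays; in pmrf this object is normally produced by the solver and exposed through AnestheticResults.nested_samples. The import path from the anesthetic package is assumed, and the birth contours below mimic a run with a constant number of live points (per the parameter list, an integer number of live points may be passed instead):

  import numpy as np
  from anesthetic import NestedSamples   # import path assumed

  rng = np.random.default_rng(0)
  nlive, ndims = 125, 2
  data = rng.uniform(size=(1000, ndims))              # toy "dead point" coordinates
  logL = -np.sum((data - 0.5) ** 2, axis=1) / 0.01    # toy Gaussian loglikelihood
  order = np.argsort(logL)                            # dead points in increasing logL
  data, logL = data[order], logL[order]
  # birth contours of a run with a constant number of live points
  logL_birth = np.concatenate([np.full(nlive, -np.inf), logL[:-nlive]])

  ns = NestedSamples(data=data, columns=['x0', 'x1'],
                     logL=logL, logL_birth=logL_birth)
  print(ns.logZ(), ns.D_KL())                         # toy evidence and KL divergence

Because the points are random rather than drawn from a genuine nested sampling run, the numbers are only meaningful as a demonstration of the API. The ns object built here is reused in the sketches further down this page.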

D(nsamples=None)

D_KL(nsamples=None, beta=None)

Kullback–Leibler divergence.

Parameters:
  • nsamples (int, optional) –

    • If nsamples is not supplied, calculate mean value

    • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

    • If nsamples is array, nsamples is assumed to be logw

  • beta (float, array-like, optional) – inverse temperature(s) beta=1/kT. Default self.beta

Returns:

  • if nsamples is array-like – pandas.Series, index nsamples.columns

  • elif beta is scalar and nsamples is None – float

  • elif beta is array-like and nsamples is None – pandas.Series, index beta

  • elif beta is scalar and nsamples is int – pandas.Series, index range(nsamples)

  • elif beta is array-like and nsamples is int – pandas.Series, pandas.MultiIndex the product of beta and range(nsamples)
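
For example, using the toy ns object from the constructor sketch above (or one obtained via AnestheticResults.nested_samples):

  dkl_mean = ns.D_KL()               # mean KL divergence from prior to posterior
  dkl_dist = ns.D_KL(nsamples=500)   # pandas.Series of 500 draws from its distribution
  print(dkl_mean, dkl_dist.std())    # point estimate and its sampling uncertainty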

property beta

Thermodynamic inverse temperature.

contour(logL=None)

Convert contour from (index or None) to a float loglikelihood.

Convention is that live points are inclusive of the contour.

Helper function for:
  • NestedSamples.live_points,

  • NestedSamples.dead_points,

  • NestedSamples.truncate.

Parameters:

logL (float or int, optional) – Loglikelihood or iteration number. If not provided, return the contour containing the last set of live points.

Returns:

logL – Loglikelihood of contour

Return type:

float

d(nsamples=None)

d_G(nsamples=None, beta=None)

Bayesian model dimensionality.

Parameters:
  • nsamples (int, optional) –

    • If nsamples is not supplied, calculate mean value

    • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

    • If nsamples is array, nsamples is assumed to be logw

  • beta (float, array-like, optional) – inverse temperature(s) beta=1/kT. Default self.beta

Returns:

  • if nsamples is array-like – pandas.Series, index nsamples.columns

  • elif beta is scalar and nsamples is None – float

  • elif beta is array-like and nsamples is None – pandas.Series, index beta

  • elif beta is scalar and nsamples is int – pandas.Series, index range(nsamples)

  • elif beta is array-like and nsamples is int – pandas.Series, pandas.MultiIndex the product of beta and range(nsamples)

dead_points(logL=None)

Get the dead points at a given contour.

Convention is that dead points are exclusive of the contour.

Parameters:

logL (float or int, optional) – Loglikelihood or iteration number to return dead points. If not provided, return the last set of dead points.

Returns:

dead_points

Dead points at either:
  • contour logL (if input is float)

  • ith iteration (if input is integer)

  • last set of dead points if no argument provided

Return type:

Samples

dlogX(nsamples=None)

gui(params=None)

Construct a graphical user interface for viewing samples.

importance_sample(logL_new, action='add', inplace=False)

Perform importance re-weighting on the log-likelihood.

Parameters:
  • logL_new (np.array) – New log-likelihood values. Should have the same shape as logL.

  • action (str, default='add') –

    Can be any of {‘add’, ‘replace’, ‘mask’}.

    • add: Add the new logL_new to the current logL.

    • replace: Replace the current logL with the new logL_new.

    • mask: treat logL_new as a boolean mask and only keep the corresponding (True) samples.

  • inplace (bool, optional) – Indicates whether to modify the existing array, or return a new frame with importance sampling applied. default: False

Returns:

samples – Importance re-weighted samples.

Return type:

NestedSamples
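
A hedged sketch, again using the toy ns object from the constructor example above; the extra likelihood term is purely illustrative:

  import numpy as np

  logL_extra = -0.5 * (np.asarray(ns['x0']) - 0.5) ** 2 / 0.1 ** 2   # extra Gaussian term in x0
  ns2 = ns.importance_sample(logL_extra, action='add')               # new, re-weighted run
  print(ns.logZ(), ns2.logZ())                                       # evidence before and after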

live_points(logL=None)

Get the live points within a contour.

Parameters:

logL (float or int, optional) – Loglikelihood or iteration number to return live points. If not provided, return the last set of active live points.

Returns:

live_points

Live points at either:
  • contour logL (if input is float)

  • ith iteration (if input is integer)

  • last set of live points if no argument provided

Return type:

Samples

logL_P(nsamples=None, beta=None)

Posterior averaged loglikelihood.

Parameters:
  • nsamples (int, optional) –

    • If nsamples is not supplied, calculate mean value

    • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

    • If nsamples is array, nsamples is assumed to be logw

  • beta (float, array-like, optional) – inverse temperature(s) beta=1/kT. Default self.beta

Returns:

  • if nsamples is array-like – pandas.Series, index nsamples.columns

  • elif beta is scalar and nsamples is None – float

  • elif beta is array-like and nsamples is None – pandas.Series, index beta

  • elif beta is scalar and nsamples is int – pandas.Series, index range(nsamples)

  • elif beta is array-like and nsamples is int – pandas.Series, pandas.MultiIndex the product of beta and range(nsamples)

logX(nsamples=None)

Log-Volume.

The log of the prior volume contained within each iso-likelihood contour.

Parameters:

nsamples (int, optional) –

  • If nsamples is not supplied, calculate mean value

  • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

Returns:

  • if nsamples is None – WeightedSeries like self

  • elif nsamples is int – WeightedDataFrame like self, columns range(nsamples)

logZ(nsamples=None, beta=None)

Log-Evidence.

Parameters:
  • nsamples (int, optional) –

    • If nsamples is not supplied, calculate mean value

    • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

    • If nsamples is array, nsamples is assumed to be logw

  • beta (float, array-like, optional) – inverse temperature(s) beta=1/kT. Default self.beta

Returns:

  • if nsamples is array-like – pandas.Series, index nsamples.columns

  • elif beta is scalar and nsamples is None – float

  • elif beta is array-like and nsamples is None – pandas.Series, index beta

  • elif beta is scalar and nsamples is int – pandas.Series, index range(nsamples)

  • elif beta is array-like and nsamples is int – pandas.Series, pandas.MultiIndex the product of beta and range(nsamples)
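
For example, with the toy ns object from the constructor sketch above:

  import numpy as np

  logZ_mean = ns.logZ()                            # mean log-evidence
  logZ_dist = ns.logZ(nsamples=1000)               # pandas.Series of 1000 realisations
  print(logZ_mean, logZ_dist.std())                # estimate and its error bar
  print(ns.logZ(beta=np.array([0.0, 0.5, 1.0])))   # pandas.Series indexed by beta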

logdX(nsamples=None)

Compute volume of shell of loglikelihood.

Parameters:

nsamples (int, optional) –

  • If nsamples is not supplied, calculate mean value

  • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

Returns:

  • if nsamples is None – WeightedSeries like self

  • elif nsamples is int – WeightedDataFrame like self, columns range(nsamples)

logw(nsamples=None, beta=None)

Log-nested sampling weight.

The logarithm of the (unnormalised) sampling weight log(L**beta*dX).

Parameters:
  • nsamples (int, optional) –

    • If nsamples is not supplied, calculate mean value

    • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

    • If nsamples is array, nsamples is assumed to be logw and returned (implementation convenience functionality)

  • beta (float, array-like, optional) – inverse temperature(s) beta=1/kT. Default self.beta

Returns:

  • if nsamples is array-like – WeightedDataFrame equal to nsamples

  • elif beta is scalar and nsamples is None – WeightedSeries like self

  • elif beta is array-like and nsamples is None – WeightedDataFrame like self, columns of beta

  • elif beta is scalar and nsamples is int – WeightedDataFrame like self, columns of range(nsamples)

  • elif beta is array-like and nsamples is int – WeightedDataFrame like self, MultiIndex columns the product of beta and range(nsamples)

ns_output(*args, **kwargs)

posterior_points(beta=1)

Get equally weighted posterior points at temperature beta.

prior(inplace=False)

Re-weight samples at infinite temperature to get prior samples.

prior_points(params=None)

Get equally weighted prior points.

recompute(logL_birth=None, inplace=False)

Re-calculate the nested sampling contours and live points.

Parameters:
  • logL_birth (array-like or int, optional) –

    • array-like: the birth contours.

    • int: the number of live points.

    • default: use the existing birth contours to compute nlive

  • inplace (bool, default=False) – Indicates whether to modify the existing array, or return a new frame with contours resorted and nlive recomputed

set_beta(beta, inplace=False)

Change the inverse temperature.

Parameters:
  • beta (float) – Inverse temperature to set. (beta=0 corresponds to the prior distribution.)

  • inplace (bool, default=False) – Indicates whether to modify the existing array, or return a copy with the inverse temperature changed.
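
A short sketch of moving between temperatures, using the toy ns object from the constructor example above:

  prior = ns.prior()                      # re-weighted at beta = 0, i.e. the prior
  warm = ns.set_beta(0.5)                 # tempered posterior at beta = 0.5
  post_pts = ns.posterior_points(beta=1)  # equally weighted posterior points
  print(prior['x0'].mean(), warm['x0'].mean(), ns['x0'].mean())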

stats(nsamples=None, beta=None, norm=None)

Compute Nested Sampling statistics.

Using nested sampling we can compute:

  • logZ: Bayesian evidence

    \[\log Z = \int L \pi d\theta\]
  • D_KL: Kullback–Leibler divergence

    \[D_{KL} = \int P \log(P / \pi) d\theta\]
  • logL_P: posterior averaged log-likelihood

    \[\langle\log L\rangle_P = \int P \log L d\theta\]
  • d_G: Gaussian model dimensionality (or posterior variance of the log-likelihood)

    \[d_G/2 = \langle(\log L)^2\rangle_P - \langle\log L\rangle_P^2\]

    see Handley and Lemos (2019) for more details on model dimensionalities.

(Note that all of these are available as individual functions with the same signature.)

In addition to point estimates, nested sampling provides an error bar, or more generally samples from a (correlated) distribution over these quantities. Samples from this distribution can be computed by providing an integer nsamples.

Nested sampling, being an athermal algorithm, can also produce these quantities as a function of the inverse thermodynamic temperature beta, which may be passed as an array. If nsamples is also provided, a MultiIndex DataFrame is generated.

These quantities obey Occam's razor equation:

\[\log Z = \langle\log L\rangle_P - D_{KL},\]

which splits a model’s quality logZ into a goodness-of-fit logL_P and a complexity penalty D_KL. See Hergt et al. (2021) for more detail.

Parameters:
  • nsamples (int, optional) –

    • If nsamples is not supplied, calculate mean value

    • If nsamples is integer, draw nsamples from the distribution of values inferred by nested sampling

  • beta (float, array-like, optional) – inverse temperature(s) beta=1/kT. Default self.beta

  • norm (Series, Samples, optional) – NestedSamples.stats() output used for normalisation. Can be either a Series of mean values or Samples produced with matching nsamples and beta. In addition to the columns [‘logZ’, ‘D_KL’, ‘logL_P’, ‘d_G’], this adds the normalised versions [‘Delta_logZ’, ‘Delta_D_KL’, ‘Delta_logL_P’, ‘Delta_d_G’].

Returns:

  • if beta is scalar and nsamples is None – Series, index [‘logZ’, ‘d_G’, ‘D_KL’, ‘logL_P’]

  • elif beta is scalar and nsamples is int – Samples, index range(nsamples), columns [‘logZ’, ‘d_G’, ‘D_KL’, ‘logL_P’]

  • elif beta is array-like and nsamples is None – Samples, index beta, columns [‘logZ’, ‘d_G’, ‘D_KL’, ‘logL_P’]

  • elif beta is array-like and nsamples is int – Samples, index pandas.MultiIndex the product of beta and range(nsamples), columns [‘logZ’, ‘d_G’, ‘D_KL’, ‘logL_P’]
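
Putting this together with the toy ns object from the constructor sketch above:

  s = ns.stats()                     # point estimates of logZ, D_KL, logL_P and d_G
  samps = ns.stats(nsamples=500)     # sampling distribution, one row per realisation
  print(s)
  print(samps.std())                 # error bars on each of the four quantities
  print(ns.logZ(), ns.logL_P() - ns.D_KL())   # Occam's razor: these should agree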

truncate(logL=None)

Truncate the run at a given contour.

Returns the union of the live_points and dead_points.

Parameters:

logL (float or int, optional) – Loglikelihood or iteration number to truncate run. If not provided, truncate at the last set of dead points.

Returns:

truncated_run

Run truncated at either:
  • contour logL (if input is float)

  • ith iteration (if input is integer)

  • last set of dead points if no argument provided

Return type:

NestedSamples
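
For example, the toy ns object from the constructor sketch above can be inspected or cut at an intermediate iteration:

  live = ns.live_points(200)     # live points at iteration 200 (inclusive of the contour)
  dead = ns.dead_points(200)     # dead points before that contour (exclusive)
  short = ns.truncate(200)       # the run truncated there: the union of the two
  print(len(live), len(dead), len(short))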

property nested_samples: NestedSamples

samples()[source]

prior_samples()[source]

weights()[source]

prior_weights()[source]
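
As a hypothetical sketch of how these accessors relate, assuming results is a populated AnestheticResults returned by a pmrf fit (the relationships in the comments are inferred from the method names rather than documented here):

  ns = results.nested_samples       # anesthetic NestedSamples for the run
  post = results.samples()          # posterior parameter samples (assumed)
  post_w = results.weights()        # matching posterior weights (assumed)
  pri = results.prior_samples()     # prior parameter samples (assumed)
  pri_w = results.prior_weights()   # matching prior weights (assumed)
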
encode_solver_results(group)[source]
Parameters:

group (Group)

classmethod decode_solver_results(group)[source]
Parameters:

group (Group)

Return type:

Any
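
Finally, a hypothetical save/load round trip; results is again assumed to be a populated instance, the dictionary form of metadata is an assumption based on the save_hdf signature above, and load_hdf is assumed to be a classmethod (as decode_solver_results is):

  from pmrf.fitting.results import AnestheticResults

  results.save_hdf('fit_results.h5', metadata={'note': 'example run'})  # metadata form assumed
  reloaded = AnestheticResults.load_hdf('fit_results.h5')               # classmethod usage assumed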