MIT bag EOS inference pipeline
The MIT bag model, which has commonly been applied to strange quark stars, relates pressure to energy density with the simple equation of state \(p=\frac{\epsilon}{3}-\frac{4B}{3}\). There is only one parameter, the “bag constant” \(B\). This represents the vacuum energy density, which creates a “bag” in which quarks are confined. See Chodos et al. (1974).
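As a quick illustration of this linear relation (separate from the inference workflow below), the following sketch evaluates \(p(\epsilon)\) directly from the formula for an example bag constant; the value \(B = 60\ \mathrm{MeV\,fm^{-3}}\) is only illustrative, and in the package the same relation is provided by MITbag.MITbag_compute_EOS.
[ ]:
import numpy as np

# Illustrative only: the bag-model relation p = epsilon/3 - 4B/3,
# evaluated for an example bag constant (not a fitted value).
B = 60.0                                  # bag constant [MeV fm^-3]
epsilon = np.linspace(4 * B, 10 * B, 7)   # energy density [MeV fm^-3]
p = epsilon / 3 - 4 * B / 3               # pressure [MeV fm^-3]

# The pressure vanishes at epsilon = 4B, which sets the surface of the star.
print(np.c_[epsilon, p])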
In this notebook, we use our Bayesian inference code to constrain the value of \(B\) using observations.
(a) Import packages
[ ]:
import InferenceWorkflow.BayesianSampler as sampler
import InferenceWorkflow.Likelihood as likelihood
import InferenceWorkflow.prior as prior
import math
import numpy as np
import EOSgenerators.MITbag_EOS as MITbag
from TOVsolver.unit import MeV, fm, g_cm_3, dyn_cm_2, km, Msun
import TOVsolver.main as main
(b) Set up priors
Next, we need to set up the priors. We first use a parameters array to specify the variable names; these should be consistent with the names you use for them in the rest of the workflow.
We then define a prior transform function to define the prior. Here cube is an array of random numbers drawn uniformly between 0 and 1. This set-up is the standard convention of the UltraNest package, since we are using UltraNest for nested sampling.
The prior module provides two options: normal_Prior and flat_prior.
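For example (a minimal sketch, not part of the workflow below), both can map a single unit-cube value to a parameter value. The flat_prior call matches its use in the cell below; the normal_Prior call assumes a (mean, sigma, cube value) signature by analogy, so check InferenceWorkflow.prior for the exact form.
[ ]:
import InferenceWorkflow.prior as prior

u = 0.5   # a single unit-cube value in [0, 1]

# Uniform prior between 20 and 100 (same call as used for B below)
B_flat = prior.flat_prior(20, 100, u)

# Gaussian prior centred at 60 with width 10; signature assumed to be
# (mean, sigma, cube value), check InferenceWorkflow.prior to confirm.
B_norm = prior.normal_Prior(60, 10, u)

print(B_flat, B_norm)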
Note that since we are inferring the equation of state from neutron-star mass-radius measurements, the central density of the star must also be sampled. Otherwise the prior would be only partially defined, would not span the full parameter space, and would give results different from a full-scope inference.
Sampling this density from the range allowed by a given EoS is not entirely trivial, because we need to determine the upper limit of the central density of the compact star, and different equations of state predict different upper bounds. We therefore take the EoS parameters drawn from the prior, compute the corresponding mass-radius curve, and find the last stable point of this equation of state (the first mass point at which the derivative of mass with respect to central density becomes negative). The index of that point follows from the length of the accumulated mass array (via len()), and the corresponding density max_d is then used as the upper limit of the central-density range.
[ ]:
parameters = ['B', 'd1']

def prior_transform(cube):
    params = cube.copy()

    # Bag constant B: flat prior between 20 and 100 MeV fm^-3
    params[0] = prior.flat_prior(20, 100, cube[0])
    B = params[0]

    # Compute the MIT bag EoS for this B
    epsilon, p = MITbag.MITbag_compute_EOS(B)

    RFSU2R = []
    MFSU2R = []
    density = np.logspace(14.3, 15.6, 50)

    # Only solve for the mass-radius curve if the EoS is monotonic
    if all(x < y for x, y in zip(epsilon, epsilon[1:])) and all(x < y for x, y in zip(p, p[1:])):
        MR = main.OutputMR("", epsilon, p).T
    else:
        MR = []

    if len(MR) == 0:
        params[1] = 0
        # this line shows how to add one more observation
    else:
        for i in range(len(MR[1])):
            RFSU2R.append(MR[1][i])
            MFSU2R.append(MR[0][i])
            # stop at the maximum-mass (last stable) configuration
            if i > 20 and MR[0][i] - MR[0][i - 1] < 0:
                break

    if len(MFSU2R) == 0:
        params[1] = 0
        # params[2] = 0
        # this line shows how to add one more observation
    else:
        max_index = len(MFSU2R)
        max_d = np.log10(density[max_index - 1])
        # Central density d1: flat prior in log10(density) between 14.3 and max_d
        params[1] = 14.3 + (max_d - 14.3) * cube[1]
        # params[2] = 14.3 + (max_d - 14.3) * cube[2]
        # this line shows how to add one more observation

    return params
In the code above, we define a flat (uniform) prior for the bag constant of the MIT bag equation of state, reflecting the lack of direct constraints from terrestrial experiments.
Note that the above code is an example of a Bayesian analysis with a single mass-radius measurement. For example, if you use the NICER measurement of J0030, you need one parameter in addition to the EoS parameter, e.g. "d1" for the central density of that star; this is what params[1] handles above.
If you further consider a joint analysis of J0030+J0740, you should define two central-density parameters, e.g. "d1" and "d2" for the two measurements, and add the corresponding params[2] to the code above (this is what the commented lines indicate).
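As a rough sketch of that extension (illustrative only, not executed in this notebook), the joint-analysis prior transform samples a second central density in exactly the same way. The helper max_log10_density below is a hypothetical refactoring of the stability logic from the cell above, not a function provided by the package.
[ ]:
parameters = ['B', 'd1', 'd2']   # one EoS parameter plus two central densities

def max_log10_density(epsilon, p):
    # Hypothetical helper: same logic as the cell above, returning the log10
    # of the central density at the last stable configuration (or None).
    density = np.logspace(14.3, 15.6, 50)
    if not (all(x < y for x, y in zip(epsilon, epsilon[1:])) and
            all(x < y for x, y in zip(p, p[1:]))):
        return None
    MR = main.OutputMR("", epsilon, p).T
    if len(MR) == 0:
        return None
    MFSU2R = []
    for i in range(len(MR[1])):
        MFSU2R.append(MR[0][i])
        if i > 20 and MR[0][i] - MR[0][i - 1] < 0:
            break
    if len(MFSU2R) == 0:
        return None
    return np.log10(density[len(MFSU2R) - 1])

def prior_transform(cube):
    params = cube.copy()
    params[0] = prior.flat_prior(20, 100, cube[0])
    epsilon, p = MITbag.MITbag_compute_EOS(params[0])
    max_d = max_log10_density(epsilon, p)
    if max_d is None:
        params[1] = 0
        params[2] = 0
    else:
        params[1] = 14.3 + (max_d - 14.3) * cube[1]   # central density, first star (e.g. J0030)
        params[2] = 14.3 + (max_d - 14.3) * cube[2]   # central density, second star (e.g. J0740)
    return params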
(c) Set up likelihood
We need to set up a likelihood, using the standard UltraNest definition, as below.
Here the likelihood is generated from a simulated mass-radius measurement, M = 1.4 M⊙ and R = 13 km, with a 5% uncertainty on both mass and radius, so the
likelihood.MRlikihood_Gaussian
function will be used for our likelihood. Please check likelihood.MRlikihood_Gaussian to see the original code and the other available likelihood choices, e.g.:
1. If we have real mass-radius measurements, say of PSR J0030 or PSR J0740 from NICER, a KDE kernel can be trained on their posterior samples and fed into
likelihood.MRlikihood_kernel(eps_total,pres_total,x,d1)
with the KDE kernel passed as the input x of this function (see the sketch after this list).
2. If we have a measurement from radio timing that constrains only the neutron-star mass, then
likelihood.Masslikihood_Gaussian(eps_total,pres_total,x,d1)
gives the likelihood from a single mass measurement. Here x holds the parameters of that measurement: you should specify the central value of the measured mass and the sigma width of its uncertainty.
3. If we have nuclear measurements and want to constrain an EoS model (for example an RMF model) by nuclear properties such as K (the incompressibility of nuclear matter), J (the symmetry energy at saturation density), and L (the slope of the symmetry energy at saturation density), you can choose:
likelihood.Kliklihood(theta,K_low,K_up)
likelihood.Jliklihood(theta,K_low,K_up)
likelihood.Lliklihood(theta,K_low,K_up)
These default to a hard-cut flat constraint; if you do not want this default hard cut, you can also define the likelihood yourself in a similar style.
4. If we have a tidal measurement from a gravitational-wave detector, we can use it as a constraint:
likelihood.TidalLikihood_kernel(eps_total,pres_total,x,d1)
where x is the sampled distribution from the real measurement, packed as
kernel, chirp = x,
where kernel is built from the whole set of samples of the GW event, namely the four quantities [chirp mass, M2/M1, tidal deformability of M1, tidal deformability of M2], and chirp is a separate sampling of the chirp mass alone.
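As a hedged sketch of option 1 above, the snippet below trains a KDE kernel on mass-radius posterior samples with scipy.stats.gaussian_kde and passes it to likelihood.MRlikihood_kernel. The file name J0030_samples.txt, its (radius, mass) column layout, and the sample ordering fed to the KDE are hypothetical placeholders; check the MRlikihood_kernel source for the exact format it expects.
[ ]:
import numpy as np
import scipy.stats as stats

# Hypothetical input: two-column posterior samples (radius [km], mass [Msun])
# for PSR J0030; the file name and column order are placeholders.
J0030_samples = np.loadtxt("J0030_samples.txt")
R_samples, M_samples = J0030_samples[:, 0], J0030_samples[:, 1]

# Train a KDE on the joint (R, M) posterior; check the MRlikihood_kernel
# source for the sample ordering it expects.
kde_J0030 = stats.gaussian_kde(np.vstack([R_samples, M_samples]))

# Inside a likelihood_transform(theta), the trained kernel would then be
# passed as the x argument:
#     likelihood.MRlikihood_kernel(epsilon, p, kde_J0030, d1)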
[ ]:
import scipy.stats as stats

def likelihood_transform(theta):
    # This demonstration introduces a single constraint from one mass-radius
    # observation. Adding constraints from nuclear quantities is straightforward,
    # since they do not require sampling additional central densities of real
    # neutron stars. Expanding to two mass-radius measurements is sketched below.
    B, d1 = theta

    ####################################################################################################################
    ############ This is the block to compute all the EoS quantities you need based on your parameters ################
    epsilon, p = MITbag.MITbag_compute_EOS(B)
    ####################################################################################################################

    # Gaussian mass-radius likelihood: M = 1.4 Msun, R = 13 km, with 5% widths (0.07 Msun, 0.65 km)
    probMRgaussian = likelihood.MRlikihood_Gaussian(epsilon, p, (1.4, 13, 0.07, 0.65), d1)

    prob = probMRgaussian
    return prob
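Following the comment in the cell above, a minimal sketch of a two-measurement version (for instance J0030 plus a second star such as J0740, matching the extended prior sketched in part (b)) could look like the following. The second measurement's central values and widths are illustrative placeholders, not real data.
[ ]:
def likelihood_transform(theta):
    # Sketch only: one EoS parameter plus two central densities, matching the
    # extended prior sketched in part (b).
    B, d1, d2 = theta

    epsilon, p = MITbag.MITbag_compute_EOS(B)

    # First (simulated) measurement: M = 1.4 Msun, R = 13 km with 5% widths.
    prob_1 = likelihood.MRlikihood_Gaussian(epsilon, p, (1.4, 13, 0.07, 0.65), d1)
    # Second measurement: central values and widths below are illustrative placeholders.
    prob_2 = likelihood.MRlikihood_Gaussian(epsilon, p, (2.0, 12.5, 0.1, 0.6), d2)

    # UltraNest works with log-likelihoods, so independent measurements are
    # combined by summing their contributions (check what MRlikihood_Gaussian
    # returns in the likelihood module if in doubt).
    return prob_1 + prob_2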
(d) Set up sampler
Next, we define the sampler. We provide two different samplers, depending on whether you need a resume file:
sampler.UltranestSampler and sampler.UltranestSamplerResume
Since this is our first run, we only use the first one. Several sampler parameters are required. The first is the step number: our choice for the UltraNest sampler is the slice sampler, which slices up the total computational load so it can be parallelized and the sampling sped up. Following the suggestion in the UltraNest documentation, we set step to 2*len(parameters).
live_point influences the sampling precision; here we set it to 400 because of limited computational resources. For a higher-dimensional parameter space (say seven dimensions), a larger value such as 5000 would be a better choice.
max_calls sets how many likelihood evaluations are allowed before the sampler stops; here we use 60000. We suggest setting this number significantly higher for a production run, otherwise the sampling may stop before the inference has converged to a definite value, and the result will be unphysical.
[5]:
step = 2 * len(parameters)
live_point = 400
max_calls = 60000
samples = sampler.UltranestSampler(parameters,likelihood_transform,prior_transform,step,live_point,max_calls)