{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Polytrope EOS inference pipeline\n", "\n", "This is an example notebook about how to use our tools to analysis a observation constraint on neutron star equation of state. \n", "\n", "Here in this notebook, we are using a polytrope EoS model (in the future we will implement more equation of state)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import InferenceWorkflow.prior as prior\n", "import InferenceWorkflow.BayesianSampler as sampler\n", "import numpy as np\n", "import ultranest\n", "import ultranest.stepsampler\n", "\n", "import EOSgenerators.Polytrope_EOS as Polytrope\n", "from TOVsolver.maxium_central_density import maxium_central_density\n", "from TOVsolver.solver_code import solveTOV\n", "from TOVsolver.unit import g_cm_3, dyn_cm_2, km, Msun, e0, MeV, fm\n", "import TOVsolver.main as main" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to define the outer crust, this crust and interface is unversial for all the equation of state here, we just change the inner crust part and the core part of equation of state and that is come from the polytrope computation. Crust as below, \"Tolos_crust_out.txt\" is BPS crust model." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "#Tolos_crust_out = np.loadtxt('Tolos_crust_out.txt', delimiter=' ')\n", "Tolos_crust_out = np.loadtxt('Tolos_crust_out.txt', delimiter=None, comments='#', usecols=(0, 1, 2, 3, 4))\n", "eps_crust_T = Tolos_crust_out[:,3] \n", "pres_crust_T = Tolos_crust_out[:,4]\n", "\n", "# eps_crust is the density array from inner crust to core, is the palce where we want to establish the polytrope EoS\n", "# xs_polytro is the central density range of a neutron star\n", "\n", "xs_polytro = np.logspace(11.7, 14.28, num=1000, base=10)\n", "\n", "ys_polytropic = (4.75764e29 + 9.04238e13 *xs_polytro**(4/3))\n", "xs_polytropic = xs_polytro\n", "\n", "eps_crust = np.append(eps_crust_T, xs_polytropic)\n", "pres_crust = np.append(pres_crust_T, ys_polytropic)\n", "\n", "# rho_crust_end and P_crust_end are the end of outer crust EoS and the start point of polytrope EoS\n", "rho_crust_end = eps_crust[-1]\n", "P_crust_end = pres_crust[-1]\n", "\n", "rho_ns = 267994004080000.03\n", "c = 3e10\n", "G = 6.67428e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Set up prior\n", "\n", "Next step, we need to set up the prior, first use parameters array to specify the variable name, should be consistent with what you need to call them. Second, we need to define a list to store some intermediate parameters derived from `prior_transform`. These parameters may be used in `likelihood_transform` or somewhere else and it will be stored in the final result, specifically 'd_max' in this case.\n", " \n", "Define a prior transform function to define prior. Cube are set of random number from 0 to 1. This prior setting is standard set-up of UltraNest package, since we are using `UltraNest` to do nest-sampling. We provided \n", "\n", "`normal_Prior` and `flat_prior`\n", "\n", "two options call from prior. Here then the Parameters prior should all set\n", "\n", "------------------\n", "\n", "However, since we are doing Equation of state Inference from mass radius of neutron star measurement. The center density of the star should be also sampled. 
    "\n",
    "All parameter priors should be set here.\n",
    "\n",
    "------------------\n",
    "\n",
    "However, since we are inferring the equation of state from neutron star mass-radius measurements, the central density of each star must also be sampled. Otherwise the prior would be only partially defined: it would not span the full parameter space, and it has been shown to give results different from a full-scope inference.\n",
    "\n",
    "This requires randomly generating a central density from the range allowed by the EoS. The process is not trivial, since we must determine the upper limit of the neutron star central density, and different equations of state predict different upper bounds. So here we use the prior-sampled EoS parameters to compute the EOS via\n",
    "\n",
    "```sh\n",
    "Polytrope.eos_core_pp\n",
    "```\n",
    "\n",
    "then compute the mass-radius curve of this equation of state and find its last stable point (the first mass point whose derivative with respect to central density turns negative)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "parameters = ['rho_t_1', 'rho_t_2', 'gamma_1', 'gamma_2', 'gamma_3', 'd1']\n",
    "# for two or more MR measurements, define d2 or more depending on complexity.\n",
    "\n",
    "#derived_param_names = ['d_max']\n",
    "\n",
    "gamma_low_limits = [1, 0, 0.5]\n",
    "gamma_up_limits = [4.5, 8, 8]\n",
    "\n",
    "def prior_transform(cube):\n",
    "    para = cube.copy()\n",
    "\n",
    "    # Output appropriate gammas.\n",
    "    # When we randomly scatter a gamma in one segment, its maximum value is constrained by the start point's pressure and density\n",
    "    # and the end point's density. Therefore in each segment we call 'fun_gamma_max' to get a suitable range of gamma.\n",
    "    para[0] = prior.flat_prior(1.5*rho_ns, 8.3*rho_ns, cube[0])\n",
    "    para[1] = prior.flat_prior(para[0], 8.3*rho_ns, cube[1])\n",
    "\n",
    "    for i in range(3):\n",
    "        para[2+i] = prior.flat_prior(gamma_low_limits[i], gamma_up_limits[i], cube[2+i])\n",
    "\n",
    "    gammas = np.array([para[2], para[3], para[4]])\n",
    "    rho_ts = np.array([para[0], para[1]]) * g_cm_3\n",
    "\n",
    "    # Build the core EoS on a density grid; 'index' rescales the polytrope so the\n",
    "    # pressure is continuous with the crust at the crust-core boundary.\n",
    "    density = np.logspace(14.28, 15.6, 1000)\n",
    "    pres = np.zeros(len(density))\n",
    "    index = Polytrope.eos_core_pp(gammas, rho_ts, rho_crust_end, density[0], P_crust_end)/P_crust_end/P_crust_end\n",
    "    for i in range(len(density)):\n",
    "        pres[i] = Polytrope.eos_core_pp(gammas, rho_ts, rho_crust_end, density[i], P_crust_end)/P_crust_end/index\n",
    "\n",
    "    eps_total = np.array([*eps_crust, *density[1:]])\n",
    "    pres_total = np.array([*pres_crust, *pres[1:]])\n",
    "\n",
    "    # Solve the TOV equations only if the EoS is monotonic, and keep points up to\n",
    "    # the last stable one (the first mass point whose derivative turns negative).\n",
    "    RFSU2R = []\n",
    "    MFSU2R = []\n",
    "    density = np.logspace(14.3, 15.6, 50)\n",
    "    if all(x < y for x, y in zip(eps_total[:], eps_total[1:])) and all(x < y for x, y in zip(pres_total[:], pres_total[1:])):\n",
    "        MR = main.OutputMR('', eps_total * g_cm_3, pres_total * dyn_cm_2).T\n",
    "        for i in range(len(MR[1])):\n",
    "            RFSU2R.append(MR[1][i])\n",
    "            MFSU2R.append(MR[0][i])\n",
    "            if i > 20 and MR[0][i] - MR[0][i-1] < 0:\n",
    "                break\n",
    "    if len(MFSU2R) == 0:\n",
    "        para[5] = 0\n",
    "        #para[6] = 0\n",
    "        # this line shows how to add one more observation\n",
    "    else:\n",
    "        max_index = len(MFSU2R)\n",
    "        max_d = np.log10(density[max_index-1])\n",
    "        para[5] = 14.3 + (max_d - 14.3) * cube[5]\n",
    "        #para[6] = 14.3 + (max_d - 14.3) * cube[6]\n",
    "        # this line shows how to add one more observation\n",
    "    return para"
   ]
  },
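  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (illustrative, not part of the original pipeline), we can push one random unit cube through `prior_transform` and inspect the resulting physical parameters; `para[5]` is the log10 of the sampled central density in g/cm^3, or 0 if no stable branch was found."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sanity check: one random draw from the prior.\n",
    "test_cube = np.random.uniform(size=len(parameters))\n",
    "test_para = prior_transform(test_cube)\n",
    "print(dict(zip(parameters, test_para)))"
   ]
  },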
"1. If we have some real mass-radius measurements, say PSR J0030 or PSR J0740, come from NICER, a KDE kernel could be trained to feed into \n", "\n", " ```sh\n", " likelihood.MRlikihood_kernel(eps_total,pres_total,x,d1)\n", " ```\n", "set the KDE kernel as a input for this function\n", "\n", "2. If we gain measurement from radio-timing, say only measure the neutron star mass, then\n", "\n", " ```sh\n", " likelihood.Masslikihood_Gaussian(eps_total,pres_total,x,d1)\n", " ```\n", "Which will give the likelihood from single mass measurement, x is the parameters of that measurement, you should specify where this measurement mass is located and what is the sigma width of this mass measurement" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "from scipy.interpolate import interp1d\n", "import InferenceWorkflow.Likelihood as likelihood\n", "\n", "datas = [\n", " [10, 1.97, 0.5, 0.0985]\n", "]\n", "datas = pd.DataFrame(datas, index=[1], columns=['R', 'M', 'Rsigma', 'Msigma'])\n", "# add more rows for extra observations and don's forget to add numbers to the index!\n", "\n", "Rvalue = datas['R'].values\n", "Mvalue = datas['M'].values\n", "\n", "R_sigma = datas['Rsigma'].values\n", "M_sigma = datas['Msigma'].values\n", "\n", "def likelihood_transform(theta):\n", " gammas = np.array([theta[2],theta[3],theta[4]])\n", " rho_ts = np.array([theta[0],theta[1]]) * g_cm_3\n", " d1 = theta[5]\n", "\n", " density = np.logspace(14.28, 15.6, 1000)\n", " pres = np.zeros(len(density))\n", " index = Polytrope.eos_core_pp(gammas, rho_ts, rho_crust_end, density[0], P_crust_end)/P_crust_end/P_crust_end\n", " for i in range(len(density)):\n", " pres[i] = Polytrope.eos_core_pp(gammas, rho_ts, rho_crust_end, density[i], P_crust_end)/P_crust_end/index\n", "\n", " eps_total = np.array([*eps_crust, *density[1::]])\n", " pres_total = np.array([*pres_crust, *pres[1::]])\n", "\n", " #1. This line is to compute MR likelihood from a Simulated MR measurement:\n", " \n", " probMRgaussian1 = likelihood.MRlikihood_Gaussian(eps_total * g_cm_3,pres_total * dyn_cm_2,(Mvalue[0],Rvalue[0],M_sigma[0],R_sigma[0]),d1)\n", " #probMRgaussian2 = likelihood.MRlikihood_Gaussian(eps_total,pres_total,(Mvalue[1]/Msun,Rvalue[1]/km,M_sigma[1]/Msun,R_sigma[1]/km),d2)\n", "\n", " #2. This is a block that constrain from given real MR measurement, say J0030:\n", " #J0030 = numpy.loadtxt('data/PST_equal_sampled_MR.txt', delimiter=' ')\n", " #J30R_list, J30M_list = zip(*J0030)\n", " #J30R_list = numpy.array(J30R_list).T \n", " #J30M_list = numpy.array(J30M_list).T\n", " #Rmin = J30R_list.min()\n", " #Rmax = J30R_list.max()\n", " #Mmin = J30M_list.min()\n", " #Mmax = J30M_list.max()\n", " #X3, Y3 = numpy.mgrid[Rmin:Rmax:500j, Mmin:Mmax:100j]\n", " #positions = numpy.vstack([X3.ravel(), Y3.ravel()])\n", " #values = numpy.vstack([J30R_list, J30M_list])\n", " #kernel3 = stats.gaussian_kde(values)\n", " #probMRJ0030 = likelihood.MRlikelihhood_kernel(eps_total,pres_total,kernel3,d1)\n", " \n", " #3. 
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before running the inference, we can draw a few samples from the prior and plot the resulting mass-radius curves as a sanity check on the prior ranges."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import random\n",
    "\n",
    "gamma_low_limits = [1, 0, 0.5]\n",
    "gamma_up_limits = [4.5, 8, 8]\n",
    "fig, ax = plt.subplots(1, 1, figsize=(9, 6))\n",
    "for j in range(20):\n",
    "    para = [0]*7\n",
    "    para[0] = prior.flat_prior(1.5*rho_ns, 8.3*rho_ns, random.uniform(0, 1))\n",
    "    para[1] = prior.flat_prior(para[0], 8.3*rho_ns, random.uniform(0, 1))\n",
    "\n",
    "    for i in range(3):\n",
    "        para[2+i] = prior.flat_prior(gamma_low_limits[i], gamma_up_limits[i], random.uniform(0, 1))\n",
    "\n",
    "    gammas = np.array([para[2], para[3], para[4]])\n",
    "    rho_ts = np.array([para[0], para[1]]) * g_cm_3\n",
    "\n",
    "    density = np.logspace(14.28, 15.6, 1000)\n",
    "    pres = np.zeros(len(density))\n",
    "    index = Polytrope.eos_core_pp(gammas, rho_ts, rho_crust_end, density[0], P_crust_end)/P_crust_end/P_crust_end\n",
    "    for i in range(len(density)):\n",
    "        pres[i] = Polytrope.eos_core_pp(gammas, rho_ts, rho_crust_end, density[i], P_crust_end)/P_crust_end/index\n",
    "\n",
    "    eps_total = np.array([*eps_crust, *density[1:]])\n",
    "    pres_total = np.array([*pres_crust, *pres[1:]])\n",
    "\n",
    "    # Solve the TOV equations only for monotonic EoS samples and stop at the\n",
    "    # last stable point (first mass point with a negative derivative).\n",
    "    RFSU2R = []\n",
    "    MFSU2R = []\n",
    "    density = np.logspace(14.28, 15.6, 50)\n",
    "    if all(x < y for x, y in zip(eps_total[:], eps_total[1:])) and all(x < y for x, y in zip(pres_total[:], pres_total[1:])):\n",
    "        MR = main.OutputMR('', eps_total * g_cm_3, pres_total * dyn_cm_2).T\n",
    "        for i in range(len(MR[1])):\n",
    "            RFSU2R.append(MR[1][i])\n",
    "            MFSU2R.append(MR[0][i])\n",
    "            if i > 20 and MR[0][i] - MR[0][i-1] < 0:\n",
    "                break\n",
    "\n",
    "    RFSU2R = np.array(RFSU2R)\n",
    "    MFSU2R = np.array(MFSU2R)\n",
    "\n",
    "    ax.plot(RFSU2R/km, MFSU2R/Msun, lw=2)\n",
    "ax.set_ylabel(r\"M [$M_{\\odot}$]\", fontsize=16)\n",
    "ax.set_xlabel(\"R [km]\", fontsize=16)\n",
    "ax.set_xlim(8.0, 16.0)\n",
    "ax.set_ylim(0, 5)\n",
    "ax.tick_params(top=1, right=1, which=\"both\", direction=\"in\", labelsize=14)\n",
    "fig.tight_layout()\n",
    "plt.show()"
   ]
  },
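  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The imported `maxium_central_density` helper can serve the same purpose as the manual last-stable-point search above. The sketch below assumes the call signature `maxium_central_density(energy_density, pressure)`, returning the central density of the maximum-mass configuration in code units; check the `TOVsolver` documentation before relying on it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch (assumed signature): locate the maximum central density\n",
    "# of the last EoS sampled in the plotting loop above.\n",
    "d_max = maxium_central_density(eps_total * g_cm_3, pres_total * dyn_cm_2)\n",
    "print('maximum central density [g/cm^3]:', d_max / g_cm_3)"
   ]
  },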
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Set up sampler\n",
    "\n",
    "Next, we define the sampler.\n",
    "\n",
    "Some sampler parameters are required. The first is the step number: our choice for the UltraNest sampler is the slice sampler, which can easily slice up the total computation load and parallelize it, speeding up sampling. For the step number we follow the UltraNest documentation and use 2*len(parameters).\n",
    "\n",
    "We set live_point to 400; it influences the sampling precision. For a 10-dimensional space, 5000 may be a better choice, but since our machine has only limited resources, we use 400.\n",
    "\n",
    "max_calls is the number of likelihood calls after which sampling stops; here it is 60000 so the demonstration finishes quickly. We suggest setting this number significantly higher, maybe 500000 or more, otherwise the run may stop before the inference converges to a definite value, and that result would be unphysical.\n",
    "\n",
    "derived_param_names is calculated from prior_transform and is an intermediate quantity necessary for calculating likelihood_transform. It is not involved in the inference process.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "step = 2 * len(parameters)\n",
    "live_point = 400\n",
    "\n",
    "max_calls = 60000\n",
    "samples = sampler.UltranestSampler(parameters, likelihood_transform, prior_transform, step, live_point, max_calls)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}