Introduction

Here I show how to produce P-value, S-value, likelihood, and deviance functions with the concurve package using fake data and data from real studies. Simply put, these functions are rich sources of information for scientific inference, and the image below, taken from Xie & Singh (2013),1 displays why.

For a more extensive discussion of these concepts, see the following references.1–13

To get started, we could generate some normal data and combine two vectors in a dataframe

library(concurve)
set.seed(1031)
GroupA <- rnorm(500)
GroupB <- rnorm(500)
RandomData <- data.frame(GroupA, GroupB)

and look at the differences between the two vectors. We’ll plug these vectors, and the dataframe they’re in, into the curve_mean() function, as sketched below. Here, the default method calculates CIs using the Wald method.
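Below is a sketch of that call; the argument names follow concurve’s documented interface, and method = "default" is assumed to yield the Wald-type intervals described above.

# assumed call producing the `intervalsdf` list used below
intervalsdf <- curve_mean(GroupA, GroupB,
  data = RandomData, method = "default"
)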

Each of the functions within concurve will generally produce a list with three items, and the first will usually contain the function of interest.

We can view the function using the ggcurve() function. The two basic arguments that must be provided are the data argument and the “type” argument. To plot a consonance function, we set type to “c”.

(function1 <- ggcurve(data = intervalsdf[[1]], type = "c"))

We can see that the consonance “curve” is every interval estimate plotted together, and it provides the P-values and CIs, along with the median unbiased estimate. It can be defined as:

\[C V_{n}(\theta)=1-2\left|H_{n}(\theta)-0.5\right|=2 \min \left\{H_{n}(\theta), 1-H_{n}(\theta)\right\}\]
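As a toy illustration of this definition, we can trace the curve from an assumed consonance distribution \(H_{n}\) (a normal CDF standing in purely for illustration):

theta <- seq(-1, 1, length.out = 200)
H <- pnorm(theta, mean = 0.1, sd = 0.2) # stand-in for H_n, illustration only
CV <- 1 - 2 * abs(H - 0.5) # identical to 2 * pmin(H, 1 - H)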

Its information counterpart, the surprisal function, can be constructed by taking the \(-\log_{2}\) of the P-value.3,14,15
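For example, a P-value of 0.05 corresponds to roughly 4.32 bits of surprise, which matches the S-value column of the table further below:

-log2(0.05)
#> [1] 4.321928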

To view the surprisal function, we simply change the type to “s”.

(function1 <- ggcurve(data = intervalsdf[[1]], type = "s"))

We can also view the consonance distribution by changing the type to “cdf”, which is a cumulative probability distribution. The point at which the curve reaches 50% is known as the “median unbiased estimate”. It is the same estimate that is typically at the peak of the P-value curve from above.

(function1s <- ggcurve(data = intervalsdf[[2]], type = "cdf"))

We can also get relevant statistics that show the range of values by using the curve_table() function. Several formats can be exported, such as .docx, .pptx, and TeX.

(x <- curve_table(data = intervalsdf[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -0.132 | -0.093 | 0.039 | 25.0 | 0.625 | 0.750 | 0.415 |
| -0.154 | -0.071 | 0.083 | 50.0 | 0.750 | 0.500 | 1.000 |
| -0.183 | -0.042 | 0.142 | 75.0 | 0.875 | 0.250 | 2.000 |
| -0.192 | -0.034 | 0.158 | 80.0 | 0.900 | 0.200 | 2.322 |
| -0.201 | -0.024 | 0.177 | 85.0 | 0.925 | 0.150 | 2.737 |
| -0.214 | -0.011 | 0.203 | 90.0 | 0.950 | 0.100 | 3.322 |
| -0.233 | 0.008 | 0.242 | 95.0 | 0.975 | 0.050 | 4.322 |
| -0.251 | 0.026 | 0.276 | 97.5 | 0.988 | 0.025 | 5.322 |
| -0.271 | 0.046 | 0.318 | 99.0 | 0.995 | 0.010 | 6.644 |

Comparing Functions

If we wanted to compare two studies to see the amount of “consonance”, we could use the curve_compare() function to get a numerical output.

First, we generate some more fake data

GroupA2 <- rnorm(500)
GroupB2 <- rnorm(500)
RandomData2 <- data.frame(GroupA2, GroupB2)
model <- lm(GroupA2 ~ GroupB2, data = RandomData2)
randomframe <- curve_gen(model, "GroupB2")

Once again, we’ll plot this data with ggcurve(). We can also indicate whether we want certain interval estimates to be plotted in the function with the “levels” argument. If we wanted to plot the 50%, 75%, and 95% intervals, we’d provide the argument this way:

(function2 <- ggcurve(type = "c", randomframe[[1]], levels = c(0.50, 0.75, 0.95), nullvalue = TRUE))

Now that we have two datasets and two functions, we can compare them using the curve_compare() function.

This function will provide us with the area that is shared between the curves, along with a ratio of overlap to non-overlap.
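A sketch of the call, with the data1, data2, and type argument names assumed from the package’s documented interface:

(curve_compare(data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "c"))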

We can also do this for the surprisal function simply by changing type to “s”.
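Again as a sketch, under the same assumed interface:

(curve_compare(data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "s"))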

It’s clear that the outputs have changed and indicate far more overlap than before.

Constructing Functions From Single Intervals

We can also take a set of confidence limits and use them to construct a consonance, surprisal, likelihood or deviance function using the curve_rev() function.

Here, we’ll use two epidemiological studies16,17 that examined the association between SSRI exposure during pregnancy and the rate of autism in children.

Both of these studies were interpreted as suggesting a null effect of SSRI exposure on autism rates in children.

curve1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "c", measure = "ratio", steps = 10000)
(ggcurve(data = curve1[[1]], type = "c", measure = "ratio", nullvalue = TRUE))

curve2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "c", measure = "ratio", steps = 10000)
(ggcurve(data = curve2[[1]], type = "c", measure = "ratio", nullvalue = TRUE))

The null value is marked by the red line, and it’s clear that a large mass of each function lies away from it.

We can also see this by plotting the likelihood functions via the curve_rev() function.

lik1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "l", measure = "ratio", steps = 10000)
(ggcurve(data = lik1[[1]], type = "l1", measure = "ratio", nullvalue = TRUE))

lik2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "l", measure = "ratio", steps = 10000)
(ggcurve(data = lik2[[1]], type = "l1", measure = "ratio", nullvalue = TRUE))

We can also view the amount of agreement between the likelihood functions of these two studies.
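As a sketch, we can reuse the plot_compare() function that appears later in this vignette, assuming its type and measure arguments mirror those of ggcurve():

(plot_compare(lik1[[1]], lik2[[1]], type = "l1", measure = "ratio"))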

And likewise for the consonance functions.
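Again a sketch under the same assumptions:

(plot_compare(curve1[[1]], curve2[[1]], type = "c", measure = "ratio"))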

The Bootstrap and Consonance Functions

Some authors have shown that the bootstrap distribution is equal to the confidence distribution because it meets the definition of a consonance distribution.1,18,19 The bootstrap distribution and the asymptotic consonance distribution would be defined as:

\[H_{n}(\theta)=1-P\left(\hat{\theta}-\hat{\theta}^{*} \leq \hat{\theta}-\theta | \mathbf{x}\right)=P\left(\hat{\theta}^{*} \leq \theta | \mathbf{x}\right)\]
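As a toy sketch of this identity, the empirical CDF of bootstrap estimates approximates \(H_{n}(\theta)\); the data and statistic here are purely illustrative:

set.seed(1031)
x <- rnorm(100)
boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
# the ECDF of the bootstrap estimates approximates P(theta_hat_star <= theta | x)
H_boot <- ecdf(boot_means)
H_boot(0) # the bootstrap consonance distribution evaluated at theta = 0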

Certain bootstrap methods, such as the BCa method and the t-bootstrap method, also yield second-order accurate consonance distributions:

\[H_{n}(\theta)=1-P\left(\frac{\hat{\theta}^{*}-\hat{\theta}}{\widehat{S E}^{*}\left(\hat{\theta}^{*}\right)} \leq \frac{\hat{\theta}-\theta}{\widehat{S E}(\hat{\theta})} | \mathbf{x}\right)\]

Here, I demonstrate how to use these particular bootstrap methods to arrive at consonance curves and densities.

We’ll use the iris dataset and construct a function that will yield the parameter of interest, as sketched below.
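The statistic used here is not shown in the text, so the function below is a hypothetical stand-in (the correlation between sepal length and sepal width); functions passed to the BCa machinery take the resampled data as their only argument, as in bcaboot.

data(iris)
# hypothetical statistic: correlation between the first two iris columns
func <- function(data) {
  cor(data[, 1], data[, 2])
}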

We can now use the curve_boot() method to construct a function. The default method used for this function is the “BCa” method provided by the bcaboot package.19

I will suppress the output of the function because it is unnecessarily long, but all the estimates are placed into a list object called y, as sketched below.
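A sketch of the call that fills y; the replicate and step counts are assumed, and "bca" is taken to be the spelling of the default method:

y <- curve_boot(data = iris[, 1:2], func = func, method = "bca", replicates = 2000, steps = 1000)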

The first item in the list will be the consonance distribution constructed by typical means, while the third item will be the bootstrap approximation to the consonance distribution.

ggcurve(data = y[[1]])

ggcurve(data = y[[3]])

We can also print out a table, as before; TeX is among the supported export formats.

(gg <- curve_table(data = y[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -0.142 | -0.093 | 0.048 | 25 | 0.625 | 0.75 | 0.415 |
| -0.169 | -0.067 | 0.102 | 50 | 0.750 | 0.50 | 1.000 |
| -0.205 | -0.031 | 0.174 | 75 | 0.875 | 0.25 | 2.000 |
| -0.214 | -0.021 | 0.194 | 80 | 0.900 | 0.20 | 2.322 |
| -0.266 | 0.031 | 0.296 | 95 | 0.975 | 0.05 | 4.322 |
| -0.312 | 0.077 | 0.389 | 99 | 0.995 | 0.01 | 6.644 |

More bootstrap replications will lead to a smoother function. But for now, we can compare these two functions to see how similar they are.

plot_compare(y[[1]], y[[3]])

The densities can also be calculated accurately using the t-bootstrap method. Here, we use a different dataset to show this:

library(Lock5Data)
data(CommuteAtlanta)
func <- function(data, index) {
  # mean difference between the first two columns, computed on the resampled rows
  x <- as.numeric(unlist(data[1]))
  y <- as.numeric(unlist(data[2]))
  return(mean(x[index]) - mean(y[index]))
}

Our function is a simple mean difference. This time, we’ll set the method to “t” for the t-bootstrap method:

z <- curve_boot(data = CommuteAtlanta, func = func, method = "t", replicates = 2000, steps = 1000)
#> Warning in norm.inter(t, alpha): extreme order statistics used as endpoints
ggcurve(data = z[[1]])

ggcurve(data = z[[2]], type = "cd")

The consonance curve and density are nearly identical. With more bootstrap replications, they are very likely to converge.

(zz <- curve_table(data = z[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -39.400 | -39.075 | 0.325 | 25.0 | 0.625 | 0.750 | 0.415 |
| -39.611 | -38.876 | 0.735 | 50.0 | 0.750 | 0.500 | 1.000 |
| -39.873 | -38.608 | 1.265 | 75.0 | 0.875 | 0.250 | 2.000 |
| -39.932 | -38.530 | 1.402 | 80.0 | 0.900 | 0.200 | 2.322 |
| -40.026 | -38.456 | 1.570 | 85.0 | 0.925 | 0.150 | 2.737 |
| -40.118 | -38.354 | 1.763 | 90.0 | 0.950 | 0.100 | 3.322 |
| -40.294 | -38.174 | 2.120 | 95.0 | 0.975 | 0.050 | 4.322 |
| -40.442 | -38.026 | 2.416 | 97.5 | 0.988 | 0.025 | 5.322 |
| -40.636 | -37.806 | 2.830 | 99.0 | 0.995 | 0.010 | 6.644 |

Using Profile Likelihoods

For this last example, we’ll explore the curve_lik() function, which can generate profile likelihood functions and deviance statistics with the help of the ProfileLikelihood package.

We’ll use a simple example, taken directly from the ProfileLikelihood documentation, in which we calculate the likelihoods from a glm model:

library(ProfileLikelihood)
data(dataglm)
xx <- profilelike.glm(y ~ x1 + x2,
  data = dataglm, profile.theta = "group",
  family = binomial(link = "logit"), length = 500, round = 2
)
#> Warning message: provide lo.theta and hi.theta

Then, we’ll use curve_lik() on the object that the ProfileLikelihood package created.
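A sketch of that step, with the argument order assumed from the package interface; lik is the list used in the plotting calls below:

lik <- curve_lik(xx, data = dataglm)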

Next, we’ll plot four functions: the relative likelihood, the log-likelihood, the likelihood, and the deviance function.

ggcurve(lik[[1]], type = "l1")

ggcurve(lik[[1]], type = "l2")

ggcurve(lik[[1]], type = "l3")

ggcurve(lik[[1]], type = "d")

The obvious advantage of using reduced likelihoods is that they are free of nuisance parameters:

\[L_{t_{n}}(\theta)=f_{n}\left(F_{n}^{-1}\left(H_{\mathrm{piv}}(\theta)\right)\right)\left|\frac{\partial}{\partial t} \psi\left(t_{n}, \theta\right)\right|=h_{\mathrm{piv}}(\theta)\left.\left|\frac{\partial}{\partial t} \psi(t, \theta)\right| \Big/ \left|\frac{\partial}{\partial \theta} \psi(t, \theta)\right|\right|_{t=t_{n}}\]

thus giving summaries of the data that can be incorporated into combined analyses.


References


1. Xie M-g, Singh K. Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review. International Statistical Review. 2013;81(1):3-39. doi:10.1111/insr.12000

2. Birnbaum A. A unified theory of estimation, I. The Annals of Mathematical Statistics. 1961;32(1):112-135. doi:10.1214/aoms/1177705145

3. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid Statistical Inference: Replace Confidence and Significance by Compatibility and Surprise. arXiv:1909.08579 [stat.ME]. September 2019. http://arxiv.org/abs/1909.08579.

4. Fraser D. P-Values: The Insight to Modern Statistical Inference. Annual Review of Statistics and Its Application. 2017;4(1):1-14. doi:10.1146/annurev-statistics-060116-054139

5. Fraser DAS. The P-value function and statistical inference. The American Statistician. 2019;73(sup1):135-147. doi:10.1080/00031305.2018.1556735

6. Poole C. Beyond the confidence interval. American Journal of Public Health. 1987;77(2):195-199. doi:10.2105/AJPH.77.2.195

7. Poole C. Confidence intervals exclude nothing. American Journal of Public Health. 1987;77(4):492-493. doi:10.2105/ajph.77.4.492

8. Schweder T, Hjort NL. Confidence and Likelihood*. Scand J Stat. 2002;29(2):309-332. doi:10.1111/1467-9469.00285

9. Schweder T, Hjort NL. Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions. Cambridge University Press; 2016.

10. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) – distribution estimator of a parameter. August 2007. http://arxiv.org/abs/0708.0976.

11. Sullivan KM, Foster DA. Use of the confidence interval function. Epidemiology. 1990;1(1):39-42. doi:10.1097/00001648-199001000-00009

12. Whitehead J. The case for frequentism in clinical trials. Statistics in Medicine. 1993;12(15-16):1405-1413. doi:10.1002/sim.4780121506

13. Rothman KJ, Greenland S, Lash TL. Precision and statistics in epidemiologic studies. In: Rothman KJ, Greenland S, Lash TL, eds. Modern Epidemiology. 3rd ed. Lippincott Williams & Wilkins; 2008:148-167.

14. Greenland S. Valid P-values behave exactly as they should: Some misleading criticisms of P-values and their resolution with S-values. The American Statistician. 2019;73(sup1):106-114. doi:10.1080/00031305.2018.1529625

15. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948;27(3):379-423. doi:10.1002/j.1538-7305.1948.tb01338.x

16. Brown HK, Ray JG, Wilton AS, Lunsky Y, Gomes T, Vigod SN. Association between serotonergic antidepressant use during pregnancy and autism spectrum disorder in children. JAMA. 2017;317(15):1544-1552. doi:10.1001/jama.2017.3415

17. Brown HK, Hussain-Shamsy N, Lunsky Y, Dennis C-LE, Vigod SN. The association between antenatal exposure to selective serotonin reuptake inhibitors and autism: A systematic review and meta-analysis. The Journal of Clinical Psychiatry. 2017;78(1):e48-e58. doi:10.4088/JCP.15r10194

18. Efron B, Tibshirani RJ. An Introduction to the Bootstrap. CRC Press; 1994.

19. Efron B, Narasimhan B. The automatic construction of bootstrap confidence intervals. October 2018.