Calculate parametric, non-parametric, robust, and Bayes Factor pairwise comparisons between group levels with corrections for multiple testing.
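For a grouping variable with k levels, the output contains one row per unordered pair of levels, i.e. k*(k-1)/2 comparisons. A minimal sketch (in Python, purely illustrative; the package itself is R, and the helper name is hypothetical) of how such pairs are enumerated:

```python
from itertools import combinations

def pairwise_levels(levels):
    """Enumerate every unordered pair of group levels, mirroring
    the group1/group2 columns of the output."""
    return list(combinations(levels, 2))

# 3 levels of mtcars$cyl -> 3 comparisons; 4 levels -> 6 comparisons
print(pairwise_levels(["4", "6", "8"]))
print(len(pairwise_levels(["HDHF", "HDLF", "LDHF", "LDLF"])))
```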

pairwise_comparisons(
  data,
  x,
  y,
  subject.id = NULL,
  type = "parametric",
  paired = FALSE,
  var.equal = FALSE,
  tr = 0.2,
  bf.prior = 0.707,
  p.adjust.method = "holm",
  k = 2L,
  ...
)

Arguments

data

A dataframe (or a tibble) from which the specified variables are to be taken. Other data types (e.g., matrix, table, array) will not be accepted.

x

The grouping (or independent) variable from the dataframe data. In case of a repeated measures or within-subjects design, if the subject.id argument is not explicitly specified, the function assumes that the data have already been sorted by such an id by the user and creates an internal identifier. So if your data are not sorted, the results can be inaccurate when there are more than two levels in x and NAs are present. The data are expected to be sorted by the user in a subject-1, subject-2, ... pattern.

y

The response (or outcome or dependent) variable from the dataframe data.

subject.id

Relevant in case of a repeated measures or within-subjects design (i.e., paired = TRUE); it specifies the subject or repeated-measures identifier. Important: if this argument is NULL (the default), the function assumes that the data have already been sorted by such an id by the user and creates an internal identifier. So if your data are not sorted and you leave this argument unspecified, the results can be inaccurate when there are more than two levels in x and NAs are present.

type

Type of statistic expected ("parametric", "nonparametric", "robust", or "bayes"). The corresponding abbreviations are also accepted: "p" (parametric), "np" (nonparametric), "r" (robust), or "bf" (Bayes Factor), respectively.

paired

Logical that decides whether the experimental design is repeated measures/within-subjects or between-subjects. The default is FALSE.

var.equal

A logical indicating whether to treat the two variances as equal. If TRUE, the pooled variance is used to estimate the variance; otherwise the Welch (or Satterthwaite) approximation to the degrees of freedom is used.
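For intuition, the two choices differ only in how the standard error of the mean difference is estimated. A Python sketch (illustrative only, not the package's internal code) of the pooled vs. Welch t statistics:

```python
import math
from statistics import mean, variance

def t_statistics(a, b):
    """Two-sample t statistics: pooled-variance (Student) vs. Welch.
    Illustrative only; mirrors the var.equal = TRUE / FALSE choice."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances
    diff = mean(a) - mean(b)
    # var.equal = TRUE: pool the two variances into one estimate
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t_student = diff / math.sqrt(sp2 * (1 / na + 1 / nb))
    # var.equal = FALSE: keep the variances separate (Welch)
    t_welch = diff / math.sqrt(va / na + vb / nb)
    return t_student, t_welch

print(t_statistics([1, 2, 3, 4], [10, 20]))
```

With equal sample sizes the two statistics coincide (only the degrees of freedom differ); with unequal sizes and unequal variances they diverge.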

tr

Trim level for the mean when carrying out robust tests. If you get an error stating "Standard error cannot be computed because of Winsorized variance of 0 (e.g., due to ties). Try to decrease the trimming level.", try adjusting the value of tr, which is set to 0.2 by default. Lowering the value might help.
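As a reminder of what the trim level does: a 0.2 trimmed mean discards the lowest and highest 20% of the sorted sample before averaging, which is why heavy ties can leave a Winsorized variance of zero. A rough Python sketch of the trimming step (illustration only; the actual robust tests are delegated to other packages):

```python
import math

def trimmed_mean(values, tr=0.2):
    """Average after dropping the lowest and highest `tr` fraction
    of the sorted sample (tr = 0.2 mirrors this function's default)."""
    xs = sorted(values)
    g = math.floor(len(xs) * tr)  # observations trimmed from each tail
    kept = xs[g:len(xs) - g]
    return sum(kept) / len(kept)

print(trimmed_mean([1, 2, 3, 4, 100]))  # -> 3.0; the outlier 100 is trimmed away
```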

bf.prior

A number between 0.5 and 2 (default 0.707), the prior width to use in calculating Bayes factors and posterior estimates. In addition to numeric values, several named values are also recognized: "medium", "wide", and "ultrawide", corresponding to r scale values of 1/2, sqrt(2)/2, and 1, respectively. In the case of an ANOVA, this value corresponds to the scale for fixed effects.
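The named values map onto numeric r scales as stated above; a trivial Python sketch of that mapping (the dictionary itself is hypothetical, not part of any package):

```python
import math

# Named prior widths and the r scale values they correspond to
# (per the documentation above).
PRIOR_WIDTHS = {
    "medium": 1 / 2,
    "wide": math.sqrt(2) / 2,  # ~0.707, matching the numeric default
    "ultrawide": 1.0,
}

print(round(PRIOR_WIDTHS["wide"], 3))  # -> 0.707
```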

p.adjust.method

Adjustment method for p-values for multiple comparisons. Possible methods are: "holm" (default), "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none".
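These are the methods accepted by R's stats::p.adjust. As an illustration of the default, a Python sketch of the Holm step-down adjustment (not the package's internal code):

```python
def holm_adjust(pvalues):
    """Holm step-down adjustment, following stats::p.adjust(method = "holm")."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # the smallest p-value is multiplied by m, the next by m - 1, ...
        adj = min(1.0, (m - rank) * pvalues[i])
        running_max = max(running_max, adj)  # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

# the three uncorrected p-values from the between-subjects Student's t-test example
print(holm_adjust([0.0106, 0.000000207, 0.00516]))
```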

k

Number of digits after the decimal point (should be an integer) (default: k = 2L).

...

Additional arguments passed to other methods.

Value

A tibble containing two columns corresponding to the group levels being compared with each other (group1 and group2) and a p.value column for each comparison. The tibble will also contain a p.value.label column with a label for this p-value, in case it needs to be displayed with ggsignif::geom_signif. In addition to these columns, which are common across the different types of statistics, there will be additional columns specific to the type of test being run.

This function provides a unified syntax to carry out pairwise comparison tests and internally relies on other packages to run them. For more details about the included tests, see the documentation for the respective functions.

Examples

# \donttest{
# for reproducibility
set.seed(123)
library(pairwiseComparisons)
library(statsExpressions) # for data

# show all columns and make the column titles bold
# as a user, you don't need to do this; this is just for the package website
options(tibble.width = Inf, pillar.bold = TRUE, pillar.neg = TRUE, pillar.subtle_num = TRUE)

#------------------- between-subjects design ----------------------------

# parametric
# if `var.equal = TRUE`, then Student's t-test will be run
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "parametric",
  var.equal = TRUE,
  paired = FALSE,
  p.adjust.method = "none"
)
#> # A tibble: 3 × 6
#>   group1 group2     p.value test.details     p.value.adjustment
#>   <chr>  <chr>        <dbl> <chr>            <chr>
#> 1 4      6      0.0106      Student's t-test None
#> 2 4      8      0.000000207 Student's t-test None
#> 3 6      8      0.00516     Student's t-test None
#>   label
#>   <chr>
#> 1 list(~italic(p)[uncorrected]==0.011)
#> 2 list(~italic(p)[uncorrected]==2.07e-07)
#> 3 list(~italic(p)[uncorrected]==0.005)

# if `var.equal = FALSE`, then Games-Howell test will be run
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "parametric",
  var.equal = FALSE,
  paired = FALSE,
  p.adjust.method = "bonferroni"
)
#> # A tibble: 3 × 11
#>   group1 group2 statistic   p.value alternative method            distribution
#>   <chr>  <chr>      <dbl>     <dbl> <chr>       <chr>             <chr>
#> 1 4      6           5.39 0.0125    two.sided   Games-Howell test q
#> 2 4      8           9.11 0.0000124 two.sided   Games-Howell test q
#> 3 6      8           5.12 0.0148    two.sided   Games-Howell test q
#>   p.adjustment test.details      p.value.adjustment
#>   <chr>        <chr>             <chr>
#> 1 none         Games-Howell test Bonferroni
#> 2 none         Games-Howell test Bonferroni
#> 3 none         Games-Howell test Bonferroni
#>   label
#>   <chr>
#> 1 list(~italic(p)[Bonferroni-corrected]==0.012)
#> 2 list(~italic(p)[Bonferroni-corrected]==1.24e-05)
#> 3 list(~italic(p)[Bonferroni-corrected]==0.015)

# non-parametric (Dunn test)
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "nonparametric",
  paired = FALSE,
  p.adjust.method = "none"
)
#> # A tibble: 3 × 11
#>   group1 group2 statistic    p.value alternative method
#>   <chr>  <chr>      <dbl>      <dbl> <chr>       <chr>
#> 1 4      6           1.84 0.0663     two.sided   Dunn's all-pairs test
#> 2 4      8           4.76 0.00000198 two.sided   Dunn's all-pairs test
#> 3 6      8           2.22 0.0263     two.sided   Dunn's all-pairs test
#>   distribution p.adjustment test.details p.value.adjustment
#>   <chr>        <chr>        <chr>        <chr>
#> 1 z            none         Dunn test    None
#> 2 z            none         Dunn test    None
#> 3 z            none         Dunn test    None
#>   label
#>   <chr>
#> 1 list(~italic(p)[uncorrected]==0.066)
#> 2 list(~italic(p)[uncorrected]==1.98e-06)
#> 3 list(~italic(p)[uncorrected]==0.026)

# robust (Yuen's trimmed means t-test)
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "robust",
  paired = FALSE,
  p.adjust.method = "fdr"
)
#> # A tibble: 3 × 10
#>   group1 group2 estimate conf.level conf.low conf.high  p.value
#>   <chr>  <chr>     <dbl>      <dbl>    <dbl>     <dbl>    <dbl>
#> 1 4      6        -0.909       0.95    -1.64    -0.173 0.00872
#> 2 4      8        -1.62        0.95    -2.50    -0.746 0.000549
#> 3 6      8        -0.713       0.95    -1.58     0.155 0.0438
#>   test.details              p.value.adjustment
#>   <chr>                     <chr>
#> 1 Yuen's trimmed means test FDR
#> 2 Yuen's trimmed means test FDR
#> 3 Yuen's trimmed means test FDR
#>   label
#>   <chr>
#> 1 list(~italic(p)[FDR-corrected]==0.009)
#> 2 list(~italic(p)[FDR-corrected]==5.49e-04)
#> 3 list(~italic(p)[FDR-corrected]==0.044)

# Bayes Factor (Student's t-test)
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "bayes",
  paired = FALSE
)
#> # A tibble: 3 × 18
#>   group1 group2 term       estimate conf.level conf.low conf.high    pd
#>   <chr>  <chr>  <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>
#> 1 4      6      Difference    0.686       0.95   0.194       1.25 0.992
#> 2 4      8      Difference    1.62        0.95   1.00        2.18 1
#> 3 6      8      Difference    0.699       0.95   0.0640      1.33 0.986
#>   rope.percentage prior.distribution prior.location prior.scale  bf10
#>             <dbl> <chr>                       <dbl>       <dbl> <dbl>
#> 1         0       cauchy                          0       0.707  11.4
#> 2         0       cauchy                          0       0.707 5222.
#> 3         0.00447 cauchy                          0       0.707  5.36
#>   method          log_e_bf10 expression label
#>   <chr>                <dbl> <list>     <chr>
#> 1 Bayesian t-test       2.44 <language> list(~log[e](BF['01'])==-2.44)
#> 2 Bayesian t-test       8.56 <language> list(~log[e](BF['01'])==-8.56)
#> 3 Bayesian t-test       1.68 <language> list(~log[e](BF['01'])==-1.68)
#>   test.details
#>   <chr>
#> 1 Student's t-test
#> 2 Student's t-test
#> 3 Student's t-test

#------------------- within-subjects design ----------------------------

# parametric (Student's t-test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "parametric",
  paired = TRUE,
  p.adjust.method = "BH"
)
#> # A tibble: 6 × 6
#>   group1 group2  p.value test.details     p.value.adjustment
#>   <chr>  <chr>     <dbl> <chr>            <chr>
#> 1 HDHF   HDLF   1.06e- 3 Student's t-test FDR
#> 2 HDHF   LDHF   7.02e- 2 Student's t-test FDR
#> 3 HDHF   LDLF   3.95e-12 Student's t-test FDR
#> 4 HDLF   LDHF   6.74e- 2 Student's t-test FDR
#> 5 HDLF   LDLF   1.99e- 3 Student's t-test FDR
#> 6 LDHF   LDLF   6.66e- 9 Student's t-test FDR
#>   label
#>   <chr>
#> 1 list(~italic(p)[FDR-corrected]==0.001)
#> 2 list(~italic(p)[FDR-corrected]==0.070)
#> 3 list(~italic(p)[FDR-corrected]==3.95e-12)
#> 4 list(~italic(p)[FDR-corrected]==0.067)
#> 5 list(~italic(p)[FDR-corrected]==0.002)
#> 6 list(~italic(p)[FDR-corrected]==6.66e-09)

# non-parametric (Durbin-Conover test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "nonparametric",
  paired = TRUE,
  p.adjust.method = "BY"
)
#> # A tibble: 6 × 11
#>   group1 group2 statistic  p.value alternative
#>   <chr>  <chr>      <dbl>    <dbl> <chr>
#> 1 HDHF   HDLF        4.78 1.44e- 5 two.sided
#> 2 HDHF   LDHF        2.44 4.47e- 2 two.sided
#> 3 HDHF   LDLF        8.01 5.45e-13 two.sided
#> 4 HDLF   LDHF        2.34 4.96e- 2 two.sided
#> 5 HDLF   LDLF        3.23 5.05e- 3 two.sided
#> 6 LDHF   LDLF        5.57 4.64e- 7 two.sided
#>   method
#>   <chr>
#> 1 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 2 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 3 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 4 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 5 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 6 Durbin's all-pairs test for a two-way balanced incomplete block design
#>   distribution p.adjustment test.details        p.value.adjustment
#>   <chr>        <chr>        <chr>               <chr>
#> 1 t            none         Durbin-Conover test BY
#> 2 t            none         Durbin-Conover test BY
#> 3 t            none         Durbin-Conover test BY
#> 4 t            none         Durbin-Conover test BY
#> 5 t            none         Durbin-Conover test BY
#> 6 t            none         Durbin-Conover test BY
#>   label
#>   <chr>
#> 1 list(~italic(p)[BY-corrected]==1.44e-05)
#> 2 list(~italic(p)[BY-corrected]==0.045)
#> 3 list(~italic(p)[BY-corrected]==5.45e-13)
#> 4 list(~italic(p)[BY-corrected]==0.050)
#> 5 list(~italic(p)[BY-corrected]==0.005)
#> 6 list(~italic(p)[BY-corrected]==4.64e-07)

# robust (Yuen's trimmed means t-test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "robust",
  paired = TRUE,
  p.adjust.method = "hommel"
)
#> # A tibble: 6 × 11
#>   group1 group2 estimate conf.level conf.low conf.high     p.value  p.crit
#>   <chr>  <chr>     <dbl>      <dbl>    <dbl>     <dbl>       <dbl>   <dbl>
#> 1 HDHF   HDLF      1.03        0.95   0.140      1.92  0.00999     0.0127
#> 2 HDHF   LDHF      0.454       0.95  -0.104      1.01  0.0520      0.025
#> 3 HDHF   LDLF      1.95        0.95   1.09       2.82  0.000000564 0.00851
#> 4 HDLF   LDHF     -0.676       0.95  -1.61       0.256 0.0520      0.05
#> 5 HDLF   LDLF      0.889       0.95   0.0244     1.75  0.0203      0.0169
#> 6 LDHF   LDLF      1.35        0.95   0.560      2.14  0.000102    0.0102
#>   test.details              p.value.adjustment
#>   <chr>                     <chr>
#> 1 Yuen's trimmed means test Hommel
#> 2 Yuen's trimmed means test Hommel
#> 3 Yuen's trimmed means test Hommel
#> 4 Yuen's trimmed means test Hommel
#> 5 Yuen's trimmed means test Hommel
#> 6 Yuen's trimmed means test Hommel
#>   label
#>   <chr>
#> 1 list(~italic(p)[Hommel-corrected]==0.010)
#> 2 list(~italic(p)[Hommel-corrected]==0.052)
#> 3 list(~italic(p)[Hommel-corrected]==5.64e-07)
#> 4 list(~italic(p)[Hommel-corrected]==0.052)
#> 5 list(~italic(p)[Hommel-corrected]==0.020)
#> 6 list(~italic(p)[Hommel-corrected]==1.02e-04)

# Bayes Factor (Student's t-test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "bayes",
  paired = TRUE
)
#> # A tibble: 6 × 18
#>   group1 group2 term       estimate conf.level conf.low conf.high    pd
#>   <chr>  <chr>  <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>
#> 1 HDHF   HDLF   Difference   -1.10        0.95  -1.77     -0.505  1.00
#> 2 HDHF   LDHF   Difference   -0.454       0.95  -0.949     0.0320 0.966
#> 3 HDHF   LDLF   Difference   -2.13        0.95  -2.67     -1.65   1
#> 4 HDLF   LDHF   Difference    0.650       0.95  -0.0145    1.31   0.973
#> 5 HDLF   LDLF   Difference   -0.975       0.95  -1.56     -0.383  1.00
#> 6 LDHF   LDLF   Difference   -1.66        0.95  -2.19     -1.18   1
#>   rope.percentage prior.distribution prior.location prior.scale     bf10
#>             <dbl> <chr>                       <dbl>       <dbl>    <dbl>
#> 1           0     cauchy                          0       0.707 4.16e+ 1
#> 2           0.184 cauchy                          0       0.707 5.83e- 1
#> 3           0     cauchy                          0       0.707 1.20e+10
#> 4           0.155 cauchy                          0       0.707 6.98e- 1
#> 5           0     cauchy                          0       0.707 1.81e+ 1
#> 6           0     cauchy                          0       0.707 4.81e+ 6
#>   method          log_e_bf10 expression label
#>   <chr>                <dbl> <list>     <chr>
#> 1 Bayesian t-test      3.73  <language> list(~log[e](BF['01'])==-3.73)
#> 2 Bayesian t-test     -0.539 <language> list(~log[e](BF['01'])==0.54)
#> 3 Bayesian t-test     23.2   <language> list(~log[e](BF['01'])==-23.21)
#> 4 Bayesian t-test     -0.359 <language> list(~log[e](BF['01'])==0.36)
#> 5 Bayesian t-test      2.90  <language> list(~log[e](BF['01'])==-2.90)
#> 6 Bayesian t-test     15.4   <language> list(~log[e](BF['01'])==-15.39)
#>   test.details
#>   <chr>
#> 1 Student's t-test
#> 2 Student's t-test
#> 3 Student's t-test
#> 4 Student's t-test
#> 5 Student's t-test
#> 6 Student's t-test
# }