Calculate parametric, non-parametric, robust, and Bayes Factor pairwise comparisons between group levels with corrections for multiple testing.

pairwise_comparisons(
  data,
  x,
  y,
  subject.id = NULL,
  type = "parametric",
  paired = FALSE,
  var.equal = FALSE,
  tr = 0.2,
  bf.prior = 0.707,
  p.adjust.method = "holm",
  k = 2L,
  ...
)

## Arguments

data
  A dataframe (or a tibble) from which the specified variables are to be taken. Other data types (e.g., matrix, table, array, etc.) will not be accepted.

x
  The grouping (or independent) variable from the dataframe data.

y
  The response (or outcome or dependent) variable from the dataframe data.

subject.id
  Relevant in case of a repeated measures or within-subjects design (i.e., paired = TRUE); it specifies the subject or repeated measures identifier. Important: if this argument is NULL (the default), the function assumes that the data has already been sorted by such an id by the user and creates an internal identifier. So if your data is not sorted and you leave this argument unspecified, the results can be inaccurate.

type
  Type of statistic expected ("parametric", "nonparametric", "robust", or "bayes"). Corresponding abbreviations are also accepted: "p" (for parametric), "np" (nonparametric), "r" (robust), or "bf" (Bayes Factor), respectively.

paired
  Logical that decides whether the experimental design is repeated measures/within-subjects or between-subjects. The default is FALSE.

var.equal
  A logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise the Welch (or Satterthwaite) approximation to the degrees of freedom is used.

tr
  Trim level for the mean when carrying out robust tests. If you get an error stating "Standard error cannot be computed because of Winsorized variance of 0 (e.g., due to ties). Try to decrease the trimming level.", try to play around with the value of tr, which is set to 0.2 by default. Lowering the value might help.

bf.prior
  A number between 0.5 and 2 (default 0.707), the prior width to use in calculating Bayes Factors.

p.adjust.method
  Adjustment method for p-values for multiple comparisons. Possible methods are: "holm" (default), "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none".

k
  Number of digits after the decimal point (should be an integer) (default: k = 2L).

...
  Additional arguments passed to other methods.
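For instance, the defaults described above can be overridden as follows (a minimal sketch using the documented arguments; output omitted):

```r
library(pairwiseComparisons)

# abbreviated type and a Bonferroni correction instead of the default Holm
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "np", # same as "nonparametric"
  p.adjust.method = "bonferroni",
  k = 3L # three decimal places in the labels
)
```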

## Value

A tibble containing two columns corresponding to the group levels being compared with each other (group1 and group2) and a p.value column for each comparison. The tibble will also contain a label column with a formatted label for this p-value, in case it needs to be displayed with ggsignif::geom_signif(). In addition to these common columns across the different types of statistics, there will be additional columns specific to the type of test being run.
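The label column can be fed to ggsignif for annotating a plot. A minimal sketch, assuming the ggplot2 and ggsignif packages are installed (the y_position values are arbitrary manual offsets):

```r
library(ggplot2)
library(ggsignif)
library(pairwiseComparisons)

df <- pairwise_comparisons(mtcars, cyl, wt, type = "parametric")

ggplot(mtcars, aes(x = as.factor(cyl), y = wt)) +
  geom_boxplot() +
  ggsignif::geom_signif(
    comparisons = Map(c, df$group1, df$group2), # list of group pairs
    annotations = df$label,
    y_position = c(5.5, 6.0, 6.5), # manual offsets so brackets don't overlap
    parse = TRUE # labels are plotmath expressions
  )
```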

## Details

This function provides a unified syntax to carry out pairwise comparison tests and internally relies on other packages to carry out these tests. For more details about the included tests, see the documentation for the respective functions:

• parametric : stats::pairwise.t.test() (paired) and PMCMRplus::gamesHowellTest() (unpaired)

• non-parametric : PMCMRplus::durbinAllPairsTest() (paired) and PMCMRplus::kwAllPairsDunnTest() (unpaired)

• robust : WRS2::rmmcp() (paired) and WRS2::lincon() (unpaired)

• Bayes Factor : BayesFactor::ttestBF()

## Examples

# \donttest{
# for reproducibility
set.seed(123)
library(pairwiseComparisons)

# show all columns and make the column titles bold
# as a user, you don't need to do this; this is just for the package website
options(tibble.width = Inf, pillar.bold = TRUE, pillar.neg = TRUE, pillar.subtle_num = TRUE)

#------------------- between-subjects design ----------------------------

# parametric
# if var.equal = TRUE, then Student's t-test will be run
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "parametric",
  var.equal = TRUE,
  paired = FALSE
)
#> # A tibble: 3 x 6
#>   group1 group2     p.value test.details     p.value.adjustment
#>   <chr>  <chr>        <dbl> <chr>            <chr>
#> 1 4      6      0.0106      Student's t-test None
#> 2 4      8      0.000000207 Student's t-test None
#> 3 6      8      0.00516     Student's t-test None
#>   label
#>   <chr>
#> 1 list(~italic(p)[uncorrected]==0.011)
#> 2 list(~italic(p)[uncorrected]==2.07e-07)
#> 3 list(~italic(p)[uncorrected]==0.005)
# if var.equal = FALSE, then Games-Howell test will be run
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "parametric",
  var.equal = FALSE,
  paired = FALSE
)
#> # A tibble: 3 x 11
#>   group1 group2 statistic   p.value alternative method            distribution
#>   <chr>  <chr>      <dbl>     <dbl> <chr>       <chr>             <chr>
#> 1 4      6           5.39 0.0125    two.sided   Games-Howell test q
#> 2 4      8           9.11 0.0000124 two.sided   Games-Howell test q
#> 3 6      8           5.12 0.0148    two.sided   Games-Howell test q
#>   p.adjustment test.details      p.value.adjustment
#>   <chr>        <chr>             <chr>
#> 1 none         Games-Howell test Bonferroni
#> 2 none         Games-Howell test Bonferroni
#> 3 none         Games-Howell test Bonferroni
#>   label
#>   <chr>
#> 1 list(~italic(p)[Bonferroni-corrected]==0.012)
#> 2 list(~italic(p)[Bonferroni-corrected]==1.24e-05)
#> 3 list(~italic(p)[Bonferroni-corrected]==0.015)
# non-parametric (Dunn test)
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "nonparametric",
  paired = FALSE
)
#> # A tibble: 3 x 11
#>   group1 group2 statistic    p.value alternative method
#>   <chr>  <chr>      <dbl>      <dbl> <chr>       <chr>
#> 1 4      6           1.84 0.0663     two.sided   Dunn's all-pairs test
#> 2 4      8           4.76 0.00000198 two.sided   Dunn's all-pairs test
#> 3 6      8           2.22 0.0263     two.sided   Dunn's all-pairs test
#>   distribution p.adjustment test.details p.value.adjustment
#>   <chr>        <chr>        <chr>        <chr>
#> 1 z            none         Dunn test    None
#> 2 z            none         Dunn test    None
#> 3 z            none         Dunn test    None
#>   label
#>   <chr>
#> 1 list(~italic(p)[uncorrected]==0.066)
#> 2 list(~italic(p)[uncorrected]==1.98e-06)
#> 3 list(~italic(p)[uncorrected]==0.026)
# robust (Yuen's trimmed means t-test)
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "robust",
  paired = FALSE
)
#> # A tibble: 3 x 10
#>   group1 group2 estimate conf.level conf.low conf.high p.value
#>   <chr>  <chr>     <dbl>      <dbl>    <dbl>     <dbl>   <dbl>
#> 1 4      6        -0.909       0.95    -1.64    -0.173 0.0174
#> 2 4      8        -1.62        0.95    -2.50    -0.746 0.00165
#> 3 6      8        -0.713       0.95    -1.58     0.155 0.0438
#>   test.details              p.value.adjustment
#>   <chr>                     <chr>
#> 1 Yuen's trimmed means test FDR
#> 2 Yuen's trimmed means test FDR
#> 3 Yuen's trimmed means test FDR
#>   label
#>   <chr>
#> 1 list(~italic(p)[FDR-corrected]==0.017)
#> 2 list(~italic(p)[FDR-corrected]==0.002)
#> 3 list(~italic(p)[FDR-corrected]==0.044)
# Bayes Factor (Student's t-test)
pairwise_comparisons(
  data = mtcars,
  x = cyl,
  y = wt,
  type = "bayes",
  paired = FALSE
)
#> # A tibble: 3 x 17
#>   group1 group2 term       estimate conf.level conf.low conf.high    pd
#>   <chr>  <chr>  <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>
#> 1 4      6      Difference    0.686       0.89    0.252      1.10 0.992
#> 2 4      8      Difference    1.63        0.89    1.14       2.11 1
#> 3 6      8      Difference    0.715       0.89    0.203      1.21 0.987
#>   rope.percentage prior.distribution prior.location prior.scale    bf10
#>             <dbl> <chr>                       <dbl>       <dbl>   <dbl>
#> 1               0 cauchy                          0       0.707   11.4
#> 2               0 cauchy                          0       0.707 5222.
#> 3               0 cauchy                          0       0.707    5.36
#>   method          log_e_bf10 label                          test.details
#>   <chr>                <dbl> <chr>                          <chr>
#> 1 Bayesian t-test       2.44 list(~log[e](BF['01'])==-2.44) Student's t-test
#> 2 Bayesian t-test       8.56 list(~log[e](BF['01'])==-8.56) Student's t-test
#> 3 Bayesian t-test       1.68 list(~log[e](BF['01'])==-1.68) Student's t-test
#------------------- within-subjects design ----------------------------

# parametric (Student's t-test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "parametric",
  paired = TRUE
)
#> # A tibble: 6 x 6
#>   group1 group2  p.value test.details     p.value.adjustment
#>   <chr>  <chr>     <dbl> <chr>            <chr>
#> 1 HDHF   HDLF   1.06e- 3 Student's t-test FDR
#> 2 HDHF   LDHF   7.02e- 2 Student's t-test FDR
#> 3 HDHF   LDLF   3.95e-12 Student's t-test FDR
#> 4 HDLF   LDHF   6.74e- 2 Student's t-test FDR
#> 5 HDLF   LDLF   1.99e- 3 Student's t-test FDR
#> 6 LDHF   LDLF   6.66e- 9 Student's t-test FDR
#>   label
#>   <chr>
#> 1 list(~italic(p)[FDR-corrected]==0.001)
#> 2 list(~italic(p)[FDR-corrected]==0.070)
#> 3 list(~italic(p)[FDR-corrected]==3.95e-12)
#> 4 list(~italic(p)[FDR-corrected]==0.067)
#> 5 list(~italic(p)[FDR-corrected]==0.002)
#> 6 list(~italic(p)[FDR-corrected]==6.66e-09)
# non-parametric (Durbin-Conover test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "nonparametric",
  paired = TRUE
)
#> # A tibble: 6 x 11
#>   group1 group2 statistic  p.value alternative
#>   <chr>  <chr>      <dbl>    <dbl> <chr>
#> 1 HDHF   HDLF        4.78 1.44e- 5 two.sided
#> 2 HDHF   LDHF        2.44 4.47e- 2 two.sided
#> 3 HDHF   LDLF        8.01 5.45e-13 two.sided
#> 4 HDLF   LDHF        2.34 4.96e- 2 two.sided
#> 5 HDLF   LDLF        3.23 5.05e- 3 two.sided
#> 6 LDHF   LDLF        5.57 4.64e- 7 two.sided
#>   method
#>   <chr>
#> 1 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 2 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 3 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 4 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 5 Durbin's all-pairs test for a two-way balanced incomplete block design
#> 6 Durbin's all-pairs test for a two-way balanced incomplete block design
#>   distribution p.adjustment test.details        p.value.adjustment
#>   <chr>        <chr>        <chr>               <chr>
#> 1 t            none         Durbin-Conover test BY
#> 2 t            none         Durbin-Conover test BY
#> 3 t            none         Durbin-Conover test BY
#> 4 t            none         Durbin-Conover test BY
#> 5 t            none         Durbin-Conover test BY
#> 6 t            none         Durbin-Conover test BY
#>   label
#>   <chr>
#> 1 list(~italic(p)[BY-corrected]==1.44e-05)
#> 2 list(~italic(p)[BY-corrected]==0.045)
#> 3 list(~italic(p)[BY-corrected]==5.45e-13)
#> 4 list(~italic(p)[BY-corrected]==0.050)
#> 5 list(~italic(p)[BY-corrected]==0.005)
#> 6 list(~italic(p)[BY-corrected]==4.64e-07)
# robust (Yuen's trimmed means t-test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "robust",
  paired = TRUE
)
#> # A tibble: 6 x 11
#>   group1 group2 estimate conf.level conf.low conf.high     p.value  p.crit
#>   <chr>  <chr>     <dbl>      <dbl>    <dbl>     <dbl>       <dbl>   <dbl>
#> 1 HDHF   HDLF      1.03        0.95   0.140      1.92  0.00999     0.0127
#> 2 HDHF   LDHF      0.454       0.95  -0.104      1.01  0.0520      0.025
#> 3 HDHF   LDLF      1.95        0.95   1.09       2.82  0.000000564 0.00851
#> 4 HDLF   LDHF     -0.676       0.95  -1.61       0.256 0.0520      0.05
#> 5 HDLF   LDLF      0.889       0.95   0.0244     1.75  0.0203      0.0169
#> 6 LDHF   LDLF      1.35        0.95   0.560      2.14  0.000102    0.0102
#>   test.details              p.value.adjustment
#>   <chr>                     <chr>
#> 1 Yuen's trimmed means test Hommel
#> 2 Yuen's trimmed means test Hommel
#> 3 Yuen's trimmed means test Hommel
#> 4 Yuen's trimmed means test Hommel
#> 5 Yuen's trimmed means test Hommel
#> 6 Yuen's trimmed means test Hommel
#>   label
#>   <chr>
#> 1 list(~italic(p)[Hommel-corrected]==0.010)
#> 2 list(~italic(p)[Hommel-corrected]==0.052)
#> 3 list(~italic(p)[Hommel-corrected]==5.64e-07)
#> 4 list(~italic(p)[Hommel-corrected]==0.052)
#> 5 list(~italic(p)[Hommel-corrected]==0.020)
#> 6 list(~italic(p)[Hommel-corrected]==1.02e-04)
# Bayes Factor (Student's t-test)
pairwise_comparisons(
  data = bugs_long,
  x = condition,
  y = desire,
  subject.id = subject,
  type = "bayes",
  paired = TRUE
)
#> # A tibble: 6 x 17
#>   group1 group2 term       estimate conf.level conf.low conf.high    pd
#>   <chr>  <chr>  <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>
#> 1 HDHF   HDLF   Difference   -1.10        0.89   -1.61    -0.575  1.00
#> 2 HDHF   LDHF   Difference   -0.455       0.89   -0.883   -0.0675 0.962
#> 3 HDHF   LDLF   Difference   -2.13        0.89   -2.56    -1.73   1
#> 4 HDLF   LDHF   Difference    0.661       0.89    0.126    1.21   0.97
#> 5 HDLF   LDLF   Difference   -0.991       0.89   -1.49    -0.500  0.999
#> 6 LDHF   LDLF   Difference   -1.65        0.89   -2.05    -1.24   1
#>   rope.percentage prior.distribution prior.location prior.scale     bf10
#>             <dbl> <chr>                       <dbl>       <dbl>    <dbl>
#> 1           0     cauchy                          0       0.707 4.16e+ 1
#> 2           0.161 cauchy                          0       0.707 5.83e- 1
#> 3           0     cauchy                          0       0.707 1.20e+10
#> 4           0.113 cauchy                          0       0.707 6.98e- 1
#> 5           0     cauchy                          0       0.707 1.81e+ 1
#> 6           0     cauchy                          0       0.707 4.81e+ 6
#>   method          log_e_bf10 label                           test.details
#>   <chr>                <dbl> <chr>                           <chr>
#> 1 Bayesian t-test      3.73  list(~log[e](BF['01'])==-3.73)  Student's t-test
#> 2 Bayesian t-test     -0.539 list(~log[e](BF['01'])==0.54)   Student's t-test
#> 3 Bayesian t-test     23.2   list(~log[e](BF['01'])==-23.21) Student's t-test
#> 4 Bayesian t-test     -0.359 list(~log[e](BF['01'])==0.36)   Student's t-test
#> 5 Bayesian t-test      2.90  list(~log[e](BF['01'])==-2.90)  Student's t-test
#> 6 Bayesian t-test     15.4   list(~log[e](BF['01'])==-15.39) Student's t-test
# }