A dataframe containing results from a two-sample test, along with the effect size and its confidence interval.

To see details about the functions used internally to carry out these analyses, see the following vignette: https://indrajeetpatil.github.io/statsExpressions/articles/stats_details.html

two_sample_test(
  data,
  x,
  y,
  subject.id = NULL,
  type = "parametric",
  paired = FALSE,
  k = 2L,
  conf.level = 0.95,
  effsize.type = "g",
  var.equal = FALSE,
  bf.prior = 0.707,
  tr = 0.2,
  nboot = 100L,
  top.text = NULL,
  ...
)

Arguments

data

A dataframe (or a tibble) from which the specified variables are to be taken. Other data types (e.g., matrix, table, array) will not be accepted.

x

The grouping (or independent) variable from the dataframe data. In case of a repeated measures or within-subjects design, if the subject.id argument is not available or not explicitly specified, the function assumes that the data has already been sorted by such an id by the user and creates an internal identifier. So if your data is not sorted, the results can be inaccurate when there are more than two levels in x and NAs are present. The data is expected to be sorted by the user in a subject-1, subject-2, ... pattern.

y

The response (or outcome or dependent) variable from the dataframe data.

subject.id

Relevant in case of a repeated measures or within-subjects design (i.e., paired = TRUE); it specifies the subject or repeated measures identifier. Important: Note that if this argument is NULL (which is the default), the function assumes that the data has already been sorted by such an id by the user and creates an internal identifier. So if your data is not sorted and you leave this argument unspecified, the results can be inaccurate when there are more than two levels in x and NAs are present.
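A minimal sketch of the sorting assumption, using a toy data frame with hypothetical values (not a dataset shipped with the package): when subject.id is left NULL, rows must be ordered by the subject identifier within each level of the grouping variable so that matching rows pair up.

```r
# Toy paired data (hypothetical values): rows arrive in arbitrary order
df <- data.frame(
  id    = c(2, 1, 3, 1, 3, 2),
  cond  = c("a", "a", "a", "b", "b", "b"),
  score = c(4.1, 3.0, 5.2, 4.5, 5.9, 3.8)
)

# Sort by subject id within each condition so that row i of level "a"
# pairs with row i of level "b" -- the ordering the function assumes
# when subject.id is NULL
df_sorted <- df[order(df$cond, df$id), ]
df_sorted$id  # 1 2 3 1 2 3
```

Passing the identifier explicitly via subject.id = id makes the call robust to row order and is the safer choice.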

type

A character specifying the type of statistical approach:

  • "parametric"

  • "nonparametric"

  • "robust"

  • "bayes"

This argument also accepts the following abbreviations: "p" (for parametric), "np" (for nonparametric), "r" (for robust), "bf" (for Bayes Factor or Bayesian).

paired

Logical that decides whether the experimental design is repeated measures/within-subjects or between-subjects. The default is FALSE.

k

Number of digits after the decimal point (should be an integer). Default: k = 2L.

conf.level

Confidence/Credible Interval (CI) level. Defaults to 0.95 (95%).

effsize.type

Type of effect size needed for parametric tests. The argument can be "d" (for Cohen's d) or "g" (for Hedges' g).

var.equal

A logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise, the Welch (or Satterthwaite) approximation to the degrees of freedom is used.
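The distinction mirrors base R's stats::t.test(), which can illustrate it directly on the built-in sleep data (shown only for illustration; this is not part of two_sample_test's API):

```r
# Welch's t-test (var.equal = FALSE): Satterthwaite-approximated,
# typically fractional, degrees of freedom
res_welch <- t.test(extra ~ group, data = sleep, var.equal = FALSE)

# Student's t-test (var.equal = TRUE): pooled variance,
# df = n1 + n2 - 2 = 18 for the sleep data
res_pooled <- t.test(extra ~ group, data = sleep, var.equal = TRUE)

res_welch$parameter   # df ~ 17.8
res_pooled$parameter  # df = 18
```

The Welch degrees of freedom (~17.8) are the ones reported in the parametric between-subjects example in the Examples section.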

bf.prior

A number between 0.5 and 2 (default 0.707), the prior width to use in calculating Bayes factors and posterior estimates. In addition to numeric arguments, several named values are also recognized: "medium", "wide", and "ultrawide", corresponding to r scale values of 1/2, sqrt(2)/2, and 1, respectively. In case of an ANOVA, this value corresponds to scale for fixed effects.

tr

Trim level for the mean when carrying out robust tests. The default is 0.2 (a 20% trimmed mean); in case of an error, try reducing this value.
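What the trimming proportion means can be sketched with base R's mean(trim = ) on toy numbers (hypothetical values, shown only to clarify the idea):

```r
x <- c(1, 2, 3, 4, 100)  # one extreme observation

mean(x)              # 22 -- the ordinary mean is dragged up by the outlier
mean(x, trim = 0.2)  # 3  -- a 20% trimmed mean discards floor(0.2 * 5) = 1
                     #      observation from each tail before averaging
```

Trimming makes the location estimate resistant to outliers, which is the rationale behind the robust (Yuen's) tests used when type = "robust".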

nboot

Number of bootstrap samples for computing confidence interval for the effect size (Default: 100L).

top.text

Text to display on top of the Bayes Factor message. This is mostly relevant in the context of ggstatsplot functions.

...

Currently ignored.

Examples

# \donttest{
# for reproducibility
set.seed(123)
library(statsExpressions)
options(tibble.width = Inf, pillar.bold = TRUE, pillar.neg = TRUE)

# ----------------------- parametric -------------------------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "p"
)
#> # A tibble: 1 x 14
#>   term  group mean.group1 mean.group2 statistic df.error p.value
#>   <chr> <chr>       <dbl>       <dbl>     <dbl>    <dbl>   <dbl>
#> 1 extra group        0.75        2.33     -1.86     17.8  0.0794
#>   method                  estimate conf.level conf.low conf.high effectsize
#>   <chr>                      <dbl>      <dbl>    <dbl>     <dbl> <chr>
#> 1 Welch Two Sample t-test   -0.797       0.95    -1.67    0.0914 Hedges' g
#>   expression
#>   <list>
#> 1 <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "p"
)
#> # A tibble: 1 x 12
#>   term  group    statistic df.error  p.value method        estimate conf.level
#>   <chr> <chr>        <dbl>    <dbl>    <dbl> <chr>            <dbl>      <dbl>
#> 1 score modality     -3.96       33 0.000373 Paired t-test   -0.664       0.95
#>   conf.low conf.high effectsize expression
#>      <dbl>     <dbl> <chr>      <list>
#> 1    -1.04    -0.299 Hedges' g  <language>

# ----------------------- non-parametric ----------------------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "np"
)
#> # A tibble: 1 x 11
#>   parameter1 parameter2 statistic p.value method                 estimate
#>   <chr>      <chr>          <dbl>   <dbl> <chr>                     <dbl>
#> 1 extra      group           3.24  0.0693 Wilcoxon rank sum test    -0.49
#>   conf.level conf.low conf.high effectsize        expression
#>        <dbl>    <dbl>     <dbl> <chr>             <list>
#> 1       0.95   -0.850   -0.0690 r (rank biserial) <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "np"
)
#> # A tibble: 1 x 11
#>   parameter1 parameter2 statistic  p.value method                    estimate
#>   <chr>      <chr>          <dbl>    <dbl> <chr>                        <dbl>
#> 1 score      modality        1.50 0.000886 Wilcoxon signed rank test   -0.934
#>   conf.level conf.low conf.high effectsize        expression
#>        <dbl>    <dbl>     <dbl> <chr>             <list>
#> 1       0.95       -1    -0.761 r (rank biserial) <language>

# ------------------------------ robust ----------------------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "r"
)
#> # A tibble: 1 x 10
#>   statistic df.error p.value
#>       <dbl>    <dbl>   <dbl>
#> 1      1.62     8.26   0.143
#>   method                                               estimate conf.low
#>   <chr>                                                   <dbl>    <dbl>
#> 1 Yuen's test on trimmed means for independent samples    0.516        0
#>   conf.high conf.level effectsize                         expression
#>       <dbl>      <dbl> <chr>                              <list>
#> 1     0.877       0.95 Explanatory measure of effect size <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "r"
)
#> Warning: the standard deviation is zero
#> # A tibble: 1 x 10
#>   statistic df.error p.value method
#>       <dbl>    <dbl>   <dbl> <chr>
#> 1     -2.56       21  0.0182 Yuen's test on trimmed means for dependent samples
#>   estimate conf.low conf.high conf.level
#>      <dbl>    <dbl>     <dbl>      <dbl>
#> 1   -0.381   -0.704    -0.238       0.95
#>   effectsize                                              expression
#>   <chr>                                                   <list>
#> 1 Algina-Keselman-Penfield robust standardized difference <language>

# ------------------------------ Bayesian ------------------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "bayes"
)
#> # A tibble: 2 x 13
#>   term       estimate conf.level conf.low conf.high    pd rope.percentage
#>   <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>           <dbl>
#> 1 Difference    1.15        0.95   -0.400     2.70  0.934          0.0674
#> 2 Cohens_d     -0.601       0.95   -1.50      0.210 0.930          0.0834
#>   prior.distribution prior.location prior.scale  bf10 method          expression
#>   <chr>                       <dbl>       <dbl> <dbl> <chr>           <list>
#> 1 cauchy                          0       0.707  1.27 Bayesian t-test <language>
#> 2 cauchy                          0       0.707  1.27 Bayesian t-test <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "bayes"
)
#> # A tibble: 2 x 13
#>   term       estimate conf.level conf.low conf.high    pd rope.percentage
#>   <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>           <dbl>
#> 1 Difference    0.172       0.95   0.0818    0.266  1                   0
#> 2 Cohens_d     -0.637       0.95  -0.992    -0.258  0.999               0
#>   prior.distribution prior.location prior.scale  bf10 method          expression
#>   <chr>                       <dbl>       <dbl> <dbl> <chr>           <list>
#> 1 cauchy                          0       0.707  77.1 Bayesian t-test <language>
#> 2 cauchy                          0       0.707  77.1 Bayesian t-test <language>
# }