A dataframe containing the results from a two-sample test, along with the effect size and its confidence interval.
For details about the functions used internally to carry out these analyses, see the following vignette: https://indrajeetpatil.github.io/statsExpressions/articles/stats_details.html
two_sample_test(
  data,
  x,
  y,
  subject.id = NULL,
  type = "parametric",
  paired = FALSE,
  k = 2L,
  conf.level = 0.95,
  effsize.type = "g",
  var.equal = FALSE,
  bf.prior = 0.707,
  tr = 0.2,
  nboot = 100L,
  top.text = NULL,
  ...
)
data  A dataframe (or a tibble) from which the specified variables are to be taken. Other data types (e.g., matrix, table, array, etc.) will not be accepted.

x  The grouping (or independent) variable from the dataframe.

y  The response (or outcome or dependent) variable from the dataframe.
subject.id  Relevant in case of a repeated measures or within-subjects design (i.e., when paired = TRUE); this argument specifies the subject or repeated-measures identifier linking the observations across conditions.
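A minimal sketch of the long-format data layout subject.id expects (the data frame and column names here are hypothetical, invented for illustration): each subject contributes one row per condition, and subject.id tells the function which rows belong to the same subject.

```r
# Hypothetical long-format data for a paired design: three subjects,
# each measured under two conditions ("a" and "b").
df <- data.frame(
  id        = rep(1:3, each = 2),         # subject identifier
  condition = rep(c("a", "b"), times = 3), # within-subjects factor
  score     = c(5, 7, 4, 6, 8, 9)          # outcome variable
)

# A paired call would then look like (sketch, not run here):
# two_sample_test(df, condition, score, paired = TRUE, subject.id = id)
```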
type  A character specifying the type of statistical approach: "parametric", "nonparametric", "robust", or "bayes". This argument also accepts the following abbreviations: "p", "np", "r", or "bf", respectively.
paired  Logical that decides whether the experimental design is repeated measures/within-subjects or between-subjects. The default is FALSE (between-subjects design).

k  Number of digits after decimal point (should be an integer) (Default: 2L).
conf.level  Confidence/Credible Interval (CI) level. Defaults to 0.95 (95%).
effsize.type  Type of effect size needed for parametric tests. The argument can be "d" (for Cohen's d) or "g" (for Hedges' g, the default).
var.equal  A logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise the Welch (or Satterthwaite) approximation to the degrees of freedom is used (Default: FALSE).
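A minimal base-R sketch of what var.equal toggles, using the sleep dataset that also appears in the examples below (this illustrates the underlying stats::t.test behavior, not the package's internal implementation):

```r
# var.equal = TRUE gives the classic pooled-variance Student's t-test;
# var.equal = FALSE (the default) gives Welch's t-test, which does not
# assume equal variances.
student <- t.test(extra ~ group, data = sleep, var.equal = TRUE)
welch   <- t.test(extra ~ group, data = sleep, var.equal = FALSE)

student$method   # Student's "Two Sample t-test"
welch$method     # "Welch Two Sample t-test"

# Welch's degrees of freedom come from the Satterthwaite approximation,
# so they are fractional (about 17.8 here) rather than n1 + n2 - 2 = 18.
welch$parameter
```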
bf.prior  A number between 0.5 and 2 (Default: 0.707), the prior width to use in calculating Bayes factors and posterior estimates.
tr  Trim level for the mean when carrying out robust tests (Default: 0.2, i.e., a 20% trimmed mean).
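A minimal base-R sketch of what the trim level means (this shows the general idea of a trimmed mean via base R's mean(), not the package's internal robust machinery): tr = 0.2 discards the lowest and highest 20% of observations before averaging, which protects the estimate against outliers.

```r
x <- c(1, 2, 3, 4, 100)  # one extreme value

mean(x)              # ordinary mean: 22, pulled up by the outlier
mean(x, trim = 0.2)  # 20% trimmed mean: drops 1 and 100, giving 3
```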
nboot  Number of bootstrap samples for computing confidence interval for the effect size (Default: 100L).
top.text  Text to display on top of the Bayes Factor message. This is mostly relevant in the context of ggstatsplot functions.
...  Currently ignored. 
# \donttest{
# for reproducibility
set.seed(123)
library(statsExpressions)
options(tibble.width = Inf, pillar.bold = TRUE, pillar.neg = TRUE)

# ------------------------- parametric -------------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "p"
)
#> # A tibble: 1 x 14
#>   term  group mean.group1 mean.group2 statistic df.error p.value
#>   <chr> <chr>       <dbl>       <dbl>     <dbl>    <dbl>   <dbl>
#> 1 extra group        0.75        2.33     -1.86     17.8  0.0794
#>   method                  estimate conf.level conf.low conf.high effectsize
#>   <chr>                      <dbl>      <dbl>    <dbl>     <dbl> <chr>
#> 1 Welch Two Sample t-test   -0.797       0.95    -1.67    0.0914 Hedges' g
#>   expression
#>   <list>
#> 1 <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "p"
)
#> # A tibble: 1 x 12
#>   term  group    statistic df.error  p.value method        estimate conf.level
#>   <chr> <chr>        <dbl>    <dbl>    <dbl> <chr>            <dbl>      <dbl>
#> 1 score modality      3.96       33 0.000373 Paired t-test   -0.664       0.95
#>   conf.low conf.high effectsize expression
#>      <dbl>     <dbl> <chr>      <list>
#> 1    -1.04    -0.299 Hedges' g  <language>

# ----------------------- nonparametric -----------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "np"
)
#> # A tibble: 1 x 11
#>   parameter1 parameter2 statistic p.value method                 estimate
#>   <chr>      <chr>          <dbl>   <dbl> <chr>                     <dbl>
#> 1 extra      group           3.24  0.0693 Wilcoxon rank sum test    -0.49
#>   conf.level conf.low conf.high effectsize        expression
#>        <dbl>    <dbl>     <dbl> <chr>             <list>
#> 1       0.95   -0.850    0.0690 r (rank biserial) <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "np"
)
#> # A tibble: 1 x 11
#>   parameter1 parameter2 statistic  p.value method                    estimate
#>   <chr>      <chr>          <dbl>    <dbl> <chr>                        <dbl>
#> 1 score      modality        1.50 0.000886 Wilcoxon signed rank test   -0.934
#>   conf.level conf.low conf.high effectsize        expression
#>        <dbl>    <dbl>     <dbl> <chr>             <list>
#> 1       0.95       -1    -0.761 r (rank biserial) <language>

# --------------------------- robust ---------------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "r"
)
#> # A tibble: 1 x 10
#>   statistic df.error p.value
#>       <dbl>    <dbl>   <dbl>
#> 1      1.62     8.26   0.143
#>   method                                               estimate conf.low
#>   <chr>                                                   <dbl>    <dbl>
#> 1 Yuen's test on trimmed means for independent samples    0.516        0
#>   conf.high conf.level effectsize                         expression
#>       <dbl>      <dbl> <chr>                              <list>
#> 1     0.877       0.95 Explanatory measure of effect size <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "r"
)
#> Warning: the standard deviation is zero
#> # A tibble: 1 x 10
#>   statistic df.error p.value method
#>       <dbl>    <dbl>   <dbl> <chr>
#> 1      2.56       21  0.0182 Yuen's test on trimmed means for dependent samples
#>   estimate conf.low conf.high conf.level
#>      <dbl>    <dbl>     <dbl>      <dbl>
#> 1   -0.381   -0.704    -0.238       0.95
#>   effectsize                                              expression
#>   <chr>                                                   <list>
#> 1 Algina-Keselman-Penfield robust standardized difference <language>

# -------------------------- Bayesian --------------------------

# between-subjects design
two_sample_test(
  data = sleep,
  x = group,
  y = extra,
  type = "bayes"
)
#> # A tibble: 2 x 13
#>   term       estimate conf.level conf.low conf.high    pd rope.percentage
#>   <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>           <dbl>
#> 1 Difference    1.15        0.95    0.400     2.70  0.934          0.0674
#> 2 Cohens_d     -0.601       0.95   -1.50     -0.210 0.930          0.0834
#>   prior.distribution prior.location prior.scale  bf10 method          expression
#>   <chr>                       <dbl>       <dbl> <dbl> <chr>           <list>
#> 1 cauchy                          0       0.707  1.27 Bayesian t-test <language>
#> 2 cauchy                          0       0.707  1.27 Bayesian t-test <language>

# within-subjects design
two_sample_test(
  data = VR_dilemma,
  x = modality,
  y = score,
  paired = TRUE,
  subject.id = id,
  type = "bayes"
)
#> # A tibble: 2 x 13
#>   term       estimate conf.level conf.low conf.high    pd rope.percentage
#>   <chr>         <dbl>      <dbl>    <dbl>     <dbl> <dbl>           <dbl>
#> 1 Difference    0.172       0.95   0.0818     0.266 1                   0
#> 2 Cohens_d     -0.637       0.95  -0.992    -0.258  0.999               0
#>   prior.distribution prior.location prior.scale  bf10 method          expression
#>   <chr>                       <dbl>       <dbl> <dbl> <chr>           <list>
#> 1 cauchy                          0       0.707  77.1 Bayesian t-test <language>
#> 2 cauchy                          0       0.707  77.1 Bayesian t-test <language>
# }