| id | source_type | title | url | language | year | topics | text |
|---|---|---|---|---|---|---|---|
| wiki::en::A/B testing | wiki | A/B testing | https://en.wikipedia.org/wiki/A/B_testing | en | [] |
A/B testing (also known as bucket testing, split-run testing or split testing) is a user-experience research method. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can also be extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is employed to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and to determine which of the variants is more effective.
Multivariate testing or multinomial testing is similar to A/B testing but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations—commonplace with survey data, offline data, and other, more complex phenomena.
Definition
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared. A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows.
The following example illustrates an A/B test with a single variable:
A company has a customer database of 2,000 people and launches an email campaign with a discount code in order to generate sales through its website. The company creates two versions of the email with different calls to action (the part of the copy that encourages customers to act—in the case of a sales campaign, make a purchase) and identifying promotional codes.
To 1,000 people, the company sends an email with the call to action stating "Offer ends this Saturday! Use code A1",
To the remaining 1,000 people, it sends an email with the call to action stating "Offer ends soon! Use code B1".
All other elements of the emails' copy and layout are identical.
The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using the code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using the code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first call to action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine whether the differences in response rates between A1 and B1 were statistically significant (highly likely that the differences are real and repeatable, and not simply the result of random chance).
In the previous example, the purpose of the test is to determine the more effective strategy to encourage customers to make a purchase. If, however, the aim of the test had been to determine which email would generate the higher clickthrough rate (the percentage of people who actually click the link after receiving the email), the results might have been different.
For example, even though more of the customers receiving the code B1 accessed the website, because the call to action did not state the end date of the promotion, many recipients may feel no urgency to make an immediate purchase. Consequently, if the purpose of the test had been simply to determine which email would bring more traffic to the website, the email containing code B1 might well have been more successful. An A/B test should have a defined, measurable outcome, such as sales converted, clickthrough rate or registration rate.
Common test statistics
Two-sample hypothesis tests are appropriate for comparing the two samples, one for each of the two variants in the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions when less is assumed. Welch's t-test assumes the least and is therefore the most commonly used two-sample hypothesis test when the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used.
Fisher's exact test can be employed to compare two binomial distributions, such as a click-through rate.
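As an illustration of these test choices, the following sketch (not part of the original article) applies Fisher's exact test and a pooled two-proportion z-test to the email example above (50 of 1,000 conversions for code A1 versus 30 of 1,000 for code B1); the variable names and the use of SciPy are my own assumptions.

```python
# Comparing the two email variants from the example above:
# 50 of 1,000 recipients converted with code A1, 30 of 1,000 with code B1.
from scipy import stats

conv_a, n_a = 50, 1000   # variant A1: conversions, sample size
conv_b, n_b = 30, 1000   # variant B1: conversions, sample size

# Fisher's exact test on the 2x2 table of converted vs. not converted.
table = [[conv_a, n_a - conv_a], [conv_b, n_b - conv_b]]
odds_ratio, p_fisher = stats.fisher_exact(table)

# Two-proportion z-test using the pooled proportion (normal approximation).
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
z = (conv_a / n_a - conv_b / n_b) / se
p_z = 2 * stats.norm.sf(abs(z))

print(f"Fisher exact p = {p_fisher:.4f}, two-proportion z = {z:.2f}, p = {p_z:.4f}")
```

Both p-values fall below 0.05 for these counts, consistent with the conclusion that the difference between the two calls to action is unlikely to be due to chance alone.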
Segmentation and targeting
A/B tests most commonly apply the same variant (e.g., user interface element) with equal probability to all users. However, in some circumstances, responses to variants may be heterogeneous. While a variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base.
For instance, in the above example, the breakdown of the response rates by gender could have been:

| Gender | Variant A | Variant B |
|---|---|---|
| Men | 10/500 (2%) | 25/500 (5%) |
| Women | 40/500 (8%) | 5/500 (1%) |
| Total | 50/1000 (5%) | 30/1000 (3%) |
In this case, while variant A attracted a higher response rate overall, variant B actually elicited a higher response rate with men.
As a result, the company might select a segmented strategy as a result of the A/B test, sending variant B to men and variant A to women in the future. In this example, a segmented strategy would yield a 30% increase in expected response rates from
$\textstyle 5\%={\frac {40+10}{500+500}}$ to $\textstyle 6.5\%={\frac {40+25}{500+500}}$.
If segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. The test should contain a representative sample of men vs. women and assign men and women randomly to each “variant” (variant A vs. variant B). Failure to do so could lead to experiment bias and inaccurate conclusions.
This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attribute—for example, customers' age and gender—to identify more nuanced patterns that may exist in the test results.
Tradeoffs
Positives
The results of A/B tests are simple to interpret and create a clear picture of real user preferences, as they directly test one option over another. A/B tests can also provide answers to highly specific design questions. One example of this is Google's A/B testing with hyperlink colors. In order to optimize revenue, Google tested dozens of hyperlink hues to determine which colors attract the most clicks.
Negatives
A/B tests are sensitive to variance; they require a large sample size in order to reduce standard error and produce a statistically significant result. In applications in which active users are abundant, such as with popular online social-media platforms, obtaining a large sample size is trivial. In other cases, large sample sizes are obtained by increasing the experiment enrollment period. However, using a technique coined by Microsoft as Controlled Experiment Using Pre-Experiment Data (CUPED), variance from before the experiment start can be taken into account so that fewer samples are required to produce a statistically significant result.
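The following is a hedged sketch of the CUPED idea described above, using a synthetic pre-experiment covariate; the specific data, the regression-slope form of theta, and the NumPy usage are my own assumptions rather than Microsoft's exact implementation.

```python
# CUPED adjustment sketch: use a pre-experiment covariate X (e.g., each user's metric
# before the experiment) to reduce the variance of the in-experiment metric Y.
# theta is the regression slope of Y on X.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(10, 2, n)                  # pre-experiment metric per user (synthetic)
y = 0.8 * x + rng.normal(0, 1, n) + 0.1   # in-experiment metric, correlated with x

theta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
y_cuped = y - theta * (x - x.mean())      # adjusted metric keeps the same mean as y

print(f"variance before: {y.var():.3f}, after CUPED: {y_cuped.var():.3f}")
```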
Because of its nature as an experiment, running an A/B test introduces the risk of wasted time and resources if the test produces unwanted or unhelpful results.
In December 2018, representatives with experience in large-scale A/B testing from 13 organizations (Airbnb, Amazon, Booking.com, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber and Stanford University) summarized the top challenges in a paper. The challenges were grouped into four areas: analysis, engineering and culture, deviations from traditional A/B tests and data quality.
History
It is difficult to definitively establish when A/B testing was first used. The first randomized double-blind trial to assess the effectiveness of a homeopathic drug occurred in 1835. Experimentation with advertising campaigns, which has been compared to modern A/B testing, began in the early 20th century. The advertising pioneer Claude Hopkins used promotional coupons to test the effectiveness of his campaigns. However, this process, which Hopkins described in his 1923 book Scientific Advertising, did not incorporate concepts such as statistical significance and the null hypothesis, which are used in statistical hypothesis testing. Modern statistical methods for assessing the significance of sample data were developed separately in the same period. This work was conducted in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test.
With the growth of the internet, new ways to sample populations have become available. Google engineers ran their first A/B test in 2000 to determine the optimum number of results to display in its search-engine results. The first test was unsuccessful because of glitches that resulted from slow loading times. Later A/B testing research was more advanced, but the foundation and underlying principles generally remain the same, and in 2011, Google ran more than 7,000 different A/B tests.
In 2012, a Microsoft employee working on the search engine Bing created an experiment to test different methods of displaying advertising headlines. Within hours, the alternative format produced a revenue increase of 12% with no impact on user-experience metrics. Today, major software companies such as Microsoft and Google each conduct over 10,000 A/B tests annually.
A/B testing has been claimed by some to be a change in philosophy and business-strategy in certain niches, although the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.
Many companies now use the "designed experiment" approach to making marketing decisions, with the expectation that relevant sample results can improve positive conversion results. It is an increasingly common practice as the tools and expertise grow in this area.
Applications
Online social media
A/B tests have been used by large social-media sites such as LinkedIn, Facebook and Instagram to understand user engagement and satisfaction with online features, such as a new feature or product. A/B tests have also been used to conduct complex experiments on subjects such as network effects when users are offline, how online services affect user actions and how users influence one another.
E-commerce
On an e-commerce website, the purchase funnel is typically a helpful candidate for A/B testing, as even marginal decreases in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements such as copy text, layouts, images and colors. In these tests, users only see one of two versions, as the goal is to discover which of the two versions is preferable.
Product pricing
A/B testing can be used to determine the right price for a product, which is one of the most difficult challenges faced when a new product or service is launched. A/B testing (especially valid for digital goods) is an effective mechanism to identify the price point that maximizes the total revenue.
Political A/B testing
A/B tests have also been used by political campaigns. In 2007, Barack Obama's presidential campaign used A/B testing to garner online attraction and understand what voters wanted to see from Obama. For example, Obama's team tested four distinct buttons on their website that led users to register for newsletters. Additionally, the team used six different accompanying images to attract users.
HTTP routing and API feature testing
A/B testing is commonly employed when deploying a newer version of an API. For real-time user experience testing, an HTTP layer 7 reverse proxy is configured in such a way that n% of the HTTP traffic is routed to the newer version of the backend instance, while the remaining (100-n)% of HTTP traffic hits the (stable) older version of the backend HTTP application service. This is usually done to limit the exposure of customers to a newer backend instance such that, if there is a bug with the newer version, only n% of the total user agents or clients are affected while others are routed to a stable backend, which is a common ingress control mechanism.
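As a rough illustration of the routing scheme described above (and not a configuration for any specific proxy), the sketch below hashes a client identifier so that roughly n% of clients are consistently sent to the newer backend; the backend URLs and the rollout percentage are hypothetical.

```python
# Weighted request routing sketch: a stable hash of the client id keeps each client
# on the same backend across requests, with ROLLOUT_PERCENT of clients on the new version.
import hashlib

NEW_BACKEND = "https://api-v2.internal"     # hypothetical upstream for the new version
STABLE_BACKEND = "https://api-v1.internal"  # hypothetical upstream for the stable version
ROLLOUT_PERCENT = 5                         # n% of traffic goes to the new version

def pick_backend(client_id: str) -> str:
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
    return NEW_BACKEND if bucket < ROLLOUT_PERCENT else STABLE_BACKEND

print(pick_backend("user-123"), pick_backend("user-456"))
```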
See also
Adaptive control
Between-group design experiment
Choice modelling
Multi-armed bandit
Multivariate testing
Randomized controlled trial
Scientific control
Stochastic dominance
Test statistic
Two-proportion Z-test
References
| wiki::en::Sequential analysis | wiki | Sequential analysis | https://en.wikipedia.org/wiki/Sequential_analysis | en | [] |
In statistics, sequential analysis or sequential hypothesis testing is statistical analysis where the sample size is not fixed in advance. Instead data is evaluated as it is collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. Thus a conclusion may sometimes be reached at a much earlier stage than would be possible with more classical hypothesis testing or estimation, at consequently lower financial and/or human cost.
History
The method of sequential analysis is first attributed to Abraham Wald with Jacob Wolfowitz, W. Allen Wallis, and Milton Friedman while at Columbia University's Statistical Research Group as a tool for more efficient industrial quality control during World War II. Its value to the war effort was immediately recognised, and led to its receiving a "restricted" classification. At the same time, George Barnard led a group working on optimal stopping in Great Britain. Another early contribution to the method was made by K.J. Arrow with D. Blackwell and M.A. Girshick.
A similar approach was independently developed from first principles at about the same time by Alan Turing, as part of the Banburismus technique used at Bletchley Park, to test hypotheses about whether different messages coded by German Enigma machines should be connected and analysed together. This work remained secret until the early 1980s.
Peter Armitage introduced the use of sequential analysis in medical research, especially in the area of clinical trials. Sequential methods became increasingly popular in medicine following Stuart Pocock's work that provided clear recommendations on how to control Type 1 error rates in sequential designs.
Alpha spending functions
When researchers repeatedly analyze data as more observations are added, the probability of a Type 1 error increases. Therefore, it is important to adjust the alpha level at each interim analysis, such that the overall Type 1 error rate remains at the desired level. This is conceptually similar to using the Bonferroni correction, but because the repeated looks at the data are dependent, more efficient corrections for the alpha level can be used. Among the earliest proposals is the Pocock boundary. Alternative ways to control the Type 1 error rate exist, such as the Haybittle–Peto bounds, and additional work on determining the boundaries for interim analyses has been done by O'Brien & Fleming and Wang & Tsiatis.
A limitation of corrections such as the Pocock boundary is that the number of looks at the data must be determined before the data is collected, and that the looks at the data should be equally spaced (e.g., after 50, 100, 150, and 200 patients). The alpha spending function approach developed by Demets & Lan does not have these restrictions, and depending on the parameters chosen for the spending function, can be very similar to Pocock boundaries or the corrections proposed by O'Brien and Fleming. Another approach that has no such restrictions at all is based on e-values and e-processes.
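As a concrete illustration (my own, not from the article), the sketch below evaluates two widely used Lan-DeMets alpha spending functions, a Pocock-type and an O'Brien-Fleming-type, at four equally spaced information fractions; the chosen alpha and look times are arbitrary.

```python
# Cumulative alpha spent at each interim look under two standard spending functions:
# Pocock-type:           alpha * ln(1 + (e - 1) * t)
# O'Brien-Fleming-type:  2 * (1 - Phi(z_{alpha/2} / sqrt(t)))
import numpy as np
from scipy.stats import norm

alpha = 0.05
t = np.array([0.25, 0.50, 0.75, 1.00])   # information fractions at each planned look

pocock_type = alpha * np.log(1 + (np.e - 1) * t)
obf_type = 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

for frac, p_spend, o_spend in zip(t, pocock_type, obf_type):
    print(f"t={frac:.2f}  alpha spent: Pocock-type {p_spend:.4f}, O'Brien-Fleming-type {o_spend:.4f}")
```

Both functions spend the full alpha of 0.05 by the final look, but the O'Brien-Fleming-type function spends very little at early looks, making early stopping harder.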
Applications of sequential analysis
Clinical trials
In a randomized trial with two treatment groups, group sequential testing may for example be conducted in the following manner: After n subjects in each group are available an interim analysis is conducted. A statistical test is performed to compare the two groups and if the null hypothesis is rejected the trial is terminated; otherwise, the trial continues, another n subjects per group are recruited, and the statistical test is performed again, including all subjects. If the null is rejected, the trial is terminated, and otherwise it continues with periodic evaluations until a maximum number of interim analyses have been performed, at which point the last statistical test is conducted and the trial is discontinued.
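The simulation sketch below (an illustration I added, not part of the article) follows the scheme just described but deliberately omits any alpha adjustment, which makes the resulting inflation of the Type 1 error rate across repeated looks visible; the sample sizes, number of looks, and number of simulated trials are arbitrary choices.

```python
# Group sequential testing under a true null with *uncorrected* interim tests:
# recruit n subjects per group per stage, test at each look, stop on rejection.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_stage, max_looks, trials = 50, 5, 2000
rejections = 0
for _ in range(trials):
    a, b = np.empty(0), np.empty(0)
    for _ in range(max_looks):
        a = np.concatenate([a, rng.normal(0, 1, n_per_stage)])  # group A, null: same mean
        b = np.concatenate([b, rng.normal(0, 1, n_per_stage)])  # group B
        if stats.ttest_ind(a, b).pvalue < 0.05:                 # interim test, no correction
            rejections += 1
            break
print("empirical type 1 error with 5 uncorrected looks:", rejections / trials)
```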
Other applications
Sequential analysis also has a connection to the problem of gambler's ruin that has been studied by, among others, Huygens in 1657.
Step detection is the process of finding abrupt changes in the mean level of a time series or signal. It is usually considered as a special kind of statistical method known as change point detection. Often, the step is small and the time series is corrupted by some kind of noise, and this makes the problem challenging because the step may be hidden by the noise. Therefore, statistical and/or signal processing algorithms are often required. When the algorithms are run online as the data is coming in, especially with the aim of producing an alert, this is an application of sequential analysis.
Bias
Trials that are terminated early because they reject the null hypothesis typically overestimate the true effect size. This is because in small samples, only large effect size estimates will lead to a significant effect, and the subsequent termination of a trial. Methods to correct effect size estimates in single trials have been proposed. Note that this bias is mainly problematic when interpreting single studies. In meta-analyses, overestimated effect sizes due to early stopping are balanced by underestimation in trials that stop late, leading Schou & Marschner to conclude that "early stopping of clinical trials is not a substantive source of bias in meta-analyses".
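A small simulation (mine, not from the article) can make this overestimation concrete: among trials that stop early because an interim test is significant, the average estimated effect exceeds the true effect used to generate the data. All parameter values below are arbitrary.

```python
# Effect-size overestimation in early-stopped trials: only interim looks that happen to
# show a large difference cross the significance threshold, so the surviving estimates
# are biased upward relative to the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, n_interim, reps = 0.2, 50, 5000
early_effects = []
for _ in range(reps):
    a = rng.normal(true_effect, 1, n_interim)   # treatment group at the interim look
    b = rng.normal(0.0, 1, n_interim)           # control group at the interim look
    if stats.ttest_ind(a, b).pvalue < 0.05:     # trial stops early on this result
        early_effects.append(a.mean() - b.mean())
print("true effect:", true_effect,
      "mean estimated effect in early-stopped trials:", round(float(np.mean(early_effects)), 3))
```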
The meaning of p-values in sequential analyses also changes, because when using sequential analyses, more than one analysis is performed, and the typical definition of a p-value, as the probability of observing data at least as extreme as the data actually observed, needs to be redefined. One solution is to order the p-values of a series of sequential tests based on the time of stopping and how high the test statistic was at a given look, which is known as stagewise ordering, first proposed by Armitage.
See also
Optimal stopping
Sequential estimation
Sequential probability ratio test
CUSUM
Notes
References
Wald, Abraham (1947). Sequential Analysis. New York: John Wiley and Sons.
Bartroff, J., Lai, T.L., and Shih, M.-C. (2013). Sequential Experimentation in Clinical Trials: Design and Analysis. Springer.
Ghosh, Bhaskar Kumar (1970). Sequential Tests of Statistical Hypotheses. Reading: Addison-Wesley.
Chernoff, Herman (1972). Sequential Analysis and Optimal Design. SIAM.
Siegmund, David (1985). Sequential Analysis. Springer Series in Statistics. New York: Springer-Verlag. ISBN 978-0-387-96134-7.
Bakeman, R., and Gottman, J.M. (1997). Observing Interaction: An Introduction to Sequential Analysis. Cambridge: Cambridge University Press.
Jennison, C., and Turnbull, B.W. (2000). Group Sequential Methods With Applications to Clinical Trials. Chapman & Hall/CRC.
Whitehead, J. (1997). The Design and Analysis of Sequential Clinical Trials, 2nd Edition. John Wiley & Sons.
External links
R Package: Wald's Sequential Probability Ratio Test by OnlineMarketr.com
Software for conducting sequential analysis and applications of sequential analysis in the study of group interaction in computer-mediated communication by Dr. Allan Jeong at Florida State University
SAMBO Optimization – a Python framework for sequential, model-based optimization.
Commercial
PASS Sample Size Software includes features for the setup of group sequential designs.
| wiki::en::False discovery rate | wiki | False discovery rate | https://en.wikipedia.org/wiki/False_discovery_rate | en | [] |
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null). Equivalently, the FDR is the expected ratio of the number of false positive classifications (false discoveries) to the total number of positive classifications (rejections of the null). The total number of rejections of the null includes both the number of false positives (FP) and true positives (TP). Simply put, FDR = FP / (FP + TP). FDR-controlling procedures provide less stringent control of Type I errors compared to family-wise error rate (FWER) controlling procedures (such as the Bonferroni correction), which control the probability of at least one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased numbers of Type I errors.
History
Technological motivations
The modern widespread use of the FDR is believed to stem from, and be motivated by, the development in technologies that allowed the collection and analysis of a large number of distinct variables in several individuals (e.g., the expression level of each of 10,000 different genes in 100 different persons). By the late 1980s and 1990s, the development of "high-throughput" sciences, such as genomics, allowed for rapid data acquisition. This, coupled with the growth in computing power, made it possible to seamlessly perform a very high number of statistical tests on a given data set. The technology of microarrays was a prototypical example, as it enabled thousands of genes to be tested simultaneously for differential expression between two biological conditions.
As high-throughput technologies became common, technological and/or financial constraints led researchers to collect datasets with relatively small sample sizes (e.g. few individuals being tested) and large numbers of variables being measured per sample (e.g. thousands of gene expression levels). In these datasets, too few of the measured variables showed statistical significance after classic correction for multiple tests with standard multiple comparison procedures. This created a need within many scientific communities to abandon FWER and unadjusted multiple hypothesis testing for other ways to highlight and rank in publications those variables showing marked effects across individuals or treatments that would otherwise be dismissed as non-significant after standard correction for multiple tests. In response to this, a variety of error rates have been proposed—and become commonly used in publications—that are less conservative than FWER in flagging possibly noteworthy observations. The FDR is useful when researchers are looking for "discoveries" that will give them followup work (E.g.: detecting promising genes for followup studies), and are interested in controlling the proportion of "false leads" they are willing to accept.
Literature
The FDR concept was formally described by Yoav Benjamini and Yosef Hochberg in 1995 (BH procedure) as a less conservative and arguably more appropriate approach for identifying the important few from the trivial many effects tested. The FDR has been particularly influential, as it was the first alternative to the FWER to gain broad acceptance in many scientific fields (especially in the life sciences, from genetics to biochemistry, oncology and plant sciences). In 2005, the Benjamini and Hochberg paper from 1995 was identified as one of the 25 most-cited statistical papers.
Prior to the 1995 introduction of the FDR concept, various precursor ideas had been considered in the statistics literature. In 1979, Holm proposed the Holm procedure, a stepwise algorithm for controlling the FWER that is at least as powerful as the well-known Bonferroni adjustment. This stepwise algorithm sorts the p-values and sequentially rejects the hypotheses starting from the smallest p-values.
Benjamini (2010) said that the false discovery rate, and the paper Benjamini and Hochberg (1995), had its origins in two papers concerned with multiple testing:
The first paper is by Schweder and Spjotvoll (1982) who suggested plotting the ranked p-values and assessing the number of true null hypotheses ($m_{0}$) via an eye-fitted line starting from the largest p-values. The p-values that deviate from this straight line then should correspond to the false null hypotheses. This idea was later developed into an algorithm and incorporated the estimation of $m_{0}$ into procedures such as Bonferroni, Holm or Hochberg. This idea is closely related to the graphical interpretation of the BH procedure.
The second paper is by Branko Soric (1989) which introduced the terminology of "discovery" in the multiple hypothesis testing context. Soric used the expected number of false discoveries divided by the number of discoveries $\left(E[V]/R\right)$ as a warning that "a large part of statistical discoveries may be wrong". This led Benjamini and Hochberg to the idea that a similar error rate, rather than being merely a warning, can serve as a worthy goal to control.
The BH procedure was proven to control the FDR for independent tests in 1995 by Benjamini and Hochberg. In 1986, R. J. Simes offered the same procedure as the "Simes procedure", in order to control the FWER in the weak sense (under the intersection null hypothesis) when the statistics are independent.
Definitions
Based on definitions below we can define Q as the proportion of false discoveries among the discoveries (rejections of the null hypothesis):
$$Q={\frac {V}{R}}={\frac {V}{V+S}},$$
where $V$ is the number of false discoveries and $S$ is the number of true discoveries.
The false discovery rate (FDR) is then simply the following:
$$\mathrm {FDR} =Q_{e}=\mathrm {E} \left[Q\right],$$
where $\mathrm {E} \left[Q\right]$ is the expected value of $Q$. The goal is to keep FDR below a given threshold q. To avoid division by zero, $Q$ is defined to be 0 when $R=0$. Formally,
$$\mathrm {FDR} =\mathrm {E} \left[V/R\mid R>0\right]\cdot \mathrm {P} \left(R>0\right).$$
Classification of multiple hypothesis tests
The following table defines the possible outcomes when testing multiple null hypotheses.

| | Null hypothesis is true | Alternative hypothesis is true | Total |
|---|---|---|---|
| Test is declared significant | V | S | R |
| Test is declared non-significant | U | T | m - R |
| Total | m0 | m - m0 | m |
Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm.
Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant.
Summing each type of outcome over all Hi yields the following random variables:
m is the total number of hypotheses tested
$m_{0}$ is the number of true null hypotheses, an unknown parameter
$m-m_{0}$ is the number of true alternative hypotheses
V is the number of false positives (Type I error) (also called "false discoveries")
S is the number of true positives (also called "true discoveries")
T is the number of false negatives (Type II error)
U is the number of true negatives
$R=V+S$ is the number of rejected null hypotheses (also called "discoveries", either true or false)
In m hypothesis tests of which $m_{0}$ are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables.
Controlling procedures
The setting for many procedures is such that we have $H_{1}\ldots H_{m}$ null hypotheses tested and $P_{1}\ldots P_{m}$ their corresponding p-values. We list these p-values in ascending order and denote them by $P_{(1)}\ldots P_{(m)}$. A procedure that goes from a small test-statistic to a large one will be called a step-up procedure. In a similar way, in a "step-down" procedure we move from a large corresponding test statistic to a smaller one.
Benjamini–Hochberg procedure
The Benjamini–Hochberg procedure (BH step-up procedure) controls the FDR at level $\alpha$. It works as follows:
For a given $\alpha$, find the largest k such that $P_{(k)}\leq {\frac {k}{m}}\alpha$.
Reject the null hypothesis (i.e., declare discoveries) for all $H_{(i)}$ for $i=1,\ldots ,k$.
Geometrically, this corresponds to plotting $P_{(k)}$ vs. k (on the y and x axes respectively), drawing the line through the origin with slope ${\frac {\alpha }{m}}$, and declaring discoveries for all points on the left, up to, and including the last point that is not above the line.
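A minimal sketch of the BH step-up rule as stated above, written in NumPy; the example p-values are arbitrary and the function name is my own.

```python
# Benjamini-Hochberg step-up procedure: find the largest k with P_(k) <= (k/m) * alpha
# and reject the hypotheses corresponding to the k smallest p-values.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices of p-values in ascending order
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest rank (0-based) satisfying the bound
        rejected[order[: k + 1]] = True        # reject the k+1 smallest p-values
    return rejected

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(p_vals, alpha=0.05))
```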
The BH procedure is valid when the m tests are independent, and also in various scenarios of dependence, but is not universally valid. It also satisfies the inequality:
$$E(Q)\leq {\frac {m_{0}}{m}}\alpha \leq \alpha $$
If an estimator of $m_{0}$ is inserted into the BH procedure, it is no longer guaranteed to achieve FDR control at the desired level. Adjustments may be needed in the estimator and several modifications have been proposed.
Note that the mean $\alpha$ for these m tests is ${\frac {\alpha (m+1)}{2m}}$, the Mean(FDR $\alpha$) or MFDR, $\alpha$ adjusted for m independent or positively correlated tests (see AFDR below). The MFDR expression here is for a single recomputed value of $\alpha$ and is not part of the Benjamini and Hochberg method.
Benjamini–Yekutieli procedure
The Benjamini–Yekutieli procedure controls the false discovery rate under arbitrary dependence assumptions. This refinement modifies the threshold and finds the largest k such that:
$$P_{(k)}\leq {\frac {k}{m\cdot c(m)}}\alpha $$
If the tests are independent or positively correlated (as in Benjamini–Hochberg procedure):
$c(m)=1$
Under arbitrary dependence (including the case of negative correlation), c(m) is the harmonic number:
$$c(m)=\sum _{i=1}^{m}{\frac {1}{i}}.$$
Note that $c(m)$ can be approximated by using the Taylor series expansion and the Euler–Mascheroni constant ($\gamma = 0.57721...$):
$$\sum _{i=1}^{m}{\frac {1}{i}}\approx \ln(m)+\gamma +{\frac {1}{2m}}.$$
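For illustration (not from the article), the following computes the exact harmonic correction factor c(m), its approximation via ln(m) + gamma + 1/(2m), and the resulting Benjamini–Yekutieli thresholds; the values of m and alpha are arbitrary.

```python
# Benjamini-Yekutieli correction factor and per-rank thresholds (k / (m * c(m))) * alpha.
import numpy as np

m, alpha = 20, 0.05
c_exact = np.sum(1.0 / np.arange(1, m + 1))              # harmonic number c(m)
c_approx = np.log(m) + 0.57721566 + 1.0 / (2 * m)        # ln(m) + gamma + 1/(2m)
by_thresholds = alpha * np.arange(1, m + 1) / (m * c_exact)

print(f"c(m) exact = {c_exact:.4f}, approximation = {c_approx:.4f}")
print("first three BY thresholds:", np.round(by_thresholds[:3], 5))
```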
Using MFDR and formulas above, an adjusted MFDR (or AFDR) is the minimum of the mean $\alpha$ for m dependent tests, i.e.,
$${\frac {\mathrm {MFDR} }{c(m)}}={\frac {\alpha (m+1)}{2m[\ln(m)+\gamma ]+1}}.$$
Another way to address dependence is by bootstrapping and rerandomization.
Storey-Tibshirani procedure
In the Storey-Tibshirani procedure, q-values are used for controlling the FDR.
Properties
Adaptive and scalable
Using a multiplicity procedure that controls the FDR criterion is adaptive and scalable, meaning that controlling the FDR can be very permissive (if the data justify it) or conservative (acting close to control of the FWER for sparse problems), all depending on the number of hypotheses tested and the level of significance.
The FDR criterion adapts so that the same number of false discoveries (V) will have different implications, depending on the total number of discoveries (R). This contrasts with the family-wise error rate criterion. For example, if inspecting 100 hypotheses (say, 100 genetic mutations or SNPs for association with some phenotype in some population):
If we make 4 discoveries (R), having 2 of them be false discoveries (V) is often very costly. Whereas,
If we make 50 discoveries (R), having 2 of them be false discoveries (V) is often not very costly.
The FDR criterion is scalable in that the same proportion of false discoveries out of the total number of discoveries (Q) remains sensible for different numbers of total discoveries (R). For example:
If we make 100 discoveries (R), having 5 of them be false discoveries ($q=5\%$) may not be very costly.
Similarly, if we make 1000 discoveries (R), having 50 of them be false discoveries (as before, $q=5\%$) may still not be very costly.
Dependency among the test statistics
Controlling the FDR using the linear step-up BH procedure, at level q, has several properties related to the dependency structure between the test statistics of the m null hypotheses that are being corrected for. If the test statistics are:
Independent: $\mathrm {FDR} \leq {\frac {m_{0}}{m}}q$
Independent and continuous: $\mathrm {FDR} ={\frac {m_{0}}{m}}q$
Positive dependent: $\mathrm {FDR} \leq {\frac {m_{0}}{m}}q$
In the general case: $\mathrm {FDR} \leq {\frac {m_{0}}{m}}{\frac {q}{1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots +{\frac {1}{m}}}}\approx {\frac {m_{0}}{m}}{\frac {q}{\ln(m)+\gamma +{\frac {1}{2m}}}}$, where $\gamma$ is the Euler–Mascheroni constant.
Proportion of true hypotheses
If all of the null hypotheses are true ($m_{0}=m$), then controlling the FDR at level q guarantees control over the FWER (this is also called "weak control of the FWER"):
$$\mathrm {FWER} =P\left(V\geq 1\right)=E\left({\frac {V}{R}}\right)=\mathrm {FDR} \leq q,$$
simply because the event of rejecting at least one true null hypothesis $\{V\geq 1\}$ is exactly the event $\{V/R=1\}$, and the event $\{V=0\}$ is exactly the event $\{V/R=0\}$ (when $V=R=0$, $V/R=0$ by definition). But if there are some true discoveries to be made ($m_{0}<m$) then FWER ≥ FDR. In that case there will be room for improving detection power. It also means that any procedure that controls the FWER will also control the FDR.
Average power
The average power of the Benjamini-Hochberg procedure can be computed analytically.
Related concepts
The discovery of the FDR was preceded and followed by many other types of error rates. These include:
PCER (per-comparison error rate) is defined as: $\mathrm {PCER} =E\left[{\frac {V}{m}}\right]$. Testing individually each hypothesis at level $\alpha$ guarantees that $\mathrm {PCER} \leq \alpha$ (this is testing without any correction for multiplicity).
FWER (the family-wise error rate) is defined as: $\mathrm {FWER} =P(V\geq 1)$. There are numerous procedures that control the FWER.
$k{\text{-FWER}}$ (the tail probability of the False Discovery Proportion), suggested by Lehmann and Romano, van der Laan et al., is defined as: $k{\text{-FWER}}=P(V\geq k)\leq q$.
$k{\text{-FDR}}$ (also called the generalized FDR by Sarkar in 2007) is defined as: $k{\text{-FDR}}=E\left({\frac {V}{R}}I_{(V>k)}\right)\leq q$.
$Q'$ is the proportion of false discoveries among the discoveries, suggested by Soric in 1989, and is defined as: $Q'={\frac {E[V]}{R}}$. This is a mixture of expectations and realizations, and has the problem of control for $m_{0}=m$.
$\mathrm {FDR} _{-1}$ (or Fdr) was used by Benjamini and Hochberg, and later called "Fdr" by Efron (2008) and earlier. It is defined as: $\mathrm {FDR} _{-1}=\mathrm {Fdr} ={\frac {E[V]}{E[R]}}$. This error rate cannot be strictly controlled because it is 1 when $m=m_{0}$.
$\mathrm {FDR} _{+1}$ was used by Benjamini and Hochberg, and later called "pFDR" by Storey (2002). It is defined as: $\mathrm {FDR} _{+1}=\mathrm {pFDR} =E\left[\left.{\frac {V}{R}}\right|R>0\right]$. This error rate cannot be strictly controlled because it is 1 when $m=m_{0}$. JD Storey promoted the use of the pFDR (a close relative of the FDR), and the q-value, which can be viewed as the proportion of false discoveries that we expect in an ordered table of results, up to the current line. Storey also promoted the idea (also mentioned by BH) that the actual number of null hypotheses, $m_{0}$, can be estimated from the shape of the probability distribution curve. For example, in a set of data where all null hypotheses are true, 50% of results will yield probabilities between 0.5 and 1.0 (and the other 50% will yield probabilities between 0.0 and 0.5). We can therefore estimate $m_{0}$ by finding the number of results with $P>0.5$ and doubling it, and this permits refinement of our calculation of the pFDR at any particular cut-off in the data-set.
False exceedance rate (the tail probability of FDP), defined as: $\mathrm {P} \left({\frac {V}{R}}>q\right)$.
$W{\text{-FDR}}$ (weighted FDR): associated with each hypothesis i is a weight $w_{i}\geq 0$; the weights capture importance/price. The W-FDR is defined as: $W{\text{-FDR}}=E\left({\frac {\sum w_{i}V_{i}}{\sum w_{i}R_{i}}}\right)$.
FDCR (False Discovery Cost Rate). Stemming from statistical process control: associated with each hypothesis i is a cost $c_{i}$ and with the intersection hypothesis $H_{00}$ a cost $c_{0}$. The motivation is that stopping a production process may incur a fixed cost. It is defined as: $\mathrm {FDCR} =E\left({\frac {c_{0}V_{0}+\sum c_{i}V_{i}}{c_{0}R_{0}+\sum c_{i}R_{i}}}\right)$.
PFER (per-family error rate) is defined as: $\mathrm {PFER} =E(V)$.
FNR (False non-discovery rates) by Sarkar; Genovese and Wasserman is defined as: $\mathrm {FNR} =E\left({\frac {T}{m-R}}\right)=E\left({\frac {m-m_{0}-(R-V)}{m-R}}\right)$.
$\mathrm {FDR} (z)$ is defined as: $\mathrm {FDR} (z)={\frac {p_{0}F_{0}(z)}{F(z)}}$.
$\mathrm {fdr}$, the local fdr, is defined as: $\mathrm {fdr} ={\frac {p_{0}f_{0}(z)}{f(z)}}$ in a local interval of $z$.
False coverage rate
The false coverage rate (FCR) is, in a sense, the FDR analog to the confidence interval. FCR indicates the average rate of false coverage, namely, not covering the true parameters, among the selected intervals. The FCR gives a simultaneous coverage at a $1-\alpha$ level for all of the parameters considered in the problem. Intervals with simultaneous coverage probability 1−q can control the FCR to be bounded by q. There are many FCR procedures such as: Bonferroni-Selected–Bonferroni-Adjusted, Adjusted BH-Selected CIs (Benjamini and Yekutieli (2005)), Bayes FCR (Zhao and Hwang (2012)), and other Bayes methods.
Bayesian approaches
Connections have been made between the FDR and Bayesian approaches (including empirical Bayes methods), thresholding wavelets coefficients and model selection, and generalizing the confidence interval into the false coverage statement rate (FCR).
Structural False Discovery Rate (sFDR)
The Structural False Discovery Rate (sFDR) is a generalization of the classical False Discovery Rate (FDR) introduced by D. Meskaldji and collaborators in 2018.
The sFDR extends the FDR by replacing the linear denominator R in the expected ratio E[V/R] with a non-decreasing concave function s(R), yielding the criterion E[V/s(R)]. This approach allows the control of false discoveries to adapt to the scale of testing, so that prudence increases faster than linearly as the number of rejections grows.
When s(R)=R, the classical FDR is recovered, while specific choices of s(R) can interpolate between FDR control and family-wise error control (k-FWER). The sFDR provides a structural connection between classical, local, and generalized false discovery concepts, and has been extended to online and adaptive settings.
Software implementations
False Discovery Rate Analysis in R – Lists links with popular R packages
False Discovery Rate Analysis in Python – Python implementations of false discovery rate procedures
See also
Positive predictive value
References
External links
The False Discovery Rate - Yoav Benjamini, Ruth Heller & Daniel Yekutieli - Rousseeuw Prize for Statistics ceremony lecture from 2024.
False Discovery Rate: Corrected & Adjusted P-values - MATLAB/GNU Octave implementation and discussion on the difference between corrected and adjusted FDR p-values.
Understanding False Discovery Rate - blog post
StatQuest: FDR and the Benjamini-Hochberg Method clearly explained on YouTube
Understanding False Discovery Rate - Includes Excel VBA code to implement it, and an example in cell line development
| wiki::en::Sample size determination | wiki | Sample size determination | https://en.wikipedia.org/wiki/Sample_size_determination | en | [] |
Sample size determination or estimation is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power. In complex studies, different sample sizes may be allocated, such as in stratified surveys or experimental designs with multiple treatment groups. In a census, data is sought for an entire population, hence the intended sample size is equal to the population. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.
Sample sizes may be chosen in several ways:
using experience – small samples, though sometimes unavoidable, can result in wide confidence intervals and risk of errors in statistical hypothesis testing.
using a target variance for an estimate to be derived from the sample eventually obtained, i.e., if a high precision is required (narrow confidence interval) this translates to a low target variance of the estimator.
the use of a power target, i.e. the power of statistical test to be applied once the sample is collected.
using a confidence level, i.e. the larger the required confidence level, the larger the sample size (given a constant precision requirement).
Introduction
Sample size determination is a crucial aspect of research methodology that plays a significant role in ensuring the reliability and validity of study findings. Because it influences the accuracy of estimates, the power of statistical tests, and the general robustness of the research findings, it entails carefully choosing the number of participants or data points to be included in a study.
Consider the case where we are conducting a survey to determine the average satisfaction level of customers regarding a new product. To determine an appropriate sample size, we need to consider factors such as the desired level of confidence, margin of error, and variability in the responses. We might decide that we want a 95% confidence level, meaning we are 95% confident that the true average satisfaction level falls within the calculated range. We also decide on a margin of error of ±3%, which indicates the acceptable range of difference between our sample estimate and the true population parameter. Additionally, we may have some idea of the expected variability in satisfaction levels based on previous data or assumptions.
Importance
Larger sample sizes generally lead to increased precision when estimating unknown parameters. For instance, to accurately determine the prevalence of pathogen infection in a specific species of fish, it is preferable to examine a sample of 200 fish rather than 100 fish. Several fundamental facts of mathematical statistics describe this phenomenon, including the law of large numbers and the central limit theorem.
In some situations, the increase in precision for larger sample sizes is minimal, or even non-existent. This can result from the presence of systematic errors or strong dependence or bias in the data, or if the data follow a heavy-tailed distribution.
Sample sizes may be evaluated by the quality of the resulting estimates, as follows. It is usually determined on the basis of the cost, time or convenience of data collection and the need for sufficient statistical power. For example, if a proportion is being estimated, one may wish to have the 95% confidence interval be less than 0.06 units wide. Alternatively, sample size may be assessed based on the power of a hypothesis test. For example, if we are comparing the support for a certain political candidate among women with the support for that candidate among men, we may wish to have 80% power to detect a difference in the support levels of 0.04 units.
Estimation
Estimation of a proportion
A relatively simple situation is estimation of a proportion. It is a fundamental aspect of statistical analysis, particularly when gauging the prevalence of a specific characteristic within a population. For example, we may wish to estimate the proportion of residents in a community who are at least 65 years old.
The estimator of a proportion is ${\hat {p}}=X/n$, where X is the number of 'positive' instances (e.g., the number of people out of the n sampled people who are at least 65 years old). When the observations are independent, this estimator has a (scaled) binomial distribution (and is also the sample mean of data from a Bernoulli distribution). The maximum variance of this distribution is 0.25, which occurs when the true parameter is p = 0.5. In practical applications, where the true parameter p is unknown, the maximum variance is often employed for sample size assessments. If a reasonable estimate for p is known the quantity $p(1-p)$ may be used in place of 0.25.
As the sample size n grows sufficiently large, the distribution of ${\hat {p}}$ will be closely approximated by a normal distribution. Using this and the Wald method for the binomial distribution yields a confidence interval, with Z representing the standard Z-score for the desired confidence level (e.g., 1.96 for a 95% confidence interval), in the form:
$$\left({\widehat {p}}-Z{\sqrt {\frac {0.25}{n}}},\quad {\widehat {p}}+Z{\sqrt {\frac {0.25}{n}}}\right)$$
To determine an appropriate sample size n for estimating proportions, the equation below can be solved, where W represents the desired width of the confidence interval. The resulting sample size formula is often applied with a conservative estimate of p (e.g., 0.5):
$$Z{\sqrt {\frac {0.25}{n}}}=W/2$$
for n, yielding the sample size $n={\frac {Z^{2}}{W^{2}}}$, in the case of using 0.5 as the most conservative estimate of the proportion. (Note: W/2 = margin of error.)
Otherwise, the formula would be $Z{\sqrt {\frac {p(1-p)}{n}}}=W/2$, which yields $n={\frac {4Z^{2}p(1-p)}{W^{2}}}$.
For example, in estimating the proportion of the U.S. population supporting a presidential candidate with a 95% confidence interval width of 2 percentage points (0.02), a sample size of $(1.96)^{2}/(0.02)^{2}=9604$ is required. It is reasonable to use the 0.5 estimate for p in this case because the presidential races are often close to 50/50, and it is also prudent to use a conservative estimate. The margin of error in this case is 1 percentage point (half of 0.02).
In practice, the formula
$$\left({\widehat {p}}-1.96{\sqrt {\frac {0.25}{n}}},\quad {\widehat {p}}+1.96{\sqrt {\frac {0.25}{n}}}\right)$$
is commonly used to form a 95% confidence interval for the true proportion. The equation
$$2{\sqrt {\frac {0.25}{n}}}=W/2$$
can be solved for n, providing a minimum sample size needed to meet the desired margin of error W. The foregoing is commonly simplified: $n=4/W^{2}=1/B^{2}$, where B is the error bound on the estimate, i.e., the estimate is usually given as within ± B. For B = 10% one requires n = 100, for B = 5% one needs n = 400, for B = 3% the requirement approximates to n = 1000, while for B = 1% a sample size of n = 10000 is required. These numbers are quoted often in news reports of opinion polls and other sample surveys. However, the results reported may not be the exact value as numbers are preferably rounded up. Knowing that the value of n is the minimum number of sample points needed to acquire the desired result, the number of respondents then must lie on or above the minimum.
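A short sketch (mine, not from the article) of the two sample size formulas above, the conservative n = Z^2 / W^2 with p = 0.5 and the general n = 4 Z^2 p(1-p) / W^2, reproducing the 9,604 polling figure; SciPy is used only to obtain the Z-score.

```python
# Sample size for estimating a proportion with confidence interval width W.
import math
from scipy.stats import norm

def n_for_proportion(confidence, width, p=0.5):
    z = norm.ppf(1 - (1 - confidence) / 2)            # e.g., about 1.96 for 95% confidence
    return math.ceil(4 * z**2 * p * (1 - p) / width**2)

# Polling example above: 95% confidence, interval width 0.02 (margin of error 1 point).
print(n_for_proportion(0.95, 0.02))        # about 9604
print(n_for_proportion(0.95, 0.06, 0.3))   # general formula with an assumed p of 0.3, width 0.06
```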
Estimation of a mean
Simply speaking, suppose we are trying to estimate the average time it takes for people to commute to work in a city. Instead of surveying the entire population, we can take a random sample of 100 individuals, record their commute times, and then calculate the mean (average) commute time for that sample. For example, person 1 takes 25 minutes, person 2 takes 30 minutes, ..., person 100 takes 20 minutes. Adding up all the commute times and dividing by the number of people in the sample (100 in this case) gives an estimate of the mean commute time for the entire population. This method is practical when it is not feasible to measure everyone in the population, and it provides a reasonable approximation based on a representative sample.
More precisely, when estimating the population mean using an independent and identically distributed (iid) sample of size n, where each data value has variance $\sigma^{2}$, the standard error of the sample mean is:
$${\frac {\sigma }{\sqrt {n}}}.$$
This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields a confidence interval of the form
$$\left({\bar {x}}-{\frac {Z\sigma }{\sqrt {n}}},\quad {\bar {x}}+{\frac {Z\sigma }{\sqrt {n}}}\right),$$
where Z is a standard Z-score for the desired level of confidence (1.96 for a 95% confidence interval).
To determine the sample size n required for a confidence interval of width W, with W/2 as the margin of error on each side of the sample mean, the equation
$${\frac {Z\sigma }{\sqrt {n}}}=W/2$$
can be solved. This yields the sample size formula for n:
$$n={\frac {4Z^{2}\sigma ^{2}}{W^{2}}}.$$
For instance, if estimating the effect of a drug on blood pressure with a 95% confidence interval that is six units wide, and the known standard deviation of blood pressure in the population is 15, the required sample size would be
${\frac {4\times 1.96^{2}\times 15^{2}}{6^{2}}}=96.04$, which would be rounded up to 97, since sample sizes must be integers and must meet or exceed the calculated minimum value. Understanding these calculations is essential for researchers designing studies to accurately estimate population means within a desired level of confidence.
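The blood pressure example can be checked with a few lines (my own sketch, not from the article); SciPy supplies the Z-score and the result is rounded up to the next integer.

```python
# Sample size for estimating a mean: n = 4 Z^2 sigma^2 / W^2, with sigma = 15,
# 95% confidence, and confidence interval width W = 6.
import math
from scipy.stats import norm

def n_for_mean(confidence, width, sigma):
    z = norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil(4 * z**2 * sigma**2 / width**2)

print(n_for_mean(0.95, 6, 15))   # 97, matching the rounded-up value above
```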
Required sample sizes for hypothesis tests
One of the prevalent challenges faced by statisticians revolves around the task of calculating the sample size needed to attain a specified statistical power for a test, all while maintaining a pre-determined Type I error rate α, which signifies the level of significance in hypothesis testing. The required sample size can be estimated from pre-determined tables for certain values, by formulas, by simulation, by Mead's resource equation, or by the cumulative distribution function:
Tables
The table shown on the right can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group that are of equal size, that is, the total number of individuals in the trial is twice that of the number given, and the desired significance level is 0.05. The parameters used are:
The desired statistical power of the trial, shown in column to the left.
Cohen's d (= effect size), which is the expected difference between the means of the target values between the experimental group and the control group, divided by the expected standard deviation.
Formulas
Calculating a required sample size is often not easy since the distribution of the test statistic under the alternative hypothesis of interest is usually hard to work with. Approximate sample size formulas for specific problems are available in general references.
A computational approach (QuickSize)
The QuickSize algorithm is a very general approach that is simple to use yet versatile enough to give an exact solution for a broad range of problems. It uses simulation together with a search algorithm.
Mead's resource equation
Mead's resource equation is often used for estimating sample sizes of laboratory animals, as well as in many other laboratory experiments. It may not be as accurate as using other methods in estimating sample size, but gives a hint of what is the appropriate sample size where parameters such as expected standard deviations or expected differences in values between groups are unknown or very hard to estimate.
All the parameters in the equation are in fact degrees of freedom of the corresponding counts, and hence each count has 1 subtracted from it before insertion into the equation.
The equation is:
$$E=N-B-T,$$
where:
N is the total number of individuals or units in the study (minus 1)
B is the blocking component, representing environmental effects allowed for in the design (minus 1)
T is the treatment component, corresponding to the number of treatment groups (including control group) being used, or the number of questions being asked (minus 1)
E is the degrees of freedom of the error component and should be somewhere between 10 and 20.
For example, if a study using laboratory animals is planned with four treatment groups (T=3), with eight animals per group, making 32 animals total (N=31), without any further stratification (B=0), then E would equal 28, which is above the cutoff of 20, indicating that sample size may be a bit too large, and six animals per group might be more appropriate.
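A tiny sketch (mine, not from the article) of Mead's resource equation, reproducing the laboratory animal example above; the function name is my own.

```python
# Mead's resource equation E = N - B - T, where each term is a count minus 1
# (degrees of freedom). Example: 4 treatment groups of 8 animals, no blocking.
def mead_error_df(total_units, blocks, treatments):
    N = total_units - 1
    B = blocks - 1
    T = treatments - 1
    return N - B - T

E = mead_error_df(total_units=32, blocks=1, treatments=4)
print(E, "within the recommended 10-20 range?", 10 <= E <= 20)   # 28 -> above 20
```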
Cumulative distribution function
Let $X_{i}$, i = 1, 2, ..., n be independent observations taken from a normal distribution with unknown mean $\mu$ and known variance $\sigma^{2}$. Consider two hypotheses, a null hypothesis:
$$H_{0}:\mu =0$$
and an alternative hypothesis:
$$H_{a}:\mu =\mu ^{*}$$
for some 'smallest significant difference' μ* > 0. This is the smallest value for which we care about observing a difference. Now, in order to (1) reject H0 with a probability of at least 1 − β when Ha is true (i.e. a power of 1 − β), and (2) reject H0 with probability α when H0 is true, the following is necessary:
If zα is the upper α percentage point of the standard normal distribution, then
\Pr(\bar{x} > z_\alpha \sigma/\sqrt{n} \mid H_0) = \alpha
and so
'Reject H0 if our sample average x̄ is more than zα σ/√n'
is a decision rule which satisfies (2). (This is a one-tailed test.) To also satisfy (1), this rule must reject H0 with a probability of at least 1 − β when Ha is true. In that case the sample average originates from a Normal distribution with a mean of μ*, so the requirement is expressed as:
\Pr(\bar{x} > z_\alpha \sigma/\sqrt{n} \mid H_a) \geq 1 - \beta
Through careful manipulation, this can be shown (see Statistical power Example) to happen when
n \geq \left(\frac{z_\alpha + \Phi^{-1}(1-\beta)}{\mu^*/\sigma}\right)^2
where Φ is the normal cumulative distribution function.
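This bound can be evaluated directly. The following minimal sketch assumes Python with SciPy, a one-sided test and known σ; the function name and example values are illustrative.

```python
# Sketch: evaluate n >= ((z_alpha + Phi^{-1}(1 - beta)) / (mu_star / sigma))^2
# for a one-sided test of H0: mu = 0 against Ha: mu = mu_star with known sigma.
import math
from scipy.stats import norm

def required_n(mu_star, sigma, alpha=0.05, beta=0.20):
    z_alpha = norm.ppf(1 - alpha)   # upper alpha point of the standard normal
    z_beta = norm.ppf(1 - beta)     # Phi^{-1}(1 - beta)
    return math.ceil(((z_alpha + z_beta) / (mu_star / sigma)) ** 2)

print(required_n(mu_star=1.0, sigma=2.0))  # about 25 observations for 80% power
```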
Stratified sample size
With more complicated sampling techniques, such as stratified sampling, the sample can often be split up into sub-samples. Typically, if there are H such sub-samples (from H different strata) then each of them will have a sample size nh, h = 1, 2, ..., H. These nh must conform to the rule that n1 + n2 + ... + nH = n (i.e., that the total sample size is given by the sum of the sub-sample sizes). Selecting these nh optimally can be done in various ways, using (for example) Neyman's optimal allocation.
There are many reasons to use stratified sampling: to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually. A useful, partly non-random method would be to sample individuals where easily accessible, but, where not, sample clusters to save travel costs.
In general, for H strata, a weighted sample mean is
\bar{x}_w = \sum_{h=1}^{H} W_h \bar{x}_h,
with
\operatorname{Var}(\bar{x}_w) = \sum_{h=1}^{H} W_h^2 \operatorname{Var}(\bar{x}_h).
The weights, W_h, frequently, but not always, represent the proportions of the population elements in the strata, and W_h = N_h/N. For a fixed sample size, that is n = Σ n_h,
\operatorname{Var}(\bar{x}_w) = \sum_{h=1}^{H} W_h^2 \operatorname{Var}(\bar{x}_h)\left(\frac{1}{n_h} - \frac{1}{N_h}\right),
which can be made a minimum if the sampling rate within each stratum is made proportional to the standard deviation within each stratum: n_h/N_h = k S_h, where S_h = √Var(x̄_h) and k is a constant such that Σ n_h = n.
An "optimum allocation" is reached when the sampling rates within the strata are made directly proportional to the standard deviations within the strata and inversely proportional to the square root of the sampling cost per element within the strata, C_h:
\frac{n_h}{N_h} = \frac{K S_h}{\sqrt{C_h}},
where K is a constant such that Σ n_h = n, or, more generally, when
n_h = \frac{K' W_h S_h}{\sqrt{C_h}}.
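A minimal sketch of this allocation rule, assuming Python; the function name and the example strata are illustrative, and costs default to being equal, in which case the rule reduces to Neyman allocation.

```python
# Sketch: split a fixed total sample size n over H strata.
# Optimum allocation takes n_h proportional to W_h * S_h / sqrt(C_h);
# with equal costs this reduces to Neyman allocation (n_h proportional to W_h * S_h).
def optimum_allocation(n, W, S, C=None):
    if C is None:
        C = [1.0] * len(W)                       # equal sampling costs by default
    raw = [w * s / c ** 0.5 for w, s, c in zip(W, S, C)]
    total = sum(raw)
    return [n * r / total for r in raw]          # round to integers as appropriate

# Three strata: population shares W_h and within-stratum standard deviations S_h.
print(optimum_allocation(n=100, W=[0.5, 0.3, 0.2], S=[2.0, 5.0, 10.0]))
```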
Qualitative research
Qualitative research approaches sample size determination with a methodology that diverges from quantitative methods. Rather than relying on predetermined formulas or statistical calculations, sample size is generally a subjective, iterative judgment made as the research proceeds. One common approach is to continue including additional participants or materials until "saturation" is reached: the point at which new participants or data cease to provide fresh insights, indicating that the study has adequately captured the diversity of perspectives or experiences within the chosen sample. The number needed to reach saturation has been investigated empirically.
Unlike quantitative research, qualitative studies face a scarcity of reliable guidance on estimating sample size before starting the research, and a range of suggestions has been given.
For example, when conducting in-depth interviews with cancer survivors, researchers may use data saturation to determine the appropriate sample size: if, over a number of interviews, no fresh themes or insights show up, saturation has been reached and further interviews are unlikely to add much to our knowledge of the survivors' experience. Thus, rather than following a preset statistical formula, the concept of attaining saturation serves as a dynamic guide for determining sample size in qualitative research. In an effort to introduce some structure into this process, a tool analogous to quantitative power calculations has been proposed; this tool, based on the negative binomial distribution, is particularly tailored for thematic analysis.
See also
Design of experiments
Engineering response surface example under Stepwise regression
Cohen's h
Receiver operating characteristic
References
General references
Bartlett, J. E. II; Kotrlik, J. W.; Higgins, C. (2001). "Organizational research: Determining appropriate sample size for survey research" (PDF). Information Technology, Learning, and Performance Journal. 19 (1): 43–50. Archived from the original (PDF) on 2009-03-06. Retrieved 2009-09-07.
Kish, L. (1965). Survey Sampling. Wiley. ISBN 978-0-471-48900-9.
Smith, Scott (8 April 2013). "Determining Sample Size: How to Ensure You Get the Correct Sample Size". Qualtrics. Retrieved 19 September 2018.
Israel, Glenn D. (1992). "Determining Sample Size". University of Florida, PEOD-6. Retrieved 29 June 2019.
Rens van de Schoot, Milica Miočević (eds.). 2020. Small Sample Size Solutions (Open Access): A Guide for Applied Researchers and Practitioners. Routledge.
Further reading
NIST: Selecting Sample Sizes
ASTM E122-07: Standard Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process
External links
A MATLAB script implementing Cochran's sample size formula
Sample Size Calculator for various statistical tests
Statulator for various statistical tests
Power (statistics)
https://en.wikipedia.org/wiki/Power_(statistics)
In frequentist statistics, power is the probability of detecting an effect (i.e. rejecting the null hypothesis) given that some prespecified effect actually exists using a given test in a given context. In typical use, it is a function of the specific test that is used (including the choice of test statistic and significance level), the sample size (more data tends to provide more power), and the effect size (effects or correlations that are large relative to the variability of the data tend to provide more power).
More formally, in the case of a simple hypothesis test with two hypotheses, the power of the test is the probability that the test correctly rejects the null hypothesis (H_0) when the alternative hypothesis (H_1) is true. It is commonly denoted by 1 − β, where β is the probability of making a type II error (a false negative) conditional on there being a true effect or association.
Background
Statistical testing uses data from samples to assess, or make inferences about, a statistical population. For example, we may measure the yields of samples of two varieties of a crop, and use a two-sample test to assess whether the mean value of this yield differs between varieties.
Under a frequentist hypothesis testing framework, this is done by calculating a test statistic (such as a t-statistic) for the dataset, which has a known theoretical probability distribution if there is no difference (the so called null hypothesis). If the actual value calculated on the sample is sufficiently unlikely to arise under the null hypothesis, we say we identified a statistically significant effect.
The threshold for significance can be set small to ensure there is little chance of falsely detecting a non-existent effect. However, failing to identify a significant effect does not imply there was none. If we insist on being careful to avoid false positives, we may create false negatives instead. It may simply be too much to expect that we will be able to find satisfactorily strong evidence of a very subtle difference even if it exists. Statistical power is an attempt to quantify this issue.
In the case of the comparison of the two crop varieties, it enables us to answer questions like:
Is there a big danger of two very different varieties producing samples that just happen to look indistinguishable by pure chance?
How much effort do we need to put into this comparison to avoid that danger?
How different do these varieties need to be before we can expect to notice a difference?
Description
Suppose we are conducting a hypothesis test. We define two hypotheses: H_0, the null hypothesis, and H_1, the alternative hypothesis. If we design the test such that α is the significance level (α being the probability of rejecting H_0 when H_0 is in fact true), then the power of the test is 1 − β, where β is the probability of failing to reject H_0 when the alternative H_1 is true.
To make this more concrete, a typical statistical test would be based on a test statistic t calculated from the sampled data, which has a particular probability distribution under H_0. A desired significance level α would then define a corresponding "rejection region" (bounded by certain "critical values"), a set of values t is unlikely to take if H_0 was correct. If we reject H_0 in favor of H_1 only when the sample t takes those values, we would be able to keep the probability of falsely rejecting H_0 within our desired significance level. At the same time, if H_1 defines its own probability distribution for t (the difference between the two distributions being a function of the effect size), the power of the test would be the probability, under H_1, that the sample t falls into our defined rejection region and causes H_0 to be correctly rejected.
Statistical power is one minus the type II error probability and is also the sensitivity of the hypothesis testing procedure to detect a true effect. There is usually a trade-off between demanding more stringent tests (and so, smaller rejection regions) and trying to have a high probability of rejecting the null under the alternative hypothesis. Statistical power may also be extended to the case where multiple hypotheses are being tested based on an experiment or survey. It is thus also common to refer to the power of a study, evaluating a scientific project in terms of its ability to answer the research questions it seeks to answer.
Applications
The main application of statistical power is "power analysis", a calculation of power usually done before an experiment is conducted using data from pilot studies or a literature review. Power analyses can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size (in other words, producing an acceptable level of power). For example: "How many times do I need to toss a coin to conclude it is rigged by a certain amount?" If resources and thus sample sizes are fixed, power analyses can also be used to calculate the minimum effect size that is likely to be detected.
Funding agencies, ethics boards and research review panels frequently request that a researcher perform a power analysis. An underpowered study is likely to be inconclusive, failing to allow one to choose between hypotheses at the desired significance level, while an overpowered study will spend great expense on being able to report significant effects even if they are tiny and so practically meaningless. If a large number of underpowered studies are done and statistically significant results published, published findings are more likely false positives than true results, contributing to a replication crisis. However, excessive demands for power could be connected to wasted resources and ethical problems, for example the use of a large number of animal test subjects when a smaller number would have been sufficient. It could also induce researchers seeking funding to overstate their expected effect sizes, or avoid looking for more subtle interaction effects that cannot be easily detected.
Power analysis is primarily a frequentist statistics tool. In Bayesian statistics, hypothesis testing of the type used in classical power analysis is not done. In the Bayesian framework, one updates his or her prior beliefs using the data obtained in a given study. In principle, a study that would be deemed underpowered from the perspective of hypothesis testing could still be used in such an updating process. However, power remains a useful measure of how much a given experiment size can be expected to refine one's beliefs. A study with low power is unlikely to lead to a large change in beliefs.
In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis. Tests may have the same size, and hence the same false positive rates, but different ability to detect true effects. Consideration of their theoretical power properties is a key reason for the common use of likelihood ratio tests.
Rule of thumb for t-test
Lehr's (rough) rule of thumb says that the sample size n (for each group), for the common case of a two-sided two-sample t-test with power 80% (β = 0.2) and significance level α = 0.05, should be:
n \approx 16\,\frac{s^2}{d^2},
where s² is an estimate of the population variance and d = μ1 − μ2 is the to-be-detected difference in the mean values of both samples. This expression can be rearranged, implying for example that 80% power is obtained when looking for a difference in means that exceeds about 4 times the group-wise standard error of the mean.
For a one sample t-test 16 is to be replaced with 8. Other values provide an appropriate approximation when the desired power or significance level are different.
However, a full power analysis should always be performed to confirm and refine this estimate.
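As a rough illustration (not a substitute for the full power analysis just mentioned), Lehr's rule can be coded directly; the function name and example values are illustrative.

```python
# Sketch: Lehr's rule of thumb n ≈ 16 * s^2 / d^2 for a two-sided two-sample t-test
# with 80% power and alpha = 0.05; the factor 16 becomes 8 for a one-sample t-test.
def lehr_n(s, d, one_sample=False):
    factor = 8 if one_sample else 16
    return factor * s ** 2 / d ** 2

print(lehr_n(s=2.0, d=1.0))                   # 64 subjects per group
print(lehr_n(s=2.0, d=1.0, one_sample=True))  # 32 subjects in total
```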
Factors influencing power
Statistical power may depend on a number of factors. Some factors may be particular to a specific testing situation, but in normal use, power depends on the following three aspects that can be potentially controlled by the practitioner:
the test itself and the statistical significance criterion used
the magnitude of the effect of interest
the size and variability of the sample used to detect the effect
For a given test, the significance criterion determines the desired degree of rigor, specifying how unlikely it is for the null hypothesis of no effect to be rejected if it is in fact true. The most commonly used threshold is a probability of rejection of 0.05, though smaller values like 0.01 or 0.001 are sometimes used. This threshold then implies that the observation must be at least that unlikely (perhaps by suggesting a sufficiently large estimate of difference) to be considered strong enough evidence against the null. Picking a smaller value to tighten the threshold, so as to reduce the chance of a false positive, would also reduce power (and so increase the chance of a false negative). Some statistical tests will inherently produce better power, albeit often at the cost of requiring stronger assumptions.
The magnitude of the effect of interest defines what is being looked for by the test. It can be the expected effect size if it exists, as a scientific hypothesis that the researcher has arrived at and wishes to test. Alternatively, in a more practical context it could be determined by the size the effect must be to be useful, for example that which is required to be clinically significant. An effect size can be a direct value of the quantity of interest (for example, a difference in mean of a particular size), or it can be a standardized measure that also accounts for the variability in the population (such as a difference in means expressed as a multiple of the standard deviation). If the researcher is looking for a larger effect, then it should be easier to find with a given experimental or analytic setup, and so power is higher.
The nature of the sample underlies the information being used in the test. This will usually involve the sample size, and the sample variability, if that is not implicit in the definition of the effect size. More broadly, the precision with which the data are measured can also be an important factor (such as the statistical reliability), as well as the design of an experiment or observational study. Ultimately, these factors lead to an expected amount of sampling error. A smaller sampling error could be obtained by larger sample sizes from a less variable population, from more accurate measurements, or from more efficient experimental designs (for example, with the appropriate use of blocking), and such smaller errors would lead to improved power, albeit usually at a cost in resources. How increased sample size translates to higher power is a measure of the efficiency of the test, for example the sample size required for a given power.
Discussion
The statistical power of a hypothesis test has an impact on the interpretation of its results. Not finding a result with a more powerful study is stronger evidence against the effect existing than the same finding with a less powerful study. However, this is not completely conclusive. The effect may exist, but be smaller than what was looked for, meaning the study is in fact underpowered and the sample is thus unable to distinguish it from random chance. Many clinical trials, for instance, have low statistical power to detect differences in adverse effects of treatments, since such effects may only affect a few patients, even if this difference can be important. Conclusions about the probability of actual presence of an effect also should consider more things than a single test, especially as real world power is rarely close to 1.
Indeed, although there are no formal standards for power, many researchers and funding bodies assess power using 0.80 (or 80%) as a standard for adequacy. This convention implies a four-to-one trade off between β-risk and α-risk, as the probability of a type II error β is set as 1 - 0.8 = 0.2, while α, the probability of a type I error, is commonly set at 0.05. Some applications require much higher levels of power. Medical tests may be designed to minimise the number of false negatives (type II errors) produced by loosening the threshold of significance, raising the risk of obtaining a false positive (a type I error). The rationale is that it is better to tell a healthy patient "we may have found something—let's test further," than to tell a diseased patient "all is well."
Power analysis focuses on the correct rejection of a null hypothesis. Alternative concerns may however motivate an experiment, and so lead to different needs for sample size. In many contexts, the issue is less about deciding between hypotheses but rather with getting an estimate of the population effect size of sufficient accuracy. For example, a careful power analysis can tell you that 55 pairs of normally distributed samples with a correlation of 0.5 will be sufficient to grant 80% power in rejecting a null that the correlation is no more than 0.2 (using a one-sided test, α = 0.05). But the typical 95% confidence interval with this sample would be around [0.27, 0.67]. An alternative, albeit related analysis would be required if we wish to be able to measure correlation to an accuracy of +/- 0.1, implying a different (in this case, larger) sample size. Alternatively, multiple under-powered studies can still be useful, if appropriately combined through a meta-analysis.
Many statistical analyses involve the estimation of several unknown quantities. In simple cases, all but one of these quantities are nuisance parameters. In this setting, the only relevant power pertains to the single quantity that will undergo formal statistical inference. In some settings, particularly if the goals are more "exploratory", there may be a number of quantities of interest in the analysis. For example, in a multiple regression analysis we may include several covariates of potential interest. In situations such as this where several hypotheses are under consideration, it is common that the powers associated with the different hypotheses differ. For instance, in multiple regression analysis, the power for detecting an effect of a given size is related to the variance of the covariate. Since different covariates will have different variances, their powers will differ as well.
Additional complications arise when we consider these multiple hypotheses together. For example, if we consider a false positive to be making an erroneous null rejection on any one of these hypotheses, our likelihood of this "family-wise error" will be inflated if appropriate measures are not taken. Such measures typically involve applying a higher threshold of stringency to reject a hypothesis (such as with the Bonferroni method), and so would reduce power. Alternatively, there may be different notions of power connected with how the different hypotheses are considered. "Complete power" demands that all true effects are detected across all of the hypotheses, which is a much stronger requirement than the "minimal power" of being able to find at least one true effect, a type of power that might increase with an increasing number of hypotheses.
A priori vs. post hoc analysis
Power analysis can either be done before (a priori or prospective power analysis) or after (post hoc or retrospective power analysis) data are collected. A priori power analysis is conducted prior to the research study, and is typically used in estimating sufficient sample sizes to achieve adequate power. Post-hoc analysis of "observed power" is conducted after a study has been completed, and uses the obtained sample size and effect size to determine what the power was in the study, assuming the effect size in the sample is equal to the effect size in the population. Whereas the utility of prospective power analysis in experimental design is universally accepted, post hoc power analysis is controversial. Many statisticians have argued that post-hoc power calculations are misleading and essentially meaningless.
Example
The following is an example that shows how to compute power for a randomized experiment: Suppose the goal of an experiment is to study the effect of a treatment on some quantity, and so we shall compare research subjects by measuring the quantity before and after the treatment, analyzing the data using a one-sided paired t-test, with a significance level threshold of 0.05. We are interested in being able to detect a positive change of size θ > 0.
We first set up the problem according to our test. Let A_i and B_i denote the pre-treatment and post-treatment measures on subject i, respectively. The possible effect of the treatment should be visible in the differences D_i = B_i − A_i, which are assumed to be independent and identically Normal in distribution, with unknown mean value μ_D and variance σ_D².
Here, it is natural to choose our null hypothesis to be that the expected mean difference is zero, i.e.
H_0 : \mu_D = \mu_0 = 0.
For our one-sided test, the alternative hypothesis would be that there is a positive effect, corresponding to
H_1 : \mu_D = \theta > 0.
The test statistic in this case is defined as:
T_n = \frac{\bar{D}_n - \mu_0}{\hat{\sigma}_D/\sqrt{n}} = \frac{\bar{D}_n - 0}{\hat{\sigma}_D/\sqrt{n}},
where μ_0 is the mean under the null (so we substitute in 0), n is the sample size (number of subjects), D̄_n is the sample mean of the differences,
\bar{D}_n = \frac{1}{n}\sum_{i=1}^{n} D_i,
and σ̂_D is the sample standard deviation of the differences.
Analytic solution
We can proceed according to our knowledge of statistical theory, though in practice for a standard case like this software will exist to compute more accurate answers.
Thanks to t-test theory, we know this test statistic under the null hypothesis follows a Student t-distribution with n − 1 degrees of freedom. If we wish to reject the null at significance level α = 0.05, we must find the critical value t_α such that the probability of T_n > t_α under the null is equal to α. If n is large, the t-distribution converges to the standard normal distribution (thus no longer involving n) and so, through use of the corresponding quantile function Φ⁻¹, we obtain that the null should be rejected if
T_n > t_\alpha \approx \Phi^{-1}(0.95) \approx 1.64.
Now suppose that the alternative hypothesis H_1 is true, so μ_D = θ. Then, writing the power as a function of the effect size, B(θ), we find the probability of T_n being above t_α under H_1:
\begin{aligned}
B(\theta) &\approx \Pr(T_n > 1.64 \mid \mu_D = \theta)\\
&= \Pr\left(\frac{\bar{D}_n - 0}{\hat{\sigma}_D/\sqrt{n}} > 1.64 \,\middle|\, \mu_D = \theta\right)\\
&= 1 - \Pr\left(\frac{\bar{D}_n - 0}{\hat{\sigma}_D/\sqrt{n}} < 1.64 \,\middle|\, \mu_D = \theta\right)\\
&= 1 - \Pr\left(\frac{\bar{D}_n - \theta}{\hat{\sigma}_D/\sqrt{n}} < 1.64 - \frac{\theta}{\hat{\sigma}_D/\sqrt{n}} \,\middle|\, \mu_D = \theta\right)
\end{aligned}
The quantity (D̄_n − θ)/(σ̂_D/√n) again follows a Student t-distribution under H_1, converging to a standard normal distribution for large n. The estimated σ̂_D will also converge to its population value σ_D. Thus power can be approximated as
B(\theta) \approx 1 - \Phi\left(1.64 - \frac{\theta}{\sigma_D/\sqrt{n}}\right).
According to this formula, the power increases with the values of the effect size θ and the sample size n, and reduces with increasing variability σ_D. In the trivial case of zero effect size, power is at a minimum (infimum) and equal to the significance level of the test α, in this example 0.05. For finite sample sizes and non-zero variability, it is the case here, as is typical, that power cannot be made equal to 1 except in the trivial case where α = 1, so the null is always rejected.
We can invert B to obtain required sample sizes:
\sqrt{n} > \frac{\sigma_D}{\theta}\left(1.64 - \Phi^{-1}(1 - B(\theta))\right).
Suppose θ = 1 and we believe σ_D is around 2, say; then, for a power of B(θ) = 0.8, we require a sample size
n > 4\left(1.64 - \Phi^{-1}(1 - 0.8)\right)^2 \approx 4\left(1.64 + 0.84\right)^2 \approx 24.6.
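The approximation above can be evaluated numerically. The following is a minimal sketch assuming Python with SciPy; the function names are illustrative, and it reproduces the figures in this example up to rounding.

```python
# Sketch: power B(theta) ≈ 1 - Phi(1.64 - theta / (sigma_D / sqrt(n))) and the
# implied sample size, using the normal approximation to the t-distribution.
import math
from scipy.stats import norm

def approx_power(theta, sigma_D, n, alpha=0.05):
    t_alpha = norm.ppf(1 - alpha)                  # about 1.64 for alpha = 0.05
    return 1 - norm.cdf(t_alpha - theta / (sigma_D / math.sqrt(n)))

def approx_n(theta, sigma_D, power=0.80, alpha=0.05):
    t_alpha = norm.ppf(1 - alpha)
    root_n = (sigma_D / theta) * (t_alpha - norm.ppf(1 - power))
    return root_n ** 2

print(approx_n(theta=1.0, sigma_D=2.0))            # about 24.7 (24.6 with rounded constants)
print(approx_power(theta=1.0, sigma_D=2.0, n=25))  # about 0.80
```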
Simulation solution
Alternatively we can use a Monte Carlo simulation method that works more generally. Once again, we return to the assumption of the distribution of D_n and the definition of T_n. Suppose we have fixed values of the sample size, variability and effect size, and wish to compute power. We can adopt this process:
1. Generate a large number of sets of D_n according to the null hypothesis, N(0, σ_D).
2. Compute the resulting test statistic T_n for each set.
3. Compute the (1 − α)th quantile of the simulated T_n and use that as an estimate of t_α.
4. Now generate a large number of sets of D_n according to the alternative hypothesis, N(θ, σ_D), and compute the corresponding test statistics again.
5. Look at the proportion of these simulated alternative T_n that are above the t_α calculated in step 3 and so are rejected. This is the power.
This can be done with a variety of software packages. Using this methodology with the values before, setting the sample size to 25 leads to an estimated power of around 0.78. The small discrepancy with the previous section is due mainly to inaccuracies with the normal approximation.
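A minimal sketch of this recipe, assuming Python with NumPy and normally distributed differences as above; the names, random seed and number of replications are illustrative.

```python
# Sketch of the simulation recipe above (steps 1-5) for the paired-difference example.
import numpy as np

rng = np.random.default_rng(0)

def t_stat(samples):
    # T_n = sample mean / (sample sd / sqrt(n)), computed row by row
    n = samples.shape[1]
    return samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

def simulated_power(theta, sigma_D, n, alpha=0.05, reps=100_000):
    t_null = t_stat(rng.normal(0.0, sigma_D, size=(reps, n)))    # steps 1-2
    t_crit = np.quantile(t_null, 1 - alpha)                      # step 3
    t_alt = t_stat(rng.normal(theta, sigma_D, size=(reps, n)))   # step 4
    return np.mean(t_alt > t_crit)                               # step 5

print(simulated_power(theta=1.0, sigma_D=2.0, n=25))  # roughly 0.78
```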
Power in different disciplines
Several studies have attempted to estimate typical levels of statistical power across different academic fields. One common approach uses meta-analyses to assess whether individual studies have sufficient power to detect the average effect size estimated from the meta-analysis itself. This method essentially asks: how likely is each study to detect the consensus effect found in the broader literature? These assessments consistently find low levels of statistical power across many disciplines. For example, using this method median power is 18% in economics, 10% in political science, 36% in psychology, and 15% in ecology and evolutionary biology.
Extension
Bayesian power
In the frequentist setting, parameters are assumed to have a specific value which is unlikely to be true. This issue can be addressed by assuming the parameter has a distribution. The resulting power is sometimes referred to as Bayesian power which is commonly used in clinical trial design.
Predictive probability of success
Both frequentist power and Bayesian power use statistical significance as the success criterion. However, statistical significance is often not enough to define success. To address this issue, the power concept can be extended to the concept of predictive probability of success (PPOS). The success criterion for PPOS is not restricted to statistical significance and is commonly used in clinical trial designs.
Software for power and sample size calculations
Numerous free and/or open source programs are available for performing power and sample size calculations. These include
G*Power (https://www.gpower.hhu.de/)
WebPower Free online statistical power analysis (https://webpower.psychstat.org)
Free and open source online calculators (https://powerandsamplesize.com)
PowerUp! provides Excel-based functions to determine minimum detectable effect size and minimum required sample size for various experimental and quasi-experimental designs.
PowerUpR is R package version of PowerUp! and additionally includes functions to determine sample size for various multilevel randomized experiments with or without budgetary constraints.
R package pwr (https://cran.r-project.org/web/packages/pwr/)
R package WebPower (https://cran.r-project.org/web/packages/WebPower/index.html)
R package Spower (https://cran.r-project.org/web/packages/Spower/index.html) for general-purpose power analyses using simulation experiments
Python package statsmodels (https://www.statsmodels.org/)
See also
Positive and negative predictive values – Statistical measures of whether a finding is likely to be true
Effect size – Statistical measure of the magnitude of a phenomenon
Efficiency – Quality measure of a statistical method
Neyman–Pearson lemma – Theorem about the power of the likelihood ratio test
Sample size – Statistical considerations on how many observations to make
Uniformly most powerful test – Theoretically optimal hypothesis test
References
Sources
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Lawrence Erlbaum Associates. ISBN 0-8058-0283-5.
Aberson, C.L. (2010). Applied Power Analysis for the Behavioral Science. Routledge. ISBN 978-1-84872-835-6.
External links
StatQuest: P-value pitfalls and power calculations on YouTube
Equivalence test
https://en.wikipedia.org/wiki/Equivalence_test
Equivalence tests are a variety of hypothesis tests used to draw statistical inferences from observed data. In these tests, the null hypothesis is defined as an effect large enough to be deemed interesting, specified by an equivalence bound. The alternative hypothesis is any effect that is less extreme than said equivalence bound. The observed data are statistically compared against the equivalence bounds. If the statistical test indicates the observed data is surprising, assuming that true effects are at least as extreme as the equivalence bounds, a Neyman-Pearson approach to statistical inferences can be used to reject effect sizes larger than the equivalence bounds with a pre-specified Type 1 error rate.
Equivalence testing originates from the field of clinical trials. One application, known as a non-inferiority trial, is used to show that a new drug that is cheaper than available alternatives works as well as an existing drug. In essence, equivalence tests consist of calculating a confidence interval around an observed effect size and rejecting effects more extreme than the equivalence bound when the confidence interval does not overlap with the equivalence bound. In two-sided tests, both upper and lower equivalence bounds are specified. In non-inferiority trials, where the goal is to test the hypothesis that a new treatment is not worse than existing treatments, only a lower equivalence bound is specified.
Equivalence tests can be performed in addition to null-hypothesis significance tests. This might prevent common misinterpretations of p-values larger than the alpha level as support for the absence of a true effect. Furthermore, equivalence tests can identify effects that are statistically significant but practically insignificant, whenever effects are statistically different from zero, but also statistically smaller than any effect size deemed worthwhile (see the first figure). Equivalence tests were originally used in areas such as pharmaceutics, frequently in bioequivalence trials. However, these tests can be applied to any instance where the research question asks whether the means of two sets of scores are practically or theoretically equivalent. Equivalence tests have recently been introduced in evaluation of measurement devices, artificial intelligence, exercise physiology and sports science, political science, psychology, and economics. Several tests exist for equivalence analyses; however, more recently the two-one-sided t-tests (TOST) procedure has been garnering considerable attention. As outlined below, this approach is an adaptation of the widely known t-test.
TOST procedure
A very simple equivalence testing approach is the ‘two one-sided t-tests’ (TOST) procedure. In the TOST procedure an upper (ΔU) and lower (–ΔL) equivalence bound is specified based on the smallest effect size of interest (e.g., a positive or negative difference of d = 0.3). Two composite null hypotheses are tested: H01: Δ ≤ –ΔL and H02: Δ ≥ ΔU. When both these one-sided tests can be statistically rejected, we can conclude that –ΔL < Δ < ΔU, or that the observed effect falls within the equivalence bounds, is statistically smaller than any effect deemed worthwhile, and is considered practically equivalent. Alternatives to the TOST procedure have been developed as well. A recent modification to TOST makes the approach feasible in cases of repeated measures and assessing multiple variables.
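A minimal sketch of the TOST idea for two independent groups, assuming Python with SciPy; the simulated data, the equivalence bound delta and the simple choice of degrees of freedom are illustrative assumptions rather than a validated implementation (library implementations exist, for example in statsmodels).

```python
# Sketch: two one-sided tests (TOST) for the difference between two group means,
# with symmetric equivalence bounds (-delta, +delta) on the raw scale.
import numpy as np
from scipy import stats

def tost_two_sample(a, b, delta):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))  # Welch-style standard error
    df = len(a) + len(b) - 2                                       # simple df choice for illustration
    t_lower = (diff + delta) / se        # tests H01: true difference <= -delta
    t_upper = (diff - delta) / se        # tests H02: true difference >= +delta
    p_lower = stats.t.sf(t_lower, df)    # one-sided p-value against H01
    p_upper = stats.t.cdf(t_upper, df)   # one-sided p-value against H02
    return max(p_lower, p_upper)         # equivalence is supported if this is below alpha

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 50)
b = rng.normal(0.1, 1.0, 50)
print(tost_two_sample(a, b, delta=0.5))  # below alpha -> reject effects more extreme than ±0.5
```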
Comparison between t-test and equivalence test
The equivalence test can be induced from the t-test. Consider a t-test at the significance level αt-test with a power of 1−βt-test for a relevant effect size dr. If Δ = dr, αequiv.-test = βt-test, and βequiv.-test = αt-test, i.e. the error types (type I and type II) are interchanged between the t-test and the equivalence test, then the t-test will obtain the same results as the equivalence test. To achieve this for the t-test, either the sample size calculation needs to be carried out correctly, or the t-test significance level αt-test needs to be adjusted, referred to as the so-called revised t-test. Both approaches have difficulties in practice since sample size planning relies on unverifiable assumptions of the standard deviation, and the revised t-test yields numerical problems. Preserving the test behavior, those limitations can be removed by using an equivalence test.
The figure below allows a visual comparison of the equivalence test and the t-test when the sample size calculation is affected by differences between the a priori standard deviation σ and the sample's standard deviation σ̂, which is a common problem. Using an equivalence test instead of a t-test additionally ensures that αequiv.-test is bounded, which the t-test does not do in the case that σ̂ > σ, with the type II error growing arbitrarily large. On the other hand, having σ̂ < σ results in the t-test being stricter than the dr specified in the planning, which may randomly penalize the sample source (e.g., a device manufacturer). This makes the equivalence test safer to use.
See also
Bootstrap (statistics)-based testing
Literature
The papers below are good introductions to equivalence testing.
Westlake, W. J. (1976). "Symmetrical confidence intervals for bioequivalence trials". Biometrics. 32 (4): 741–744. doi:10.2307/2529265. JSTOR 2529265.
Berger, Roger L.; Hsu, Jason C. (1996). "Bioequivalence trials, intersection-union tests and equivalence confidence sets". Statistical Science. 11 (4): 283–319. doi:10.1214/ss/1032280304.
Walker, Esteban; Nowacki, Amy S. (2011). "Understanding Equivalence and Noninferiority Testing". Journal of General Internal Medicine. 26 (2): 192–196. doi:10.1007/s11606-010-1513-8. PMC 3019319. PMID 20857339.
Rainey, Carlisle (2014). "Arguing for a Negligible Effect" (PDF). American Journal of Political Science. 58 (4): 1083–1091. doi:10.1111/ajps.12102. Retrieved 2025-06-01.
Lakens, Daniël (2017). "Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses". Social Psychological and Personality Science. 8 (4): 355–362. doi:10.1177/1948550617697177. PMC 5502906.
Lakens, Daniël; Isager, P. M.; Scheel, A. M. (2018). "Equivalence Testing for Psychological Research: A Tutorial". Advances in Methods and Practices in Psychological Science. 1 (2): 259–269. doi:10.1177/2515245918770963.
Fitzgerald, Jack (2025). "The Need for Equivalence Testing in Economics". MetaArXiv. Retrieved 2025-06-01.
An applied introduction to equivalence testing appears in Section 4.2 of Vincent Arel-Bundock’s open-access book Model to Meaning.
Multi-armed bandit
https://en.wikipedia.org/wiki/Multi-armed_bandit
In probability theory and machine learning, the multi-armed bandit problem (sometimes called the K- or N-armed bandit problem) is named from imagining a gambler at a row of slot machines (sometimes known as "one-armed bandits"), who has to decide which machines to play, how many times to play each machine and in which order to play them, and whether to continue with the current machine or try a different machine.
More generally, it is a problem in which a decision maker iteratively selects one of multiple fixed choices (i.e., arms or actions) when the properties of each choice are only partially known at the time of allocation, and may become better understood as time passes. A fundamental aspect of bandit problems is that choosing an arm does not affect the properties of the arm or other arms.
Instances of the multi-armed bandit problem include the task of iteratively allocating a fixed, limited set of resources between competing (alternative) choices in a way that minimizes the regret. A notable alternative setup for the multi-armed bandit problem includes the "best arm identification (BAI)" problem where the goal is instead to identify the best choice by the end of a finite number of rounds.
The multi-armed bandit problem is a classic reinforcement learning problem that exemplifies the exploration–exploitation tradeoff dilemma. In contrast to general reinforcement learning, the selected actions in bandit problems do not affect the reward distribution of the arms.
The multi-armed bandit problem also falls into the broad category of stochastic scheduling.
In the problem, each machine provides a random reward from a probability distribution specific to that machine, that is not known a priori. The objective of the gambler is to maximize the sum of rewards earned through a sequence of lever pulls. The crucial tradeoff the gambler faces at each trial is between "exploitation" of the machine that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other machines. The trade-off between exploration and exploitation is also faced in machine learning. In practice, multi-armed bandits have been used to model problems such as managing research projects in a large organization, like a science foundation or a pharmaceutical company. In early versions of the problem, the gambler begins with no initial knowledge about the machines.
Herbert Robbins in 1952, realizing the importance of the problem, constructed convergent population selection strategies in "some aspects of the sequential design of experiments". A theorem, the Gittins index, first published by John C. Gittins, gives an optimal policy for maximizing the expected discounted reward.
Empirical motivation
The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge (called "exploration") and optimize their decisions based on existing knowledge (called "exploitation"). The agent attempts to balance these competing tasks in order to maximize their total value over the period of time considered. There are many practical applications of the bandit model, for example:
clinical trials investigating the effects of different experimental treatments while minimizing patient losses,
adaptive routing efforts for minimizing delays in a network,
financial portfolio design
In these practical examples, the problem requires balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as the exploitation vs. exploration tradeoff in machine learning.
The model has also been used to control dynamic allocation of resources to different projects, answering the question of which project to work on, given uncertainty about the difficulty and payoff of each possibility.
Originally considered by Allied scientists in World War II, it proved so intractable that, according to Peter Whittle, the problem was proposed to be dropped over Germany so that German scientists could also waste their time on it.
The version of the problem now commonly analyzed was formulated by Herbert Robbins in 1952.
The multi-armed bandit model
The multi-armed bandit (short: bandit or MAB) can be seen as a set of real distributions B = {R_1, …, R_K}, each distribution being associated with the rewards delivered by one of the K levers (K a positive integer). Let μ_1, …, μ_K be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizon H is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process. The regret ρ after T rounds is defined as the expected difference between the reward sum associated with an optimal strategy and the sum of the collected rewards:
\rho = T\mu^* - \sum_{t=1}^{T} \hat{r}_t,
where μ* is the maximal reward mean, μ* = max_k {μ_k}, and r̂_t is the reward in round t.
A zero-regret strategy is a strategy whose average regret per round ρ/T tends to zero with probability 1 when the number of played rounds tends to infinity. Intuitively, zero-regret strategies are guaranteed to converge to a (not necessarily unique) optimal strategy if enough rounds are played.
Variations
A common formulation is the Binary multi-armed bandit or Bernoulli multi-armed bandit, which issues a reward of one with probability p, and otherwise a reward of zero.
Another formulation of the multi-armed bandit has each arm representing an independent Markov machine. Each time a particular arm is played, the state of that machine advances to a new one, chosen according to the Markov state evolution probabilities. There is a reward depending on the current state of the machine. In a generalization called the "restless bandit problem", the states of non-played arms can also evolve over time. There has also been discussion of systems where the number of choices (about which arm to play) increases over time.
Computer science researchers have studied multi-armed bandits under worst-case assumptions, obtaining algorithms to minimize regret in both finite and infinite (asymptotic) time horizons for both stochastic and non-stochastic arm payoffs.
Best arm identification
An important variation of the classical regret minimization problem in multi-armed bandits is best arm identification (BAI), also known as pure exploration. This problem is crucial in various applications, including clinical trials, adaptive routing, recommendation systems, and A/B testing.
In BAI, the objective is to identify the arm having the highest expected reward. An algorithm in this setting is characterized by a sampling rule, a decision rule, and a stopping rule, described as follows:
Sampling rule: (a_t)_{t≥1} is a sequence of actions, one at each time step
Stopping rule: τ is a (random) stopping time which suggests when to stop collecting samples
Decision rule: â_τ is a guess on the best arm based on the data collected up to time τ
There are two predominant settings in BAI:
Fixed budget setting: Given a time horizon T ≥ 1, the objective is to identify the arm with the highest expected reward, a* ∈ arg max_k μ_k, minimizing the probability of error δ.
Fixed confidence setting: Given a confidence level δ ∈ (0, 1), the objective is to identify the arm with the highest expected reward, a* ∈ arg max_k μ_k, with the least possible amount of trials and with probability of error P(â_τ ≠ a*) ≤ δ.
For example, a decision rule can be built from the empirical statistics accumulated for each machine. Let m_1, m_2, … record the outcomes (gains and losses) observed each time the corresponding machine's lever is pulled, and let M = m_1 + m_2 + ⋯ be their overall total; from these one can form per-machine sums, ratios or means as empirical estimates of each machine's payoff and use them to guide the next pull, subject to constraints such as the total amount available and the maximum one is willing to spend. Such constructions can be combined with other restrictions, for example a limit on the horizon T or on time.
Bandit strategies
A major breakthrough was the construction of optimal population selection strategies, or policies (that possess uniformly maximum convergence rate to the population with highest mean) in the work described below.
Optimal solutions
In the paper "Asymptotically efficient adaptive allocation rules", Lai and Robbins (following papers of Robbins and his co-workers going back to Robbins in the year 1952) constructed convergent population selection policies that possess the fastest rate of convergence (to the population with highest mean) for the case that the population reward distributions are the one-parameter exponential family. Then, in Katehakis and Robbins simplifications of the policy and the main proof were given for the case of normal populations with known variances. The next notable progress was obtained by Burnetas and Katehakis in the paper "Optimal adaptive policies for sequential allocation problems", where index based policies with uniformly maximum convergence rate were constructed, under more general conditions that include the case in which the distributions of outcomes from each population depend on a vector of unknown parameters. Burnetas and Katehakis (1996) also provided an explicit solution for the important case in which the distributions of outcomes follow arbitrary (i.e., non-parametric) discrete, univariate distributions.
Later in "Optimal adaptive policies for Markov decision processes" Burnetas and Katehakis studied the much larger model of Markov Decision Processes under partial information, where the transition law and/or the expected one period rewards may depend on unknown parameters. In this work, the authors constructed an explicit form for a class of adaptive policies with uniformly maximum convergence rate properties for the total expected finite horizon reward under sufficient assumptions of finite state-action spaces and irreducibility of the transition law. A main feature of these policies is that the choice of actions, at each state and time period, is based on indices that are inflations of the right-hand side of the estimated average reward optimality equations. These inflations have recently been called the optimistic approach in the work of Tewari and Bartlett, Ortner Filippi, Cappé, and Garivier, and Honda and Takemura.
For Bernoulli multi-armed bandits, Pilarski et al. studied computation methods of deriving fully optimal solutions (not just asymptotically) using dynamic programming in the paper "Optimal Policy for Bernoulli Bandits: Computation and Algorithm Gauge." Via indexing schemes, lookup tables, and other techniques, this work provided practically applicable optimal solutions for Bernoulli bandits provided that time horizons and numbers of arms did not become excessively large. Pilarski et al. later extended this work in "Delayed Reward Bernoulli Bandits: Optimal Policy and Predictive Meta-Algorithm PARDI" to create a method of determining the optimal policy for Bernoulli bandits when rewards may not be immediately revealed following a decision and may be delayed. This method relies upon calculating expected values of reward outcomes which have not yet been revealed and updating posterior probabilities when rewards are revealed.
When optimal solutions to multi-arm bandit tasks are used to derive the value of animals' choices, the activity of neurons in the amygdala and ventral striatum encodes the values derived from these policies, and can be used to decode when the animals make exploratory versus exploitative choices. Moreover, optimal policies better predict animals' choice behavior than alternative strategies (described below). This suggests that the optimal solutions to multi-arm bandit problems are biologically plausible, despite being computationally demanding.
Approximate solutions
Many strategies exist which provide an approximate solution to the bandit problem, and can be put into the four broad categories detailed below.
Semi-uniform strategies
Semi-uniform strategies were the earliest (and simplest) strategies discovered to approximately solve the bandit problem. All those strategies have in common a greedy behavior where the best lever (based on previous observations) is always pulled except when a (uniformly) random action is taken.
Epsilon-greedy strategy: The best lever is selected for a proportion 1 − ϵ of the trials, and a lever is selected at random (with uniform probability) for a proportion ϵ. A typical parameter value might be ϵ = 0.1, but this can vary widely depending on circumstances and predilections (a minimal simulation sketch of this strategy appears after this list).
Epsilon-first strategy: A pure exploration phase is followed by a pure exploitation phase. For
N
{\displaystyle N}
trials in total, the exploration phase occupies
ϵ
N
{\displaystyle \epsilon N}
trials and the exploitation phase
(
1
−
ϵ
)
N
{\displaystyle (1-\epsilon )N}
trials. During the exploration phase, a lever is randomly selected (with uniform probability); during the exploitation phase, the best lever is always selected.
Epsilon-decreasing strategy: Similar to the epsilon-greedy strategy, except that the value of ε decreases as the experiment progresses, resulting in highly explorative behaviour at the start and highly exploitative behaviour at the finish.
Adaptive epsilon-greedy strategy based on value differences (VDBE): Similar to the epsilon-decreasing strategy, except that epsilon is reduced on the basis of the learning progress instead of manual tuning (Tokic, 2010). High fluctuations in the value estimates lead to a high epsilon (high exploration, low exploitation); low fluctuations lead to a low epsilon (low exploration, high exploitation). Further improvements can be achieved by softmax-weighted action selection in the case of exploratory actions (Tokic & Palm, 2011).
Adaptive epsilon-greedy strategy based on Bayesian ensembles (Epsilon-BMC): An adaptive epsilon strategy for reinforcement learning similar to VDBE, with monotone convergence guarantees. In this framework, the epsilon parameter is viewed as the expectation of a posterior distribution weighting a greedy agent (which fully trusts the learned reward) and a uniform learning agent (which distrusts the learned reward). This posterior is approximated using a suitable Beta distribution under the assumption of normality of observed rewards. In order to address the possible risk of decreasing epsilon too quickly, uncertainty in the variance of the learned reward is also modeled and updated using a normal-gamma model (Gimelfarb et al., 2019).
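As an illustration of the semi-uniform family, the following minimal Python sketch runs the epsilon-greedy rule against simulated Bernoulli arms; the arm probabilities, horizon, and seed are invented for the example and are not taken from any particular library.

```python
import random

def epsilon_greedy(true_probs, epsilon=0.1, n_trials=10_000, seed=0):
    """Epsilon-greedy on simulated Bernoulli arms; returns estimated means and pull counts."""
    rng = random.Random(seed)
    k = len(true_probs)
    counts = [0] * k                                        # pulls per arm
    values = [0.0] * k                                      # running mean reward per arm
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                          # explore uniformly at random
        else:
            arm = max(range(k), key=lambda i: values[i])    # exploit the best estimate so far
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
    return values, counts

# The best arm (0.7) should end up with by far the most pulls.
print(epsilon_greedy([0.3, 0.5, 0.7]))
```

An epsilon-first variant would instead explore uniformly for the first εN trials and then always exploit, while an epsilon-decreasing variant would shrink epsilon over time, for example as ε_t = min(1, c/t).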
Probability matching strategies
Probability matching strategies reflect the idea that the number of pulls for a given lever should match its actual probability of being the optimal lever. Probability matching strategies are also known as Thompson sampling or Bayesian Bandits, and are straightforward to implement when one can sample from the posterior distribution of the mean value of each alternative.
Probability matching strategies also admit solutions to so-called contextual bandit problems.
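For Bernoulli rewards, the posterior over each arm's mean is a Beta distribution, so probability matching reduces to drawing one sample per arm and pulling the arm whose sample is largest. A minimal sketch under those assumptions (uniform Beta(1, 1) priors; the reward probabilities are invented for the example):

```python
import random

def thompson_bernoulli(true_probs, n_trials=10_000, seed=0):
    """Thompson sampling with Beta(1, 1) priors on simulated Bernoulli arms."""
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k                                     # Beta parameters: 1 + successes
    beta = [1] * k                                      # Beta parameters: 1 + failures
    for _ in range(n_trials):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])   # each arm is pulled with its posterior probability of being best
        if rng.random() < true_probs[arm]:
            alpha[arm] += 1                             # posterior update on a success
        else:
            beta[arm] += 1                              # posterior update on a failure
    return alpha, beta
```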
Pricing strategies
Pricing strategies establish a price for each lever. For example, as illustrated with the POKER algorithm, the price can be the sum of the expected reward plus an estimate of the extra future reward that will be gained through the additional knowledge. The lever with the highest price is always pulled.
Contextual bandit
A useful generalization of the multi-armed bandit is the contextual multi-armed bandit. At each iteration an agent still has to choose between arms, but it also observes a d-dimensional feature vector (the context vector), which it can use together with the rewards of the arms played in the past to choose which arm to play. Over time, the learner's aim is to collect enough information about how the context vectors and rewards relate to each other that it can predict the next best arm to play by looking at the feature vectors.
Approximate solutions for contextual bandit
Many strategies exist that provide an approximate solution to the contextual bandit problem, and can be put into two broad categories detailed below.
Online linear bandits
LinUCB (Upper Confidence Bound) algorithm: the authors assume a linear dependency between the expected reward of an action and its context, and model the representation space using a set of linear predictors (a minimal sketch follows this list).
LinRel (Linear Associative Reinforcement Learning) algorithm: Similar to LinUCB, but utilizes singular value decomposition rather than ridge regression to obtain an estimate of confidence.
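A minimal numpy sketch of a LinUCB-style policy in the disjoint-arm form is given below; the class name, the identity-matrix regularisation, and the exploration parameter alpha are illustrative choices rather than a canonical implementation.

```python
import numpy as np

class LinUCBArm:
    """Per-arm ridge-regression state for a LinUCB-style contextual bandit (disjoint model)."""
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)          # regularised design matrix (identity prior, an assumption)
        self.b = np.zeros(d)        # accumulated reward-weighted contexts

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                                    # ridge estimate of the weights
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)    # predicted reward + confidence width

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

def choose_arm(arms, x):
    """Pull the arm with the largest upper confidence bound for context vector x."""
    return max(range(len(arms)), key=lambda i: arms[i].ucb(x))
```

Each arm keeps its own ridge-regression estimate; the square-root term widens the score of arms whose contexts have rarely been observed, which is what drives exploration.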
Online non-linear bandits
UCBogram algorithm: The nonlinear reward functions are estimated using a piecewise constant estimator called a regressogram in nonparametric regression. Then, UCB is employed on each constant piece. Successive refinements of the partition of the context space are scheduled or chosen adaptively.
Generalized linear algorithms: The reward distribution follows a generalized linear model, an extension to linear bandits.
KernelUCB algorithm: a kernelized non-linear version of LinUCB, with efficient implementation and finite-time analysis.
Bandit Forest algorithm: a random forest is built and analyzed with respect to the random forest built knowing the joint distribution of contexts and rewards.
Oracle-based algorithm: The algorithm reduces the contextual bandit problem to a series of supervised learning problems, and does not rely on the typical realizability assumption on the reward function.
Constrained contextual bandit
In practice, there is usually a cost associated with the resource consumed by each action, and the total cost is limited by a budget in many applications, such as crowdsourcing and clinical trials. The constrained contextual bandit (CCB) is a model that considers both the time and budget constraints in a multi-armed bandit setting.
A. Badanidiyuru et al. first studied contextual bandits with budget constraints, also referred to as Resourceful Contextual Bandits, and showed that a regret of O(√T) is achievable. However, their work focuses on a finite set of policies, and the algorithm is computationally inefficient.
A simple algorithm with logarithmic regret is proposed in:
UCB-ALP algorithm: UCB-ALP is a simple algorithm that combines the UCB method with an Adaptive Linear Programming (ALP) algorithm, and can be easily deployed in practical systems. It is the first work that shows how to achieve logarithmic regret in constrained contextual bandits. Although it is devoted to a special case with a single budget constraint and fixed cost, the results shed light on the design and analysis of algorithms for more general CCB problems.
Adversarial bandit
Another variant of the multi-armed bandit problem is called the adversarial bandit, first introduced by Auer and Cesa-Bianchi (1998). In this variant, at each iteration, an agent chooses an arm and an adversary simultaneously chooses the payoff structure for each arm. This is one of the strongest generalizations of the bandit problem, as it removes all distributional assumptions, and a solution to the adversarial bandit problem is a generalized solution to the more specific bandit problems.
Example: Iterated prisoner's dilemma
An example often considered for adversarial bandits is the iterated prisoner's dilemma. In this example, each adversary has two arms to pull: they can either Deny or Confess. Standard stochastic bandit algorithms do not work very well in this setting. For example, if the opponent cooperates in the first 100 rounds, defects for the next 200, then cooperates in the following 300, and so on, then algorithms such as UCB will not be able to react quickly to these changes. This is because, after a certain point, sub-optimal arms are rarely pulled, in order to limit exploration and focus on exploitation. When the environment changes, the algorithm is unable to adapt, or may not even detect the change.
Approximate solutions
Exp3
EXP3 is a popular algorithm for adversarial multiarmed bandits, suggested and analyzed in this setting by Auer et al. [2002b].
Recently there has been increased interest in the performance of this algorithm in the stochastic setting, due to its new applications to stochastic multi-armed bandits with side information [Seldin et al., 2011] and to multi-armed bandits in the mixed stochastic-adversarial setting [Bubeck and Slivkins, 2012].
The paper presented an empirical evaluation and improved analysis of the performance of the EXP3 algorithm in the stochastic setting, as well as a modification of the EXP3 algorithm capable of achieving "logarithmic" regret in the stochastic environment.
Algorithm
Parameters: Real γ ∈ (0, 1]
Initialisation: ω_i(1) = 1 for i = 1, ..., K
For each t = 1, 2, ..., T
1. Set {\displaystyle p_{i}(t)=(1-\gamma ){\frac {\omega _{i}(t)}{\sum _{j=1}^{K}\omega _{j}(t)}}+{\frac {\gamma }{K}}} for i = 1, ..., K
2. Draw i_t randomly according to the probabilities p_1(t), ..., p_K(t)
3. Receive reward x_{i_t}(t) ∈ [0, 1]
4. For j = 1, ..., K, set:
{\displaystyle {\hat {x}}_{j}(t)={\begin{cases}x_{j}(t)/p_{j}(t)&{\text{if }}j=i_{t}\\0,&{\text{otherwise}}\end{cases}}}
{\displaystyle \omega _{j}(t+1)=\omega _{j}(t)\exp(\gamma {\hat {x}}_{j}(t)/K)}
Explanation
Exp3 divides its choice between exploitation and exploration: with probability 1 − γ it draws an arm in proportion to the weights, preferring arms with higher weights (exploitation), and with probability γ it chooses an arm uniformly at random (exploration). After the reward is received, the weights are updated; the exponential update rapidly increases the weight of arms that perform well.
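The pseudocode above translates directly into code. The following minimal Python sketch assumes a hypothetical callback reward_fn(t, arm) that returns the adversary's reward in [0, 1] for the pulled arm at round t; the function name and the default gamma are illustrative.

```python
import math
import random

def exp3(reward_fn, K, T, gamma=0.1, seed=0):
    """Exp3 sketch: reward_fn(t, arm) must return the reward in [0, 1] of the pulled arm."""
    rng = random.Random(seed)
    weights = [1.0] * K                                     # omega_i(1) = 1
    history = []
    for t in range(T):
        m = max(weights)
        if m > 1e100:                                       # rescale to avoid float overflow (does not change probs)
            weights = [w / m for w in weights]
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / K for w in weights]  # step 1: mix weights with uniform
        arm = rng.choices(range(K), weights=probs)[0]       # step 2: draw i_t
        x = reward_fn(t, arm)                               # step 3: receive reward
        x_hat = x / probs[arm]                              # step 4: importance-weighted estimate
        weights[arm] *= math.exp(gamma * x_hat / K)         # only the pulled arm's weight changes
        history.append((arm, x))
    return history
```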
Regret analysis
The (external) regret of the Exp3 algorithm is at most {\displaystyle O({\sqrt {KT\log(K)}})}.
Follow the perturbed leader (FPL) algorithm
Algorithm
Parameters: Real η
Initialisation: R_i(1) = 0 for all i
For each t = 1, 2, ..., T
1. For each arm, generate random noise from an exponential distribution: {\displaystyle \forall i:Z_{i}(t)\sim Exp(\eta )}
2. Pull arm {\displaystyle I(t)=\arg \max _{i}\{R_{i}(t)+Z_{i}(t)\}} (add noise to each arm and pull the one with the highest value)
3. Update the value of the pulled arm: {\displaystyle R_{I(t)}(t+1)=R_{I(t)}(t)+x_{I(t)}(t)}
The rest remains the same
Explanation
We follow the arm that appears to have performed best so far, after adding exponential noise to each arm's cumulative reward to provide exploration.
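A minimal Python sketch of this rule is given below; as with the Exp3 sketch, reward_fn(t, arm) is a hypothetical callback returning the reward of the pulled arm, and the default eta is an arbitrary illustrative value.

```python
import random

def follow_the_perturbed_leader(reward_fn, K, T, eta=1.0, seed=0):
    """FPL sketch: perturb each arm's cumulative reward with Exp(eta) noise and play the leader."""
    rng = random.Random(seed)
    cumulative = [0.0] * K                                  # R_i(1) = 0
    choices = []
    for t in range(T):
        perturbed = [cumulative[i] + rng.expovariate(eta) for i in range(K)]  # R_i(t) + Z_i(t)
        arm = max(range(K), key=lambda i: perturbed[i])     # pull the arm with the highest perturbed value
        cumulative[arm] += reward_fn(t, arm)                # update only the pulled arm's total
        choices.append(arm)
    return choices
```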
Exp3 vs FPL
Infinite-armed bandit
In the original specification and in the above variants, the bandit problem is specified with a discrete and finite number of arms, often indicated by the variable K. In the infinite-armed case, introduced by Agrawal (1995), the "arms" are a continuous variable in K dimensions.
Non-stationary bandit
This framework refers to the multi-armed bandit problem in a non-stationary setting (i.e., in the presence of concept drift). In the non-stationary setting, it is assumed that the expected reward for an arm k can change at every time step t ∈ 𝒯: μ_{t−1}^k ≠ μ_t^k. Thus, μ_t^k no longer represents the whole sequence of expected (stationary) rewards for arm k. Instead, μ^k denotes the sequence of expected rewards for arm k, defined as {\displaystyle \mu ^{k}=\{\mu _{t}^{k}\}_{t=1}^{T}}.
A dynamic oracle represents the optimal policy to be compared with other policies in the non-stationary setting. The dynamic oracle optimises the expected reward at each step t ∈ 𝒯 by always selecting the best arm, with expected reward μ_t^*. Thus, the cumulative expected reward 𝒟(T) for the dynamic oracle at final time step T is defined as:
{\displaystyle {\mathcal {D}}(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}.}
Hence, the regret ρ^π(T) for policy π is computed as the difference between 𝒟(T) and the cumulative expected reward at step T for policy π:
{\displaystyle \rho ^{\pi }(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right]={\mathcal {D}}(T)-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right].}
Garivier and Moulines derived some of the first results for bandit problems in which the underlying model can change during play. A number of algorithms were presented to deal with this case, including Discounted UCB and Sliding-Window UCB. A similar approach based on the Thompson Sampling algorithm is the f-Discounted-Sliding-Window Thompson Sampling (f-dsw TS) proposed by Cavenaghi et al. The f-dsw TS algorithm exploits a discount factor on the reward history and an arm-related sliding window to counteract concept drift in non-stationary environments. Another work by Burtini et al. introduces a weighted least squares Thompson sampling approach (WLS-TS), which proves beneficial in both the known and unknown non-stationary cases.
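To make the sliding-window idea concrete, the following minimal Python sketch implements a Sliding-Window-UCB-style policy in which empirical means and counts are computed only over the last `window` pulls; the window size and exploration constant are illustrative assumptions, and the sketch is not taken from the cited papers.

```python
import math
from collections import deque

def sliding_window_ucb(reward_fn, K, T, window=500, c=2.0):
    """Sliding-window UCB sketch: statistics are computed only over the most recent pulls."""
    history = deque()                        # (arm, reward) pairs currently inside the window
    choices = []
    for t in range(T):
        counts = [0] * K
        sums = [0.0] * K
        for arm_i, r in history:             # recompute windowed statistics
            counts[arm_i] += 1
            sums[arm_i] += r
        def index(i):
            if counts[i] == 0:
                return float("inf")           # force at least one recent pull of every arm
            mean = sums[i] / counts[i]
            bonus = math.sqrt(c * math.log(min(t + 1, window)) / counts[i])
            return mean + bonus               # windowed mean plus exploration bonus
        arm = max(range(K), key=index)
        history.append((arm, reward_fn(t, arm)))
        if len(history) > window:
            history.popleft()                 # forget observations older than the window
        choices.append(arm)
    return choices
```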
Other variants
Many variants of the problem have been proposed in recent years.
Dueling bandit
The dueling bandit variant was introduced by Yue et al. (2012) to model the exploration-versus-exploitation tradeoff for relative feedback.
In this variant the gambler is allowed to pull two levers at the same time, but only receives binary feedback indicating which lever provided the better reward. The difficulty of this problem stems from the fact that the gambler has no way of directly observing the reward of their actions.
The earliest algorithms for this problem were InterleaveFiltering and Beat-The-Mean.
The relative feedback of dueling bandits can also lead to voting paradoxes. A solution is to take the Condorcet winner as a reference.
More recently, researchers have generalized algorithms from traditional MAB to dueling bandits: Relative Upper Confidence Bounds (RUCB), Relative EXponential weighing (REX3),
Copeland Confidence Bounds (CCB), Relative Minimum Empirical Divergence (RMED), and Double Thompson Sampling (DTS).
Collaborative bandit
Approaches using multiple bandits that cooperate by sharing knowledge in order to better optimize their performance started in 2013 with "A Gang of Bandits", an algorithm relying on a similarity graph between the different bandit problems to share knowledge. The need for a similarity graph was removed in 2014 by the work on the CLUB algorithm. Following this work, several other researchers created algorithms to learn multiple models at the same time under bandit feedback.
For example, COFIBA was introduced by Li, Karatzoglou and Gentile (SIGIR 2016), in contrast to classical collaborative filtering and content-based filtering methods, which try to learn a static recommendation model from training data.
Combinatorial bandit
The Combinatorial Multiarmed Bandit (CMAB) problem arises when, instead of a single discrete variable to choose from, an agent needs to choose values for a set of variables. Assuming each variable is discrete, the number of possible choices per iteration is exponential in the number of variables. Several CMAB settings have been studied in the literature, from settings where the variables are binary to more general settings where each variable can take an arbitrary set of values.
See also
Gittins index – a powerful, general strategy for analyzing bandit problems.
Greedy algorithm
Optimal stopping
Search theory
Stochastic scheduling
References
Further reading
Guha, S.; Munagala, K.; Shi, P. (2010), "Approximation algorithms for restless bandit problems", Journal of the ACM, 58: 1–50, arXiv:0711.3861, doi:10.1145/1870103.1870106, S2CID 1654066
Dayanik, S.; Powell, W.; Yamazaki, K. (2008), "Index policies for discounted bandit problems with availability constraints", Advances in Applied Probability, 40 (2): 377–400, doi:10.1239/aap/1214950209.
Powell, Warren B. (2007), "Chapter 10", Approximate Dynamic Programming: Solving the Curses of Dimensionality, New York: John Wiley and Sons, ISBN 978-0-470-17155-4.
Robbins, H. (1952), "Some aspects of the sequential design of experiments", Bulletin of the American Mathematical Society, 58 (5): 527–535, doi:10.1090/S0002-9904-1952-09620-8.
Sutton, Richard; Barto, Andrew (1998), Reinforcement Learning, MIT Press, ISBN 978-0-262-19398-6, archived from the original on 2013-12-11.
Allesiardo, Robin (2014), "A Neural Networks Committee for the Contextual Bandit Problem", Neural Information Processing – 21st International Conference, ICONIP 2014, Malaysia, November 3–6, 2014, Proceedings, Lecture Notes in Computer Science, vol. 8834, Springer, pp. 374–381, arXiv:1409.8191, doi:10.1007/978-3-319-12637-1_47, ISBN 978-3-319-12636-4, S2CID 14155718.
Weber, Richard (1992), "On the Gittins index for multiarmed bandits", Annals of Applied Probability, 2 (4): 1024–1033, doi:10.1214/aoap/1177005588, JSTOR 2959678.
Katehakis, M.; C. Derman (1986), "Computing optimal sequential allocation rules in clinical trials", Adaptive statistical procedures and related topics, Institute of Mathematical Statistics Lecture Notes - Monograph Series, vol. 8, pp. 29–39, doi:10.1214/lnms/1215540286, ISBN 978-0-940600-09-6, JSTOR 4355518.
Katehakis, Michael N.; Veinott, Jr., Arthur F. (1987), "The multi-armed bandit problem: decomposition and computation", Mathematics of Operations Research, 12 (2): 262–268, doi:10.1287/moor.12.2.262, JSTOR 3689689, S2CID 656323
External links
MABWiser, open-source Python implementation of bandit strategies that supports context-free, parametric and non-parametric contextual policies with built-in parallelization and simulation capability.
PyMaBandits, open-source implementation of bandit strategies in Python and Matlab.
Contextual, open-source R package facilitating the simulation and evaluation of both context-free and contextual Multi-Armed Bandit policies.
bandit.sourceforge.net Bandit project, open-source implementation of bandit strategies.
Banditlib, open-source implementation of bandit strategies in C++.
Leslie Pack Kaelbling and Michael L. Littman (1996). Exploitation versus Exploration: The Single-State Case.
Tutorial: Introduction to Bandits: Algorithms and Theory. Part1. Part2.
Feynman's restaurant problem, a classic example (with known answer) of the exploitation vs. exploration tradeoff.
Bandit algorithms vs. A-B testing.
S. Bubeck and N. Cesa-Bianchi A Survey on Bandits.
A Survey on Contextual Multi-armed Bandits, a survey/tutorial for Contextual Bandits.
Blog post on multi-armed bandit strategies, with Python code.
Animated, interactive plots illustrating Epsilon-greedy, Thompson sampling, and Upper Confidence Bound exploration/exploitation balancing strategies.
|
|
wiki::en::Thompson sampling
|
wiki
|
Thompson sampling
|
https://en.wikipedia.org/wiki/Thompson_sampling
|
en
|
[] |
Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that address the exploration–exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
Description
Consider a set of contexts 𝒳, a set of actions 𝒜, and rewards in ℝ. The aim of the player is to play actions under the various contexts so as to maximize the cumulative rewards. Specifically, in each round, the player obtains a context x ∈ 𝒳, plays an action a ∈ 𝒜 and receives a reward r ∈ ℝ following a distribution that depends on the context and the issued action.
The elements of Thompson sampling are as follows:
a likelihood function P(r | θ, a, x);
a set Θ of parameters θ of the distribution of r;
a prior distribution P(θ) on these parameters;
past observation triplets 𝒟 = {(x; a; r)};
a posterior distribution P(θ | 𝒟) ∝ P(𝒟 | θ)P(θ), where P(𝒟 | θ) is the likelihood function.
Thompson sampling consists of playing the action a* ∈ 𝒜 according to the probability that it maximizes the expected reward; action a* is chosen with probability
{\displaystyle \int \mathbb {I} \left[\mathbb {E} (r|a^{\ast },x,\theta )=\max _{a'}\mathbb {E} (r|a',x,\theta )\right]P(\theta |{\mathcal {D}})d\theta ,}
where 𝕀 is the indicator function.
In practice, the rule is implemented by sampling. In each round, parameters θ* are sampled from the posterior P(θ | 𝒟), and an action a* is chosen that maximizes 𝔼[r | θ*, a*, x], i.e. the expected reward given the sampled parameters, the action, and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round according to the posterior distribution, and then acts optimally according to them. In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques.
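As a concrete instance, when rewards are modelled as linear in an action's feature vector with Gaussian noise, the posterior over the weight vector remains Gaussian and a single posterior draw per round suffices. The following minimal numpy sketch illustrates this; the class name, the noise and prior variances, and the feature representation are assumptions made for the example.

```python
import numpy as np

class LinearThompsonSampling:
    """Thompson sampling with a Bayesian linear model: r = theta . phi(a, x) + Gaussian noise."""
    def __init__(self, d, noise_var=0.25, prior_var=1.0):
        self.noise_var = noise_var
        self.precision = np.eye(d) / prior_var   # posterior precision matrix (starts at the prior)
        self.b = np.zeros(d)                     # accumulated phi * reward / noise_var

    def select(self, features, rng):
        """features: array of shape (n_actions, d); returns the index of the chosen action."""
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.b
        theta = rng.multivariate_normal(mean, cov)   # one posterior draw per round
        return int(np.argmax(features @ theta))      # act greedily under the sampled beliefs

    def update(self, phi, reward):
        self.precision += np.outer(phi, phi) / self.noise_var
        self.b += phi * reward / self.noise_var

# Usage sketch: policy = LinearThompsonSampling(d=5); rng = np.random.default_rng(0)
# idx = policy.select(features, rng); policy.update(features[idx], observed_reward)
```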
History
Thompson sampling was originally described by Thompson in 1933. It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems. A first proof of convergence for the bandit case was shown in 1997. The first application to Markov decision processes was in 2000. A related approach (see Bayesian control rule) was published in 2010. In 2010 it was also shown that Thompson sampling is instantaneously self-correcting. Asymptotic convergence results for contextual bandits were published in 2011. Thompson sampling has been widely used in many online learning problems including A/B testing in website design and online advertising, and accelerated learning in decentralized decision making. A Double Thompson Sampling (D-TS) algorithm has been proposed for dueling bandits, a variant of traditional MAB, where feedback comes in the form of pairwise comparisons.
Relationship to other approaches
Probability matching
Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
Bayesian control rule
A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations. In this formulation, an agent is conceptualized as a mixture over a set of behaviours. As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent.
The setup is as follows. Let a_1, a_2, …, a_T be the actions issued by an agent up to time T, and let o_1, o_2, …, o_T be the observations gathered by the agent up to time T. Then, the agent issues the action a_{T+1} with probability:
{\displaystyle P(a_{T+1}|{\hat {a}}_{1:T},o_{1:T}),}
where the "hat"-notation
a
^
t
{\displaystyle {\hat {a}}_{t}}
denotes the fact that
a
t
{\displaystyle a_{t}}
is a causal intervention (see Causality), and not an ordinary observation. If the agent holds beliefs
θ
∈
Θ
{\displaystyle \theta \in \Theta }
over its behaviors, then the Bayesian control rule becomes
P
(
a
T
+
1
|
a
^
1
:
T
,
o
1
:
T
)
=
∫
Θ
P
(
a
T
+
1
|
θ
,
a
^
1
:
T
,
o
1
:
T
)
P
(
θ
|
a
^
1
:
T
,
o
1
:
T
)
d
θ
{\displaystyle P(a_{T+1}|{\hat {a}}_{1:T},o_{1:T})=\int _{\Theta }P(a_{T+1}|\theta ,{\hat {a}}_{1:T},o_{1:T})P(\theta |{\hat {a}}_{1:T},o_{1:T})\,d\theta }
,
where P(θ | â_{1:T}, o_{1:T}) is the posterior distribution over the parameter θ given actions a_{1:T} and observations o_{1:T}.
In practice, the Bayesian control rule amounts to sampling, at each time step, a parameter θ* from the posterior distribution P(θ | â_{1:T}, o_{1:T}), where the posterior distribution is computed using Bayes' rule by only considering the (causal) likelihoods of the observations o_1, o_2, …, o_T and ignoring the (causal) likelihoods of the actions a_1, a_2, …, a_T, and then by sampling the action a*_{T+1} from the action distribution P(a_{T+1} | θ*, â_{1:T}, o_{1:T}).
Upper-confidence-bound (UCB) algorithms
Thompson sampling and upper-confidence bound algorithms share a fundamental property that underlies many of their theoretical guarantees. Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic". Leveraging this property, one can translate regret bounds established for UCB algorithms to Bayesian regret bounds for Thompson sampling or unify regret analysis across both these algorithms and many classes of problems.
== References ==
|
|
wiki::en::Randomized controlled trial
|
wiki
|
Randomized controlled trial
|
https://en.wikipedia.org/wiki/Randomized_controlled_trial
|
en
|
[] |
A randomized controlled trial (abbreviated RCT) is a type of scientific experiment designed to evaluate the efficacy or safety of an intervention by minimizing bias through the random allocation of participants to one or more comparison groups.
In this design, at least one group receives the intervention under study (such as a drug, surgical procedure, medical device, diet, or diagnostic test), while another group receives an alternative treatment, a placebo, or standard care.
RCTs are a fundamental methodology in modern clinical trials and are considered one of the highest-quality sources of evidence in evidence-based medicine, due to their ability to reduce selection bias and the influence of confounding factors.
Participants who enroll in RCTs differ from one another in known and unknown ways that can influence study outcomes, and yet cannot be directly controlled. By randomly allocating participants among compared treatments, an RCT enables statistical control over these influences. Provided it is designed well, conducted properly, and enrolls enough participants, an RCT may achieve sufficient control over these confounding factors to deliver a useful comparison of the treatments studied.
Definition and examples
An RCT in clinical research typically compares a proposed new treatment against an existing standard of care; these are then termed the 'experimental' and 'control' treatments, respectively. When no such generally accepted treatment is available, a placebo may be used in the control group so that participants are blinded, or not given information, about their treatment allocations. This blinding principle is ideally also extended as much as possible to other parties including researchers, technicians, data analysts, and evaluators. Effective blinding experimentally isolates the physiological effects of treatments from various psychological sources of bias.
The randomness in the assignment of participants to treatments reduces selection bias and allocation bias, balancing both known and unknown prognostic factors, in the assignment of treatments. Blinding reduces other forms of experimenter and subject biases.
A well-blinded RCT is considered the gold standard for clinical trials. Blinded RCTs are commonly used to test the efficacy of medical interventions and may additionally provide information about adverse effects, such as drug reactions. A randomized controlled trial can provide compelling evidence that the study treatment causes an effect on human health.
The terms "RCT" and "randomized trial" are sometimes used synonymously, but the latter term omits mention of controls and can therefore describe studies that compare multiple treatment groups with each other in the absence of a control group. Similarly, the initialism is sometimes expanded as "randomized clinical trial" or "randomized comparative trial", leading to ambiguity in the scientific literature. Not all RCTs are randomized controlled trials (and some of them could never be, as in cases where controls would be impractical or unethical to use). The term randomized controlled clinical trial is an alternative term used in clinical research; however, RCTs are also employed in other research areas, including many of the social sciences.
History
In the posthumously published Ortus Medicinae (1648), Jan Baptist van Helmont made the first proposal of a RCT, to test two treatment regimes of fever. One treatment would be conducted by practitioners of Galenic medicine involving bloodletting and purging, and the other would be conducted by van Helmont. It is likely that he never conducted the trial, and merely proposed it as an experiment that could be conducted.
The first reported clinical trial was conducted by James Lind in 1747 to identify a treatment for scurvy. The first blind experiment was conducted by the French Royal Commission on Animal Magnetism in 1784 to investigate the claims of mesmerism. An early essay advocating the blinding of researchers came from Claude Bernard in the latter half of the 19th century. Bernard recommended that the observer of an experiment should not have knowledge of the hypothesis being tested. This suggestion contrasted starkly with the prevalent Enlightenment-era attitude that scientific observation can only be objectively valid when undertaken by a well-educated, informed scientist. The first study recorded to have a blinded researcher was published in 1907 by W. H. R. Rivers and H. N. Webber to investigate the effects of caffeine.
Randomized experiments first appeared in psychology, where they were introduced by Charles Sanders Peirce and Joseph Jastrow in the 1880s, and in education. The earliest experiments comparing treatment and control groups were published by Robert Woodworth and Edward Thorndike in 1901, and by John E. Coover and Frank Angell in 1907.
In the early 20th century, randomized experiments appeared in agriculture, due to Jerzy Neyman and Ronald A. Fisher. Fisher's experimental research and his writings popularized randomized experiments.
The first published Randomized Controlled Trial in medicine appeared in the 1948 paper entitled "Streptomycin treatment of pulmonary tuberculosis", which described a Medical Research Council investigation. One of the authors of that paper was Austin Bradford Hill, who is credited as having conceived the modern RCT.
Trial design was further influenced by the large-scale ISIS trials on heart attack treatments that were conducted in the 1980s.
By the late 20th century, RCTs were recognized as the standard method for "rational therapeutics" in medicine. As of 2004, more than 150,000 RCTs were in the Cochrane Library. To improve the reporting of RCTs in the medical literature, an international group of scientists and editors published Consolidated Standards of Reporting Trials (CONSORT) Statements in 1996, 2001 and 2010, and these have become widely accepted.
Ethics
Although the principle of clinical equipoise ("genuine uncertainty within the expert medical community... about the preferred treatment") common to clinical trials has been applied to RCTs, the ethics of RCTs have special considerations. For one, it has been argued that equipoise itself is insufficient to justify RCTs. For another, "collective equipoise" can conflict with a lack of personal equipoise (e.g., a personal belief that an intervention is effective). Finally, Zelen's design, which has been used for some RCTs, randomizes subjects before they provide informed consent, which may be ethical for RCTs of screening and selected therapies, but is likely unethical "for most therapeutic trials."
Although subjects almost always provide informed consent for their participation in an RCT, studies since 1982 have documented that RCT subjects may believe that they are certain to receive treatment that is best for them personally; that is, they do not understand the difference between research and treatment. Further research is necessary to determine the prevalence of and ways to address this "therapeutic misconception".
The RCT method variations may also create cultural effects that have not been well understood. For example, patients with terminal illness may join trials in the hope of being cured, even when treatments are unlikely to be successful.
Trial registration
In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that all trials starting enrolment after July 1, 2005, must be registered prior to consideration for publication in one of the 12 member journals of the committee. However, trial registration may still occur late or not at all.
Medical journals have been slow in adapting policies requiring mandatory clinical trial registration as a prerequisite for publication.
Classifications
By study design
One way to classify RCTs is by study design. From most to least common in the healthcare literature, the major categories of RCT study designs are:
Parallel-group – each participant is randomly assigned to a group, and all the participants in the group receive (or do not receive) an intervention.
Crossover – over time, each participant receives (or does not receive) an intervention in a random sequence.
Stepped-wedge trial – involves "random and sequential crossover of clusters (of subjects) from control to intervention until all clusters are exposed." In the past, this design has been called a "waiting list design" or "phased implementation."
Cluster – pre-existing groups of participants (e.g., villages, schools) are randomly selected to receive (or not receive) an intervention.
Factorial – each participant is randomly assigned to a group that receives a particular combination of interventions or non-interventions (e.g., group 1 receives vitamin X and vitamin Y, group 2 receives vitamin X and placebo Y, group 3 receives placebo X and vitamin Y, and group 4 receives placebo X and placebo Y).
An analysis of the 616 RCTs indexed in PubMed during December 2006 found that 78% were parallel-group trials, 16% were crossover, 2% were split-body, 2% were cluster, and 2% were factorial.
By outcome of interest (efficacy vs. effectiveness)
RCTs can be classified as "explanatory" or "pragmatic." Explanatory RCTs test efficacy in a research setting with highly selected participants and under highly controlled conditions. In contrast, pragmatic RCTs (pRCTs) test effectiveness in everyday practice with relatively unselected participants and under flexible conditions; in this way, pragmatic RCTs can "inform decisions about practice."
By hypothesis (superiority vs. noninferiority vs. equivalence)
Another classification of RCTs categorizes them as "superiority trials", "noninferiority trials", and "equivalence trials", which differ in methodology and reporting. Most RCTs are superiority trials, in which one intervention is hypothesized to be superior to another in a statistically significant way. Some RCTs are noninferiority trials "to determine whether a new treatment is no worse than a reference treatment." Other RCTs are equivalence trials in which the hypothesis is that two interventions are indistinguishable from each other.
Randomization
The advantages of proper randomization in RCTs include:
"It eliminates bias in treatment assignment," specifically selection bias and confounding.
"It facilitates blinding (masking) of the identity of treatments from investigators, participants, and assessors."
"It permits the use of probability theory to express the likelihood that any difference in outcome between treatment groups merely indicates chance."
There are two processes involved in randomizing patients to different interventions. First is choosing a randomization procedure to generate an unpredictable sequence of allocations; this may be a simple random assignment of patients to any of the groups at equal probabilities, may be "restricted", or may be "adaptive." A second and more practical issue is allocation concealment, which refers to the stringent precautions taken to ensure that the group assignment of patients is not revealed prior to definitively allocating them to their respective groups. Non-random "systematic" methods of group assignment, such as alternating subjects between one group and the other, can cause "limitless contamination possibilities" and can cause a breach of allocation concealment.
However, empirical evidence that adequate randomization changes outcomes relative to inadequate randomization has been difficult to detect.
Procedures
The treatment allocation is the desired proportion of patients in each treatment arm.
An ideal randomization procedure would achieve the following goals:
Maximize statistical power, especially in subgroup analyses. Generally, equal group sizes maximize statistical power; however, unequal group sizes may be more powerful for some analyses (e.g., multiple comparisons of placebo versus several doses using Dunnett's procedure), and are sometimes desired for non-analytic reasons (e.g., patients may be more motivated to enroll if there is a higher chance of getting the test treatment, or regulatory agencies may require a minimum number of patients exposed to treatment).
Minimize selection bias. This may occur if investigators can consciously or unconsciously preferentially enroll patients into particular treatment arms. A good randomization procedure will be unpredictable so that investigators cannot guess the next subject's group assignment based on prior treatment assignments. The risk of selection bias is highest when previous treatment assignments are known (as in unblinded studies) or can be guessed (perhaps if a drug has distinctive side effects).
Minimize allocation bias (or confounding). This may occur when covariates that affect the outcome are not equally distributed between treatment groups, and the treatment effect is confounded with the effect of the covariates (i.e., an "accidental bias"). If the randomization procedure causes an imbalance in covariates related to the outcome across groups, estimates of effect may be biased if not adjusted for the covariates (which may be unmeasured and therefore impossible to adjust for).
However, no single randomization procedure meets those goals in every circumstance, so researchers must select a procedure for a given study based on its advantages and disadvantages.
Simple
This is a commonly used and intuitive procedure, similar to "repeated fair coin-tossing." Also known as "complete" or "unrestricted" randomization, it is robust against both selection and accidental biases. However, its main drawback is the possibility of imbalanced group sizes in small RCTs. It is therefore recommended only for RCTs with over 200 subjects.
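As a toy illustration of why small trials can end up imbalanced, the following minimal Python sketch assigns each subject by an independent fair coin flip; the group labels are invented for the example.

```python
import random

def simple_randomization(n_subjects, seed=None):
    """Unrestricted randomization: each subject is assigned by an independent fair coin flip."""
    rng = random.Random(seed)
    return [rng.choice(["treatment", "control"]) for _ in range(n_subjects)]

# With only 20 subjects the two groups can easily end up noticeably unequal in size.
groups = simple_randomization(20, seed=3)
print(groups.count("treatment"), groups.count("control"))
```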
Restricted
To balance group sizes in smaller RCTs, some form of "restricted" randomization is recommended. The major types of restricted randomization used in RCTs are:
Permuted-block randomization or blocked randomization: a "block size" and "allocation ratio" (number of subjects in one group versus the other group) are specified, and subjects are allocated randomly within each block. For example, a block size of 6 and an allocation ratio of 2:1 would lead to random assignment of 4 subjects to one group and 2 to the other (a minimal allocation sketch follows this list). This type of randomization can be combined with "stratified randomization", for example by center in a multicenter trial, to "ensure good balance of participant characteristics in each group." A special case of permuted-block randomization is random allocation, in which the entire sample is treated as one block. The major disadvantage of permuted-block randomization is that even if the block sizes are large and randomly varied, the procedure can lead to selection bias. Another disadvantage is that "proper" analysis of data from permuted-block-randomized RCTs requires stratification by blocks.
Adaptive biased-coin randomization methods (of which urn randomization is the most widely known type): In these relatively uncommon methods, the probability of being assigned to a group decreases if the group is overrepresented and increases if the group is underrepresented. The methods are thought to be less affected by selection bias than permuted-block randomization.
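The permuted-block scheme mentioned in the list above can be sketched in a few lines of Python; the block below encodes the 2:1 example (4 subjects to one group and 2 to the other per block of 6), and the group labels are invented for the example.

```python
import random

def permuted_block_allocation(n_subjects, block=("A", "A", "A", "A", "B", "B"), seed=0):
    """Allocate subjects in shuffled blocks so the 2:1 ratio holds after every complete block."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        current = list(block)
        rng.shuffle(current)          # permute the block; assignments remain unpredictable within it
        allocation.extend(current)
    return allocation[:n_subjects]

print(permuted_block_allocation(12))  # exactly 8 "A" and 4 "B" after two complete blocks
```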
Adaptive
At least two types of "adaptive" randomization procedures have been used in RCTs, but much less frequently than simple or restricted randomization:
Covariate-adaptive randomization, of which one type is minimization: The probability of being assigned to a group varies in order to minimize "covariate imbalance." Minimization is reported to have "supporters and detractors"; because only the first subject's group assignment is truly chosen at random, the method does not necessarily eliminate bias on unknown factors.
Response-adaptive randomization, also known as outcome-adaptive randomization: The probability of being assigned to a group increases if the responses of the prior patients in the group were favorable. Although arguments have been made that this approach is more ethical than other types of randomization when the probability that a treatment is effective or ineffective increases during the course of an RCT, ethicists have not yet studied the approach in detail.
Allocation concealment
"Allocation concealment" (defined as "the procedure for protecting the randomization process so that the treatment to be allocated is not known before the patient is entered into the study") is important in RCTs. In practice, clinical investigators in RCTs often find it difficult to maintain impartiality. Stories abound of investigators holding up sealed envelopes to lights or ransacking offices to determine group assignments in order to dictate the assignment of their next patient. Such practices introduce selection bias and confounders (both of which should be minimized by randomization), possibly distorting the results of the study. Adequate allocation concealment should defeat patients and investigators from discovering treatment allocation once a study is underway and after the study has concluded. Treatment related side-effects or adverse events may be specific enough to reveal allocation to investigators or patients thereby introducing bias or influencing any subjective parameters collected by investigators or requested from subjects.
Some standard methods of ensuring allocation concealment include sequentially numbered, opaque, sealed envelopes (SNOSE); sequentially numbered containers; pharmacy controlled randomization; and central randomization. It is recommended that allocation concealment methods be included in an RCT's protocol, and that the allocation concealment methods should be reported in detail in a publication of an RCT's results; however, a 2005 study determined that most RCTs have unclear allocation concealment in their protocols, in their publications, or both. On the other hand, a 2008 study of 146 meta-analyses concluded that the results of RCTs with inadequate or unclear allocation concealment tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective.
Sample size
The number of treatment units (subjects or groups of subjects) assigned to control and treatment groups affects an RCT's reliability. If the effect of the treatment is small, the number of treatment units in either group may be insufficient for rejecting the null hypothesis in the respective statistical test. The failure to reject the null hypothesis would imply that the treatment shows no statistically significant effect on the treated in a given test. But as the sample size increases, the same RCT may be able to demonstrate a significant effect of the treatment, even if this effect is small.
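As a rough illustration, the standard normal-approximation formula for comparing two proportions, n = (z_{1−α/2} + z_{1−β})² · [p₁(1−p₁) + p₂(1−p₂)] / (p₁ − p₂)² per group, shows how the required sample size grows as the expected effect shrinks. The following minimal Python sketch applies it; the effect sizes in the example are invented.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size to detect p1 vs p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the type I error rate
    z_beta = NormalDist().inv_cdf(power)            # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A small treatment effect (10% vs 12%) needs far more subjects per group than a large one (10% vs 30%).
print(sample_size_two_proportions(0.10, 0.12))   # on the order of thousands per group
print(sample_size_two_proportions(0.10, 0.30))   # on the order of dozens per group
```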
Blinding
An RCT may be blinded (also called "masked") by "procedures that prevent study participants, caregivers, or outcome assessors from knowing which intervention was received." Unlike allocation concealment, blinding is sometimes inappropriate or impossible to perform in an RCT; for example, if an RCT involves a treatment in which active participation of the patient is necessary (e.g., physical therapy), participants cannot be blinded to the intervention.
Traditionally, blinded RCTs have been classified as "single-blind", "double-blind", or "triple-blind"; however, in 2001 and 2006 two studies showed that these terms have different meanings for different people. The 2010 CONSORT Statement specifies that authors and editors should not use the terms "single-blind", "double-blind", and "triple-blind"; instead, reports of blinded RCT should discuss "If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how."
RCTs without blinding are referred to as "unblinded", "open", or (if the intervention is a medication) "open-label". In 2008 a study concluded that the results of unblinded RCTs tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective; for example, in an RCT of treatments for multiple sclerosis, unblinded neurologists (but not the blinded neurologists) felt that the treatments were beneficial. In pragmatic RCTs, although the participants and providers are often unblinded, it is "still desirable and often possible to blind the assessor or obtain an objective source of data for evaluation of outcomes."
Analysis of data
The types of statistical methods used in RCTs depend on the characteristics of the data and include:
For dichotomous (binary) outcome data, logistic regression (e.g., to predict sustained virological response after receipt of peginterferon alfa-2a for hepatitis C) and other methods can be used.
For continuous outcome data, analysis of covariance (e.g., for changes in blood lipid levels after receipt of atorvastatin after acute coronary syndrome) tests the effects of predictor variables.
For time-to-event outcome data that may be censored, survival analysis (e.g., Kaplan–Meier estimators and Cox proportional hazards models for time to coronary heart disease after receipt of hormone replacement therapy in menopause) is appropriate.
Regardless of the statistical methods used, important considerations in the analysis of RCT data include:
Whether an RCT should be stopped early due to interim results. For example, RCTs may be stopped early if an intervention produces "larger than expected benefit or harm", or if "investigators find evidence of no important difference between experimental and control interventions."
The extent to which the groups can be analyzed exactly as they existed upon randomization (i.e., whether a so-called "intention-to-treat analysis" is used). A "pure" intention-to-treat analysis is "possible only when complete outcome data are available" for all randomized subjects; when some outcome data are missing, options include analyzing only cases with known outcomes and using imputed data. Nevertheless, the more that analyses can include all participants in the groups to which they were randomized, the less bias that an RCT will be subject to.
Whether subgroup analysis should be performed. These are "often discouraged" because multiple comparisons may produce false positive findings that cannot be confirmed by other studies.
Reporting of results
The CONSORT 2010 Statement is "an evidence-based, minimum set of recommendations for reporting RCTs." The CONSORT 2010 checklist contains 25 items (many with sub-items) focusing on "individually randomised, two group, parallel trials" which are the most common type of RCT.
For other RCT study designs, "CONSORT extensions" have been published, some examples are:
Consort 2010 Statement: Extension to Cluster Randomised Trials
Consort 2010 Statement: Non-Pharmacologic Treatment Interventions
"Reporting of surrogate endpoints in randomised controlled trial reports (CONSORT-Surrogate): extension checklist with explanation and elaboration"
Relative importance and observational studies
Two studies published in The New England Journal of Medicine in 2000 found that observational studies and RCTs overall produced similar results. The authors of the 2000 findings questioned the belief that "observational studies should not be used for defining evidence-based medical care" and that RCTs' results are "evidence of the highest grade." However, a 2001 study published in Journal of the American Medical Association concluded that "discrepancies beyond chance do occur and differences in estimated magnitude of treatment effect are very common" between observational studies and RCTs. According to a 2014 (updated in 2024) Cochrane review, there is little evidence for significant effect differences between observational studies and randomized controlled trials. To evaluate differences it is necessary to consider things other than design, such as heterogeneity, population, intervention or comparator.
Two other lines of reasoning question RCTs' contribution to scientific knowledge beyond other types of studies:
If study designs are ranked by their potential for new discoveries, then anecdotal evidence would be at the top of the list, followed by observational studies, followed by RCTs.
RCTs may be unnecessary for treatments that have dramatic and rapid effects relative to the expected stable or progressively worse natural course of the condition treated. One example is combination chemotherapy including cisplatin for metastatic testicular cancer, which increased the cure rate from 5% to 60% in a 1977 non-randomized study.
Interpretation of statistical results
Like all statistical methods, RCTs are subject to both type I ("false positive") and type II ("false negative") statistical errors. Regarding Type I errors, a typical RCT will use 0.05 (i.e., 1 in 20) as the probability that the RCT will falsely find two equally effective treatments significantly different. Regarding Type II errors, despite the publication of a 1978 paper noting that the sample sizes of many "negative" RCTs were too small to make definitive conclusions about the negative results, by 2005-2006 a sizeable proportion of RCTs still had inaccurate or incompletely reported sample size calculations.
Peer review
Peer review of results is an important part of the scientific method. Reviewers examine the study results for potential problems with design that could lead to unreliable results (for example by creating a systematic bias), evaluate the study in the context of related studies and other evidence, and evaluate whether the study can be reasonably considered to have proven its conclusions. To underscore the need for peer review and the danger of overgeneralizing conclusions, two Boston-area medical researchers performed a randomized controlled trial in which they randomly assigned either a parachute or an empty backpack to 23 volunteers who jumped from either a biplane or a helicopter. The study was able to accurately report that parachutes fail to reduce injury compared to empty backpacks. The key context that limited the general applicability of this conclusion was that the aircraft were parked on the ground, and participants had only jumped about two feet.
Advantages
RCTs are considered to be the most reliable form of scientific evidence in the hierarchy of evidence that influences healthcare policy and practice because RCTs reduce spurious causality and bias. Results of RCTs may be combined in systematic reviews which are increasingly being used in the conduct of evidence-based practice. Some examples of scientific organizations' considering RCTs or systematic reviews of RCTs to be the highest-quality evidence available are:
As of 1998, the National Health and Medical Research Council of Australia designated "Level I" evidence as that "obtained from a systematic review of all relevant randomised controlled trials" and "Level II" evidence as that "obtained from at least one properly designed randomised controlled trial."
Since at least 2001, in making clinical practice guideline recommendations the United States Preventive Services Task Force has considered both a study's design and its internal validity as indicators of its quality. It has recognized "evidence obtained from at least one properly randomized controlled trial" with good internal validity (i.e., a rating of "I-good") as the highest quality evidence available to it.
The GRADE Working Group concluded in 2008 that "randomised trials without important limitations constitute high quality evidence."
For issues involving "Therapy/Prevention, Aetiology/Harm", the Oxford Centre for Evidence-based Medicine as of 2011 defined "Level 1a" evidence as a systematic review of RCTs that are consistent with each other, and "Level 1b" evidence as an "individual RCT (with narrow Confidence Interval)."
Notable RCTs with unexpected results that contributed to changes in clinical practice include:
After Food and Drug Administration approval, the antiarrhythmic agents flecainide and encainide came to market in 1986 and 1987 respectively. The non-randomized studies concerning the drugs were characterized as "glowing", and their sales increased to a combined total of approximately 165,000 prescriptions per month in early 1989. In that year, however, a preliminary report of an RCT concluded that the two drugs increased mortality. Sales of the drugs then decreased.
Prior to 2002, based on observational studies, it was routine for physicians to prescribe hormone replacement therapy for post-menopausal women to prevent myocardial infarction. In 2002 and 2004, however, published RCTs from the Women's Health Initiative claimed that women taking hormone replacement therapy with estrogen plus progestin had a higher rate of myocardial infarctions than women on a placebo, and that estrogen-only hormone replacement therapy caused no reduction in the incidence of coronary heart disease. Possible explanations for the discrepancy between the observational studies and the RCTs involved differences in methodology, in the hormone regimens used, and in the populations studied. The use of hormone replacement therapy decreased after publication of the RCTs.
Disadvantages
Many papers discuss the disadvantages of RCTs. Among the most frequently cited drawbacks are:
Time and costs
RCTs can be expensive; one study found 28 Phase III RCTs funded by the National Institute of Neurological Disorders and Stroke prior to 2000 with a total cost of US$335 million, for a mean cost of US$12 million per RCT. Nevertheless, the return on investment of RCTs may be high, in that the same study projected that the 28 RCTs produced a "net benefit to society at 10-years" of 46 times the cost of the trials program, based on evaluating a quality-adjusted life year as equal to the prevailing mean per capita gross domestic product.
The conduct of an RCT takes several years until publication; thus, data are withheld from the medical community for many years and may be of less relevance at the time of publication.
It is costly to maintain RCTs for the years or decades that would be ideal for evaluating some interventions.
Interventions to prevent events that occur only infrequently (e.g., sudden infant death syndrome) and uncommon adverse outcomes (e.g., a rare side effect of a drug) would require RCTs with extremely large sample sizes and may, therefore, best be assessed by observational studies.
Because of the cost of running RCTs, they usually examine only one or a few variables and rarely reflect the full picture of a complicated medical situation, whereas a case report, for example, can detail many aspects of a patient's medical situation (e.g. patient history, physical examination, diagnosis, psychosocial aspects, follow-up).
Conflict of interest dangers
A 2011 study done to disclose possible conflicts of interest in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interest in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and 3 from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed an aggregate of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219 (69%) industry funded. 132 of the 509 RCTs reported author conflict of interest disclosures, with 91 studies (69%) disclosing industry financial ties with one or more authors. The information was, however, seldom reflected in the meta-analyses: only two (7%) reported RCT funding sources, and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised."
Some RCTs are fully or partly funded by the health care industry (e.g., the pharmaceutical industry) as opposed to government, nonprofit, or other sources. A systematic review published in 2003 found four 1986–2002 articles comparing industry-sponsored and nonindustry-sponsored RCTs, and in all the articles there was a correlation between industry sponsorship and positive study outcomes. A 2004 study of 1999–2001 RCTs published in leading medical and surgical journals determined that industry-funded RCTs "are more likely to be associated with statistically significant pro-industry findings." These results have been mirrored in surgical trials, where, although industry funding did not affect the rate of trial discontinuation, it was associated with lower odds of publication for completed trials. One possible reason for the pro-industry results in industry-funded published RCTs is publication bias. Other authors have cited the differing goals of academic and industry-sponsored research as contributing to the difference: commercial sponsors may be more focused on performing trials of drugs that have already shown promise in early-stage trials and on replicating previous positive results to fulfill regulatory requirements for drug approval.
Ethics
If a disruptive innovation in medical technology is developed, it may be difficult to test it ethically in an RCT if it becomes "obvious" that the control subjects have poorer outcomes, whether because of earlier testing or within the initial phase of the RCT itself. Ethically, it may be necessary to abort the RCT prematurely, and obtaining ethics approval (and patient agreement) to withhold the innovation from the control group in future RCTs may not be feasible.
Historical control trials (HCT) exploit the data of previous RCTs to reduce the sample size; however, these approaches are controversial in the scientific community and must be handled with care.
In social science
The use of RCTs in the social sciences is contested, partly because of their relatively recent adoption in those fields. Some writers from a medical or health background have argued that existing research in a range of social science disciplines lacks rigour and would be improved by greater use of randomized controlled trials.
Transport science
Researchers in transport science argue that public spending on programmes such as school travel plans could not be justified unless their efficacy is demonstrated by randomized controlled trials. Graham-Rowe and colleagues reviewed 77 evaluations of transport interventions found in the literature, categorising them into 5 "quality levels". They concluded that most of the studies were of low quality and advocated the use of randomized controlled trials wherever possible in future transport research.
Dr. Steve Melia took issue with these conclusions, arguing that claims about the advantages of RCTs, in establishing causality and avoiding bias, have been exaggerated. He proposed the following eight criteria for the use of RCTs in contexts where interventions must change human behaviour to be effective:
The intervention:
Has not been applied to all members of a unique group of people (e.g. the population of a whole country, all employees of a unique organisation etc.)
Is applied in a context or setting similar to that which applies to the control group
Can be isolated from other activities—and the purpose of the study is to assess this isolated effect
Has a short timescale between its implementation and maturity of its effects
And the causal mechanisms:
Are either known to the researchers, or else all possible alternatives can be tested
Do not involve significant feedback mechanisms between the intervention group and external environments
Have a stable and predictable relationship to exogenous factors
Would act in the same way if the control group and intervention group were reversed
Criminology
A 2005 review found 83 randomized experiments in criminology published in 1982–2004, compared with only 35 published in 1957–1981. The authors classified the studies they found into five categories: "policing", "prevention", "corrections", "court", and "community". Focusing only on offending behavior programs, Hollin (2008) argued that RCTs may be difficult to implement (e.g., if an RCT required "passing sentences that would randomly assign offenders to programmes") and therefore that experiments with quasi-experimental design are still necessary.
Education
RCTs have been used in evaluating a number of educational interventions; between 1980 and 2016, more than 1,000 reports of such trials were published. For example, a 2009 study randomized 260 elementary school teachers' classrooms to receive or not receive a program of behavioral screening, classroom intervention, and parent training, and then measured the behavioral and academic performance of their students. Another 2009 study randomized classrooms for 678 first-grade children to receive a classroom-centered intervention, a parent-centered intervention, or no intervention, and then followed their academic outcomes through age 19.
Criticism
A 2018 review of the 10 most cited randomised controlled trials noted poor distribution of background traits, difficulties with blinding, and discussed other assumptions and biases inherent in randomised controlled trials. These include the "unique time period assessment bias", the "background traits remain constant assumption", the "average treatment effects limitation", the "simple treatment at the individual level limitation", the "all preconditions are fully met assumption", the "quantitative variable limitation" and the "placebo only or conventional treatment only limitation".
See also
Drug development
Hypothesis testing
Impact evaluation
Jadad scale
Pipeline planning
Patient and public involvement
Observational study
Blinded experiment
Statistical inference
Royal Commission on Animal Magnetism – 1784 French scientific bodies' investigations involving systematic controlled trials
References
Further reading
Scientific control
https://en.wikipedia.org/wiki/Scientific_control
A scientific control is an element of an experiment or observation designed to minimize the influence of variables other than the independent variable under investigation, thereby reducing the risk of confounding.
The use of controls increases the reliability and validity of results by providing a baseline for comparison between experimental measurements and control measurements. In many designs, the control group does not receive the experimental treatment, allowing researchers to isolate the effect of the independent variable.
Scientific controls are a fundamental part of the scientific method, particularly in fields such as biology, chemistry, medicine, and psychology, where complex systems are subject to multiple interacting variables.
Controlled experiments
Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality.
For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, the artificial sweetener might be mixed with a dilutant and it might be the dilutant that causes the effect. To control for the effect of the dilutant, the same test is run twice: once with the artificial sweetener in the dilutant, and once in exactly the same way but using the dilutant alone. Now the experiment is controlled for the dilutant and the experimenter can distinguish between sweetener, dilutant, and non-treatment. Controls are most often necessary where a confounding factor cannot easily be separated from the primary treatments. For example, it may be necessary to use a tractor to spread fertilizer where there is no other practicable way to spread fertilizer. The simplest solution is to have a treatment where a tractor is driven over plots without spreading fertilizer, and in that way the effects of tractor traffic are controlled.
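A minimal simulation of this three-arm design (sweetener plus dilutant, dilutant alone, and untreated) might look like the following sketch; the per-animal sickness probabilities are invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical per-animal sickness probabilities for each arm; invented for illustration only.
arms = {
    "sweetener + dilutant": 0.17,  # roughly 10 of 60 rats sick, as in the text's example
    "dilutant only": 0.15,         # a similar rate here would implicate the dilutant, not the sweetener
    "untreated": 0.03,
}
n_per_arm = 60

def count_sick(p_sick: float, n: int) -> int:
    """Number of animals out of n that become sick under probability p_sick."""
    return sum(random.random() < p_sick for _ in range(n))

for name, p in arms.items():
    print(f"{name:22s}: {count_sick(p, n_per_arm)}/{n_per_arm} sick")
```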
The simplest types of control are negative and positive controls, and both are found in many different types of experiments. When both are successful, these two controls are usually sufficient to eliminate most potential confounding variables: the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected. Other controls include vehicle controls, sham controls and comparative controls.
Confounding
Confounding is a critical issue in observational studies because it can lead to biased or misleading conclusions about relationships between variables. A confounder is an extraneous variable that is related to both the independent variable (treatment or exposure) and the dependent variable (outcome), potentially distorting the true association. If confounding is not properly accounted for, researchers might incorrectly attribute an effect to the exposure when it is actually due to another factor. This can result in incorrect policy recommendations, ineffective interventions, or flawed scientific understanding. For example, in a study examining the relationship between physical activity and heart disease, failure to control for diet, a potential confounder, could lead to an overestimation or underestimation of the true effect of exercise.
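The exercise and diet example can be made concrete with a small simulation in which diet (the confounder) influences both exercise and heart disease; comparing the crude estimate with diet-stratified estimates shows how ignoring the confounder exaggerates the apparent effect of exercise. All effect sizes are invented for illustration:

```python
import random

random.seed(1)
n = 200_000

# counts[(exercises, healthy_diet)] = [number with heart disease, group size]
counts = {(e, d): [0, 0] for e in (True, False) for d in (True, False)}

for _ in range(n):
    healthy_diet = random.random() < 0.5
    exercises = random.random() < (0.7 if healthy_diet else 0.3)  # diet (confounder) drives exercise
    p = 0.05 + (0.0 if healthy_diet else 0.10) + (0.0 if exercises else 0.02)  # true exercise effect: 0.02
    counts[(exercises, healthy_diet)][0] += random.random() < p
    counts[(exercises, healthy_diet)][1] += 1

def risk(exercises, healthy_diet=None):
    """Disease risk, crude (diet ignored) or within a diet stratum."""
    keys = [(exercises, d) for d in (True, False)] if healthy_diet is None else [(exercises, healthy_diet)]
    sick = sum(counts[k][0] for k in keys)
    total = sum(counts[k][1] for k in keys)
    return sick / total

print(f"crude risk difference (diet ignored): {risk(False) - risk(True):.3f}")  # exaggerated
for diet in (True, False):
    print(f"risk difference within diet={diet!s:5s} stratum: {risk(False, diet) - risk(True, diet):.3f}")  # ~0.02
```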
Falsification tests are a robustness-checking technique used in observational studies to assess whether observed associations are likely due to confounding, bias, or model misspecification rather than a true causal effect. These tests help validate findings by applying the same analytical approach to a scenario where no effect is expected. If an association still appears where none should exist, it raises concerns that the primary analysis may suffer from confounding or other biases.
Negative controls are one type of falsification test. The need for negative controls usually arises in observational studies, when the study design can be questioned because of a potential confounding mechanism. A negative control test can reject a study design, but it cannot validate it, either because there might be another confounding mechanism or because of low statistical power. Negative controls are increasingly used in the epidemiology literature and also show promise in social science fields such as economics. Negative controls are divided into two main categories: Negative Control Exposures (NCEs) and Negative Control Outcomes (NCOs).
Lousdal et al. examined the effect of screening participation on death from breast cancer. They hypothesized that screening participants are healthier than non-participants and therefore already have a lower risk of breast-cancer death at baseline. They consequently used proxies for better health as negative-control outcomes (NCOs) and proxies for healthier behavior as negative-control exposures (NCEs). Death from causes other than breast cancer was taken as the NCO, since it is an outcome of better health that is not affected by breast-cancer screening. Dental care participation was taken as the NCE, since it is assumed to be a good proxy for health-attentive behavior.
Negative control
Negative controls are variables that are meant to help when the study design is suspected to be invalid because of unmeasured confounders that are correlated with both the treatment and the outcome. Where there are only two possible outcomes, e.g. positive or negative, if the treatment group and the negative control (non-treatment group) both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable is involved in the phenomenon under study, and the positive results are not solely due to the treatment.
In other examples, outcomes might be measured as lengths, times, percentages, and so forth. In the drug testing example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect, and this result sets the baseline that the treatment must improve upon. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group.
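A hedged sketch of the comparison described above, using a standard chi-squared test of independence from SciPy on invented cure counts; the treatment is judged effective only if it improves on the placebo baseline:

```python
from scipy.stats import chi2_contingency

# Invented cure counts; the treatment is judged against the placebo baseline, not against zero.
cured_treatment, n_treatment = 45, 100
cured_placebo, n_placebo = 30, 100   # the placebo effect alone cures some patients

table = [
    [cured_treatment, n_treatment - cured_treatment],
    [cured_placebo, n_placebo - cured_placebo],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"treatment cure rate: {cured_treatment / n_treatment:.0%}")
print(f"placebo cure rate:   {cured_placebo / n_placebo:.0%}")
print(f"p-value for improvement beyond the placebo baseline: {p_value:.3f}")
```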
Negative Control Exposure (NCE)
An NCE is a variable that should not causally affect the outcome but may suffer from the same confounding as the exposure–outcome relationship in question. A priori, there should be no statistical association between the NCE and the outcome. If an association is found, it must operate through the unmeasured confounder; and since the NCE and the treatment share the same confounding mechanism, there is an alternative path, apart from the direct path from the treatment to the outcome. In that case, the study design is invalid.
For example, Yerushalmy used the husband's smoking as an NCE. The exposure was maternal smoking; the outcomes were various birth factors, such as the incidence of low birth weight, length of pregnancy, and neonatal mortality rates. The husband's smoking is assumed to share common confounders, such as household health lifestyle, with the pregnant woman's smoking, but it does not causally affect the development of the fetus. Nonetheless, Yerushalmy found a statistical association, which cast doubt on the proposition that cigarette smoking causally interferes with the intrauterine development of the fetus.
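In practice, the NCE check reduces to testing for an association between the negative control exposure and the outcome. The following sketch uses invented counts and a chi-squared test; the specific numbers are not Yerushalmy's data:

```python
from scipy.stats import chi2_contingency

# Invented 2x2 counts: rows = husband smokes / does not, columns = low birth weight / normal weight.
table = [
    [80, 920],   # husband smokes
    [50, 950],   # husband does not smoke
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"NCE-outcome association p-value: {p_value:.4f}")
# A clear association here, despite no plausible direct causal path from the husband's
# smoking to the fetus, points to shared confounding (e.g. household lifestyle) and
# casts doubt on the design used for the maternal-smoking analysis.
```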
Differences Between Negative Control Exposures and Placebo
The term negative control is used when the study is based on observations, whereas a placebo is used as the non-treatment condition in randomized controlled trials.
Negative Control Outcome (NCO)
Negative Control Outcomes are the more popular type of negative control. An NCO is a variable that is not causally affected by the treatment but is suspected to share a similar confounding mechanism with the treatment–outcome relationship. If the study design is valid, there should be no statistical association between the NCO and the treatment; an association between them therefore suggests that the design is invalid.
For example, Jackson et al. used mortality from all causes outside of the influenza season as an NCO in a study examining the influenza vaccine's effect on influenza-related deaths. A possible confounding mechanism is health status and lifestyle: people who are healthier in general also tend to receive the influenza vaccine. Jackson et al. found a preferential receipt of the vaccine by relatively healthy seniors, and that differences in health status between vaccinated and unvaccinated groups lead to bias in estimates of influenza vaccine effectiveness. In a similar example, when studying the impact of air pollutants on asthma hospital admissions, Sheppard et al. used non-elderly appendicitis hospital admissions as an NCO.
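The healthy-user mechanism in the Jackson et al. example can be illustrated with a small simulation in which an unmeasured health status drives both vaccination and off-season mortality; the parameters are invented and do not reproduce the published estimates:

```python
import random

random.seed(2)
n = 100_000
nco_deaths = {True: [0, 0], False: [0, 0]}   # vaccinated -> [off-season deaths, group size]

for _ in range(n):
    healthy = random.random() < 0.5                           # unmeasured health status
    vaccinated = random.random() < (0.7 if healthy else 0.3)  # healthier people vaccinate more often
    # Off-season, non-influenza mortality (the NCO) depends on health status but,
    # by design, cannot be caused by the vaccine.
    died_off_season = random.random() < (0.01 if healthy else 0.04)
    nco_deaths[vaccinated][0] += died_off_season
    nco_deaths[vaccinated][1] += 1

for label, vaccinated in (("vaccinated", True), ("unvaccinated", False)):
    deaths, total = nco_deaths[vaccinated]
    print(f"{label:12s}: off-season mortality {deaths / total:.2%}")
# The vaccinated group shows lower off-season mortality even though the vaccine cannot
# affect it, exposing the healthy-user confounding described in the text.
```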
Formal Conditions
Given a treatment $A$ and an outcome $Y$, in the presence of a set of control variables $X$ and an unmeasured confounder $U$ for the $A$–$Y$ relationship, Shi et al. presented formal conditions for a negative control outcome $\tilde{Y}$:
1. Stable Unit Treatment Value Assumption (SUTVA): holds for both $Y$ and $\tilde{Y}$ with regard to $A=a$.
2. Latent Exchangeability: $Y^{A=a} \perp A \mid X, U$. Given $X$ and $U$, the potential outcome $Y^{A=a}$ is independent of the treatment.
3. Irrelevancy: the treatment is irrelevant to the NCO, $\tilde{Y}^{A=a} = \tilde{Y}^{A=a'} = \tilde{Y} \mid U, X$; there is no causal effect of $A$ on $\tilde{Y}$ given $X$ and $U$. Consequently, $\tilde{Y} \perp A \mid U, X$: the NCO is independent of the treatment given $X$ and $U$.
4. U-Comparability: $\tilde{Y} \not\perp U \mid X$. The unmeasured confounders $U$ of the association between $A$ and $Y$ are the same as those of the association between $A$ and $\tilde{Y}$.
Given assumptions 1–4, a non-null association between $A$ and $\tilde{Y}$ can only be explained by $U$, and not by another mechanism. Latent Exchangeability would be violated if, for example, only people who stand to benefit from a medicine take it, even when both $X$ and $U$ are the same. In the influenza example, we would expect that, given age and medical history ($X$) and general health awareness ($U$), receipt of the influenza vaccine $A$ is independent of the potential influenza-related deaths $Y^{A=a}$. Otherwise, the Latent Exchangeability assumption is violated and no identification can be made.
A violation of Irrelevancy occurs when there is a causal effect of $A$ on $\tilde{Y}$. For example, we would expect that, given $X$ and $U$, the influenza vaccine does not influence all-cause mortality. If, however, during the vaccination visit the physician also performs a general physical examination, recommends good health habits, and prescribes vitamins and essential drugs, then there is likely a causal effect of $A$ on $\tilde{Y}$ (conditional on $X$ and $U$). In that case, $\tilde{Y}$ cannot be used as an NCO, because the test might fail even if the causal design is valid.
U-Comparability is violated when $\tilde{Y} \perp U$, in which case the lack of association between $A$ and $\tilde{Y}$ provides no evidence about the validity of the study design. This violation occurs when a poor NCO is chosen, one that is uncorrelated or only weakly correlated with the unmeasured confounders.
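The practical consequence of U-Comparability can be illustrated with a short simulation: an NCO that shares the unmeasured confounder with the treatment shows a non-null association and flags the confounded design, whereas an NCO unrelated to the confounder does not. All probabilities are invented:

```python
import random

random.seed(3)
n = 200_000

def risk_difference(records):
    """Crude difference in outcome frequency between treated (a=1) and untreated (a=0)."""
    tallies = {0: [0, 0], 1: [0, 0]}
    for a, y in records:
        tallies[a][0] += y
        tallies[a][1] += 1
    return tallies[1][0] / tallies[1][1] - tallies[0][0] / tallies[0][1]

comparable_nco, unrelated_nco = [], []
for _ in range(n):
    u = random.random() < 0.5                            # unmeasured confounder U
    a = random.random() < (0.7 if u else 0.3)            # treatment depends on U
    nco_good = random.random() < (0.10 if u else 0.20)   # NCO sharing U (U-comparable)
    nco_poor = random.random() < 0.15                    # NCO unrelated to U (violates U-Comparability)
    comparable_nco.append((int(a), int(nco_good)))
    unrelated_nco.append((int(a), int(nco_poor)))

print(f"U-comparable NCO vs treatment: {risk_difference(comparable_nco):+.3f}  (non-null, flags confounding)")
print(f"U-unrelated NCO vs treatment:  {risk_difference(unrelated_nco):+.3f}  (near zero, misses it)")
```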
Positive control
Positive controls are often used to assess test validity. For example, to assess a new test's ability to detect a disease (its sensitivity), it can be compared against a different test that is already known to work. The well-established test is a positive control, since we already know that the answer to the question (whether the test works) is yes.
Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity.
If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also help in comparison to previous experimental results. For example, if the well-established disease test was determined to have the same effect as found by previous experimenters, this indicates that the experiment is being performed in the same way that the previous experimenters did.
When possible, multiple positive controls may be used—if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme.
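A minimal sketch of such a calibration: fit a straight-line standard curve to the known enzyme quantities (the positive controls) and use it to estimate an unknown sample. All values are invented:

```python
import numpy as np

# Known quantities of purified enzyme (the positive controls) and their measured activities;
# all numbers are invented for illustration.
known_quantity = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # e.g. micrograms of enzyme
measured_activity = np.array([0.02, 0.21, 0.39, 0.83, 1.58])  # e.g. absorbance units

# Fit a straight-line standard curve: activity ~ slope * quantity + intercept.
slope, intercept = np.polyfit(known_quantity, measured_activity, 1)

# Use the curve to estimate the enzyme quantity in an unknown extract from its activity.
unknown_activity = 0.60
estimated_quantity = (unknown_activity - intercept) / slope
print(f"standard curve: activity = {slope:.3f} * quantity + {intercept:.3f}")
print(f"estimated enzyme quantity in unknown sample: {estimated_quantity:.2f}")
```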
Randomization
In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that any differences are due to chance rather than to systematic bias, thus correcting for systematic errors.
For example, in experiments where crop yield is affected (e.g. soil fertility), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield.
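A simple sketch of randomly assigning field plots to treatments; the plot labels and treatment names are hypothetical:

```python
import random

random.seed(4)

plots = [f"plot-{i:02d}" for i in range(1, 13)]                       # twelve hypothetical field plots
treatments = ["fertilizer", "tractor-only control", "untreated"] * 4  # equal group sizes

random.shuffle(plots)   # random assignment spreads soil differences across the treatment groups
assignment = dict(zip(plots, treatments))
for plot in sorted(assignment):
    print(f"{plot}: {assignment[plot]}")
```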
Blind experiments
Blinding is the practice of withholding information that may bias an experiment. For example, participants may not know who received an active treatment and who received a placebo. If this information were to become available to trial participants, patients could receive a larger placebo effect, researchers could influence the experiment to meet their expectations (the observer effect), and evaluators could be subject to confirmation bias. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, sham surgery may be necessary to achieve blinding.
During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments and must be measured and reported. Meta-research has revealed high levels of unblinding in pharmacological trials. In particular, antidepressant trials are poorly blinded. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding.
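One deliberately simplified way to assess unblinding is to ask participants to guess their assignment and compare the proportion of correct guesses with the 50% expected under successful blinding; more formal blinding indices exist, but the binomial check below conveys the idea. The counts are invented:

```python
from scipy.stats import binomtest

# Invented counts: 130 of 200 participants correctly guess their assignment at trial end.
correct_guesses, total_guesses = 130, 200

result = binomtest(correct_guesses, total_guesses, p=0.5, alternative="greater")
print(f"correct-guess rate: {correct_guesses / total_guesses:.0%}")
print(f"p-value against 50% chance guessing: {result.pvalue:.4f}")
```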
Blinding is an important tool of the scientific method and is used in many fields of research. In some fields, such as medicine, it is considered essential. In clinical research, a trial that is not blinded is called an open trial.
See also
False positives and false negatives
Designed experiment
Controlling for a variable
James Lind
Randomized controlled trial
Wait list control group
References
External links
"Control" . Encyclopædia Britannica. Vol. 7 (11th ed.). 1911.