Glossary of terms

This glossary has two parts: an A - Z of terms and an explanation of the Summary of Findings table contained in SUPPORT Summaries. The A - Z of terms is based on the glossary contained in the Cochrane Collaboration Handbook for Systematic Reviews of Interventions and is used with permission. The Cochrane Glossary is updated from time to time so this version may not be exactly the same as that used by the Cochrane Collaboration. The full version of the Cochrane Glossary is at

Explanation of the Summary of Findings table

This document explains each of the sections of the Summary of Findings table and can be viewed as a PDF.


A - Z of terms

Absolute risk difference
See risk difference.

Absolute risk reduction
See risk difference.

Additive model
A statistical model in which the combined effect of several factors is the sum of the effects produced by each of the factors in the absence of the others. For example, if one factor increases risk by a% and a second factor by b%, the additive combined effect of the two factors is (a + b)%. See also multiplicative model.

 Adjusted analysis
An analysis that controls (adjusts) for baseline imbalances in important patient characteristics.

 Adverse effect
An adverse event for which the causal relation between the drug/intervention and the event is at least a reasonable possibility. The term ‘adverse effect’ applies to all interventions, while ‘adverse drug reaction’ (ADR) is used only with drugs. In the case of drugs an adverse effect tends to be seen from the point of view of the drug and an adverse reaction is seen from the point of view of the patient.

 Adverse event
An adverse outcome that occurs during or after the use of a drug or other intervention but is not necessarily caused by it. See also: Adverse effect

Adverse reaction
See adverse effect.

 Aggregate data
Data summarised by groups, for example summary outcome data for treatment and control groups in a controlled trial.

Allocation concealment
See concealment of allocation.

Applicability (synonyms: external validity, generalisability, relevance, transferability)
The degree to which the results of an observation, trial or review hold true in other settings.

Arm
[In a controlled trial:] A group of participants allocated a particular treatment. In a randomised controlled trial, allocation to different arms is determined by the randomisation procedure. Many controlled trials have two arms: a group of participants assigned to an experimental intervention (sometimes called the treatment arm) and a group of participants assigned to a control (the control arm). Trials may have more than two arms, with more than one experimental arm and/or more than one control arm.

Association
A relationship between two characteristics, such that as one changes, the other changes in a predictable way. For example, statistics demonstrate that there is an association between smoking and lung cancer. In a positive association, one quantity increases as the other one increases (as with smoking and lung cancer). In a negative association, an increase in one quantity corresponds to a decrease in the other. Association does not necessarily imply a causal effect. (Also called correlation.)

Attrition
The loss of participants during the course of a study. (Also called loss to follow up.) Participants who are lost during the study are often called dropouts.

 Attrition bias
Systematic differences between comparison groups in withdrawals or exclusions of participants from the results of a trial. For example, patients may drop out of a trial because of side effects of the intervention. Excluding these patients from the analysis could result in an overestimate of the effectiveness of the intervention.



Baseline characteristics
Values of demographic, clinical and other variables collected for each participant at the beginning of a trial, before the intervention is administered.

Bias
[In statistics:] A systematic error or deviation in results or inferences from the truth. In studies of the effects of health care, the main types of bias arise from systematic differences in the groups that are compared (selection bias), the care that is provided, exposure to other factors apart from the intervention of interest (performance bias), withdrawals or exclusions of people entered into a study (attrition bias) or how outcomes are assessed (detection bias). Reviews of studies may also be particularly affected by reporting bias, where a biased subset of all the relevant data is available.

 Bias prevention
Aspects of the design or conduct of a study designed to prevent bias. For controlled trials, such aspects include randomisation, blinding and concealment of allocation.

 Blinding (synonym: masking)
[In a controlled trial:] The process of preventing those involved in a trial from knowing to which comparison group a particular participant belongs. The risk of bias is minimised when as few people as possible know who is receiving the experimental intervention and who the control intervention. Participants, caregivers, outcome assessors, and analysts are all candidates for being blinded.  Blinding of certain groups is not always possible, for example surgeons in surgical trials. The terms single blind, double blind and triple blind are in common use, but are not used consistently and so are ambiguous unless the specific people who are blinded are listed.  (Also called masking.)



Case series
A study reporting observations on a series of individuals, usually all receiving the same intervention, with no control group.

 Case study
A study reporting observations on a single individual.  

 Case-control study 
A study that compares people with a specific disease or outcome of interest (cases) to people from the same population without that disease or outcome (controls), and which seeks to find associations between the outcome and prior exposure to particular risk factors. This design is particularly useful where the outcome is rare and past exposure can be reliably measured. Case-control studies are usually retrospective, but not always.

Causal effect
An association between two characteristics that can be demonstrated to be due to cause and effect, i.e. a change in one causes the change in the other. Causality can be demonstrated by experimental studies such as controlled trials (for example, that an experimental intervention causes a reduction in mortality). However, causality can often not be determined from an observational study.

CI
See confidence interval.

CINAHL (Cumulative Index of Nursing and Allied Health Literature)
Electronic database covering the major journals in nursing and allied health. Years of coverage: 1983 - present.

Clinical guideline
A systematically developed statement for practitioners and patients about appropriate health care for specific clinical circumstances.

 Clinical trial
An experiment to compare the effects of two or more healthcare interventions. Clinical trial is an umbrella term for a variety of designs of healthcare trials, including uncontrolled trials, controlled trials, and randomised controlled trials. (Also called intervention study.)

Clinically significant
A result (e.g. a treatment effect) that is large enough to be of practical importance to patients and healthcare providers. This is not the same thing as statistically significant. Assessing clinical significance takes into account factors such as the size of a treatment effect, the severity of the condition being treated, the side effects of the treatment, and the cost. For instance, if the estimated effect of a treatment for acne was small but statistically significant, but the treatment was very expensive, and caused many of the treated patients to feel nauseous, this would not be a clinically significant result. Showing that a drug lowered the heart rate by an average of 1 beat per minute would also not be clinically significant.

 Cluster randomised trial
A trial in which clusters of individuals (e.g. clinics, families, geographical areas), rather than individuals themselves, are randomised to different arms. In such studies, care should be taken to avoid unit of analysis errors.

Cochrane Collaboration
An international organisation that aims to help people make well informed decisions about health by preparing, maintaining and ensuring the accessibility of systematic reviews of the benefits and risks of healthcare interventions.

 Cochrane Database of Systematic Reviews (CDSR)
One of the databases in The Cochrane Library. It brings together all the currently available Cochrane Reviews and Protocols for Cochrane Reviews. It is updated quarterly, and is available via the Internet and CD-ROM. See The Cochrane Library.

 Cochrane Library (CLIB)
A collection of databases, published on CD-ROM and the Internet and updated quarterly, containing the Cochrane Database of Systematic Reviews, the Cochrane Central Register of Controlled Trials, the Database of Abstracts of Reviews of Effects, the Cochrane Methodology Register, the HTA Database, NHSEED, and information about The Cochrane Collaboration.

 Cochrane Review
Cochrane Reviews are systematic summaries of evidence of the effects of healthcare interventions. They are intended to help people make practical decisions. 

 Cohort study
An observational study in which a defined group of people (the cohort) is followed over time. The outcomes of people in subsets of this cohort are compared, to examine people who were exposed or not exposed (or exposed at different levels) to a particular intervention or other factor of interest. A prospective cohort study assembles participants and follows them into the future.  A retrospective (or historical) cohort study identifies subjects from past records and follows them from the time of those records to the present. Because subjects are not allocated by the investigator to different interventions or other exposures, adjusted analysis is usually required to minimise the influence of other factors (confounders).

Co-intervention
The application of additional diagnostic or therapeutic procedures to people receiving a particular programme of treatment. In a controlled trial, members of either or both the experimental and the control groups might receive co-interventions.

Co-morbidity
The presence of one or more diseases or conditions other than those of primary interest. In a study looking at treatment for one disease or condition, some of the individuals may have other diseases or conditions that could affect their outcomes. (A co-morbidity may be a confounder.)

 Comparison group
See control group.

 Concealment of allocation
The process used to ensure that the person deciding to enter a participant into a randomised controlled trial does not know the comparison group into which that individual will be allocated. This is distinct from blinding, and is aimed at preventing selection bias. Some attempts at concealing allocation are more prone to manipulation than others, and the method of allocation concealment is used as an assessment of the quality of a trial. See also bias prevention. (Also called allocation concealment.)

Conference abstracts
Short summaries of presentations at conferences. May be published as proceedings.

 Confidence interval (CI)
A measure of the uncertainty around the main finding of a statistical analysis.  Estimates of unknown quantities, such as the odds ratio comparing an experimental intervention with a control, are usually presented as a point estimate and a 95% confidence interval. This means that if someone were to keep repeating a study in other samples from the same population, 95% of the confidence intervals from those studies would contain the true value of the unknown quantity. Alternatives to 95%, such as 90% and 99% confidence intervals, are sometimes used. Wider intervals indicate lower precision; narrow intervals, greater precision. (Also called CI.) 
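The repeated-sampling interpretation above can be checked with a small simulation. This is an illustrative sketch only, not part of the glossary's source material: the population mean and standard deviation are invented, and the `ci_95` helper (a normal-approximation interval) is an assumption for the example.

```python
import random
import statistics

def ci_95(sample):
    """Approximate 95% confidence interval for a sample mean, using the
    normal approximation: mean +/- 1.96 standard errors."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

random.seed(1)
true_mean = 50   # the "unknown quantity" -- known here only because we simulate
covered = 0
n_trials = 1000
for _ in range(n_trials):
    sample = [random.gauss(true_mean, 10) for _ in range(100)]
    lo, hi = ci_95(sample)
    if lo <= true_mean <= hi:
        covered += 1

# Roughly 95% of the intervals contain the true value.
print(covered / n_trials)
```

Around 95 in every 100 of the simulated intervals contain the true mean, which is exactly the property the definition describes.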

Confidence limits
The upper and lower boundaries of a confidence interval.

Conflict of interest declaration [or Competing interests declaration]
A statement by a contributor to a report or review of personal, financial, or other interests that could have influenced their contribution or its interpretation.

Confounded comparison
A comparison between two treatment groups that will give a biased estimate of the effect of treatment due to the study design. For a comparison to be unconfounded, the two treatment groups must be treated identically apart from the randomised treatment. For instance, to estimate the effect of heparin in acute stroke, a trial of heparin alone versus placebo would provide an unconfounded comparison.  However, a trial of heparin alone versus aspirin alone provides a confounded comparison of the effect of heparin. (See also unconfounded comparison.)

Confounder (synonym: confounding variable)
A factor that is associated with both an intervention (or exposure) and the outcome of interest. For example, if people in the experimental group of a controlled trial are younger than those in the control group, it will be difficult to decide whether a lower risk of death in one group is due to the intervention or the difference in ages. Age is then said to be a confounder, or a confounding variable. Randomisation is used to minimise imbalances in confounding variables between experimental and control groups. Confounding is a major concern in non-randomised studies. See also adjusted analysis.

Contamination
[In a controlled trial:] The inadvertent application of the intervention being evaluated to people in the control group; or inadvertent failure to apply the intervention to people assigned to the intervention group. Fear of contamination is one motivation for performing a cluster randomised trial.

Context
The conditions and circumstances that are relevant to the application of an intervention, for example the setting (in hospital, at home, in the air); the time (working day, holiday, night-time); type of practice (primary, secondary, tertiary care; private practice, insurance practice, charity); whether routine or emergency.


Control

  1. [In a controlled trial:] A participant in the arm that acts as a comparator for one or more experimental interventions. Controls may receive placebo, no treatment, standard treatment, or an active intervention, such as a standard drug.
  2. [In a case-control study:] A person in the group without the disease or outcome of interest.

 Control group

  1. [In a controlled trial:] The arm that acts as a comparator for one or more experimental interventions. See also control. (Also called comparison group.)
  2. [In a case-control study:] The group without the disease or outcome of interest. (Also called comparison group.)

 Controlled before and after study
A non-randomised study design where a control population of similar characteristics and performance as the intervention group is identified. Data are collected before and after the intervention in both the control and intervention groups.

 Controlled trial
A clinical trial that has a control group. Such trials are not necessarily randomised.

 Conventional treatment
Whatever the standard or usual treatment is for a particular condition at that time.

Correlation
See association. (Positive correlation is the same as positive association, and negative correlation is the same as negative association.)

 Cost-benefit analysis
An economic analysis that converts effects into the same monetary terms as the costs and compares them.

 Cost-effectiveness analysis
An economic analysis that converts effects into health terms and describes the costs for some additional health gain (e.g. cost per additional stroke prevented).

 Cost-utility analysis
An economic analysis that expresses effects as overall health improvement and describes how much it costs for some additional utility gain (e.g. cost per additional quality-adjusted life-year.)

Cross-over trial
A type of clinical trial comparing two or more interventions in which the participants, upon completion of the course of one treatment, are switched to another. For example, for a comparison of treatments A and B, the participants are randomly allocated to receive them in either the order A, B or the order B, A. Particularly appropriate for study of treatment options for relatively stable health problems. The time during which the first intervention is taken is known as the first period, with the second intervention being taken during the second period.

 Cross-sectional study
A study measuring the distribution of some characteristic(s) in a population at a particular point in time. (Also called survey.)

Cumulative meta-analysis
A meta-analysis in which studies are added one at a time in a specified order (e.g. according to date of publication or quality) and the results are summarised as each new study is added. In a graph of a cumulative meta-analysis, each horizontal line represents the summary of the results as each study is added, rather than the results of a single study.
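The re-pooling step can be sketched in a few lines of code. This is an illustrative simplification: a fixed-effect inverse-variance pool is assumed, and the study effects and variances are invented numbers.

```python
def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled estimate over a set of studies."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical study results in order of publication:
# (effect estimate, variance of that estimate)
studies = [(0.40, 0.10), (0.25, 0.05), (0.30, 0.02)]

# Add one study at a time and re-pool: each pass corresponds to one
# horizontal line on a cumulative meta-analysis graph.
for i in range(1, len(studies) + 1):
    effects = [e for e, _ in studies[:i]]
    variances = [v for _, v in studies[:i]]
    print(f"after study {i}: pooled estimate = {pooled(effects, variances):.3f}")
```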



Descriptive study
A  study that describes characteristics of a sample of individuals. Unlike an experimental study, the investigators do not actively intervene to test a hypothesis, but merely describe the health status or characteristics of a sample from a defined population.

 Detection bias (synonym: ascertainment bias)
Systematic difference between comparison groups in how outcomes are ascertained, diagnosed or verified.  (Also called ascertainment bias.)

Distribution
The collection of values of a variable in the population or the sample, sometimes called an empirical distribution. See also probability distribution.

 Dichotomous data (synonym: binary data)
Data that can take one of two possible values, such as dead/alive, smoker/non-smoker, present/not present. (Also called binary data.) Sometimes continuous data or ordinal data are simplified into dichotomous data (e.g. age in years could become <75 years or ≥75 years).

Dose dependent
A response to a drug which may be related to the amount received (i.e. the dose). Sometimes trials are done to test the effect of different dosages of the same drug. This may be true for both benefits and harms.

Dose response relationship
The relationship between the quantity of treatment given and its effect on outcome.  In meta-analysis, dose-response relationships can be investigated using meta-regression.

Double blind
See blinding.

Dropouts
See attrition.



Economic analysis (synonym: economic evaluation)
Comparison of the relationship between costs and outcomes of alternative healthcare interventions. See cost-benefit analysis, cost-effectiveness analysis, and cost-utility analysis.

Effect size

  1. A generic term for the estimate of effect of treatment for a study.
  2. A dimensionless measure of effect that is typically used for continuous data when different scales (e.g. for measuring pain) are used to measure an outcome and is usually defined as the difference in means between the intervention and control groups divided by the standard deviation of the control or both groups.  See also standardised mean difference.
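Definition 2 can be made concrete with a short sketch. The pain scores below are invented, and pooling the standard deviation over both groups is one of the two conventions the definition mentions (the other divides by the control-group standard deviation alone).

```python
import statistics

def standardised_mean_difference(treatment, control):
    """Effect size in the sense of definition 2: difference in group
    means divided by a pooled standard deviation of the two groups."""
    diff = statistics.mean(treatment) - statistics.mean(control)
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return diff / pooled_sd

# Invented pain scores on an arbitrary 0-10 scale (lower is better).
treated = [3, 4, 2, 5, 3, 4]
control = [6, 5, 7, 6, 8, 5]
smd = standardised_mean_difference(treated, control)
print(round(smd, 2))  # negative: the treated group scored lower (less pain)
```

Because the result is dimensionless, it could be combined with effect sizes from trials that used a different pain scale.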

Effectiveness
The extent to which a specific intervention, when used under ordinary circumstances, does what it is intended to do. Clinical trials that assess effectiveness are sometimes called pragmatic or management trials. See also intention-to-treat.

Efficacy
The extent to which an intervention produces a beneficial result under ideal conditions. Clinical trials that assess efficacy are sometimes called explanatory trials and are restricted to participants who fully co-operate.

EMBASE (Excerpta Medica database)
A European-based electronic database of pharmacological and biomedical literature covering 3,500 journals from 110 countries. Years of coverage - 1974 to present.

Empirical
Results based on experience (or observation) rather than on reasoning alone.

Endpoint
See outcome.

Epidemiology
The study of the health of populations and communities, not just particular individuals.

Equipoise
A state of uncertainty where a person believes it is equally likely that either of two treatment options is better.

 Equivalence trial
A trial designed to determine whether the response to two or more treatments differs by an amount that is clinically unimportant. This is usually demonstrated by showing that the true treatment difference is likely to lie between a lower and an upper equivalence level of clinically acceptable differences.  See also non-inferiority trial.

 Estimate of effect (synonym: treatment effect)
The observed relationship between an intervention and an outcome expressed as, for example, a number needed to treat to benefit, odds ratio, risk difference, risk ratio, standardised mean difference, or weighted mean difference.  (Also called treatment effect.)

Event rate
See risk.

 Experimental intervention
An intervention under evaluation. In a controlled trial, an experimental intervention arm is compared with one or more control arms, and possibly with additional experimental intervention arms.

 Experimental study
A study in which the investigators actively intervene to test a hypothesis. In a controlled trial, one type of experiment, the people receiving the treatment being tested are said to be in the experimental group or arm of the trial.

 Explanatory trial
A trial that aims to test a treatment policy in an ideal situation where patients receive the full course of therapy as prescribed, and use of other treatments may be controlled or restricted.  See also pragmatic trial.

External validity (synonyms: applicability, generalisability, relevance, transferability)
The extent to which results provide a correct basis for generalisations to other circumstances. For instance, a meta-analysis of trials of elderly patients may not be generalisable to children. (Also called generalisability or applicability.)



 Factorial design
A trial design used to assess the individual contribution of treatments given in combination, as well as any interactive effect they may have. Most trials only consider a single factor, where an intervention is compared with one or more alternatives, or a placebo. In a trial using a 2x2 factorial design, participants are allocated to one of four possible combinations. For example in a 2x2 factorial RCT of nicotine replacement and counselling, participants would be allocated to: nicotine replacement alone, counselling alone, both, or neither. In this way it is possible to test the independent effect of each intervention on smoking cessation and the combined effect of (interaction between) the two interventions. This type of study is usually carried out in circumstances where no interaction is likely.

Fixed effect model
[In meta-analysis:] A model that calculates a pooled effect estimate using the assumption that all observed variation between studies is caused by the play of chance. Studies are assumed to be measuring the same overall effect. An alternative model is the random-effects model.

Follow up
The observation over a period of time of study/trial participants to measure outcomes under investigation.



Generalisability (synonyms: applicability, external validity, relevance, transferability)
See external validity.



Hazard rate
The probability of an event occurring given that it hasn’t occurred up to the current point in time.

Hazard ratio
A measure of effect produced by a survival analysis. This represents the increased risk with which one group is likely to experience the outcome of interest.  For example, if the hazard ratio for death for a treatment is 0.5, then we can say that treated patients are likely to die at half the rate of untreated patients.


Heterogeneity

  1. Used in a general sense to describe the variation in, or diversity of, participants, interventions, and measurement of outcomes across a set of studies, or the variation in internal validity of those studies.
  2. Used specifically, as statistical heterogeneity, to describe the degree of variation in the effect estimates from a set of studies. Also used to indicate the presence of variability among studies beyond the amount expected due solely to the play of chance.

Heterogeneous
Used to describe a set of studies or participants with sizeable heterogeneity. The opposite of homogeneous.

Historical control
A control person or group for whom data were collected earlier than for the group being studied. There is a large risk of bias in studies that use historical controls, owing to systematic differences between the comparison groups arising from changes over time in risks, prognosis, health care, etc.


Homogeneous

  1. Used in a general sense to mean that the participants, interventions, and measurement of outcomes are similar across a set of studies.
  2. Used specifically to describe the effect estimates from a set of studies where they do not vary more than would be expected by chance.

See also heterogeneity.

Hypothesis
An unproved theory that can be tested through research. To properly test a hypothesis, it should be pre-specified and clearly articulated, and the study to test it should be designed appropriately. See also null hypothesis.



Incidence
The number of new occurrences of something in a population over a particular period of time, e.g. the number of cases of a disease in a country over one year.

Index Medicus
Catalogue of the United States National Library of Medicine (NLM), and a periodical index to the medical literature. Available in printed form, or electronically as MEDLINE.

Individual patient data
[In meta-analysis:] The availability of raw data for each study participant in each included study, as opposed to aggregate data (summary data for the comparison groups in each study). Reviews using individual patient data require collaboration of the investigators who conducted the original studies, who must provide the necessary data.

Intention-to-treat analysis
A strategy for analysing data from a randomised controlled trial. All participants are included in the arm to which they were allocated, whether or not they received (or completed) the intervention given to that arm. Intention-to-treat analysis prevents bias caused by the loss of participants, which may disrupt the baseline equivalence established by randomisation and which may reflect non-adherence to the protocol. The term is often misused in trial publications when some participants were excluded.

Interaction
The situation in which the effect of one independent variable on the outcome is affected by the value of a second independent variable. In a trial, a test of interaction examines whether the treatment effect varies across sub-groups of participants. See also factorial design, sub-group analysis.

Intermediary outcomes
See surrogate endpoints.

 Internal validity
The extent to which the design and conduct of a study are likely to have prevented bias. Variation in quality can explain variation in the results of studies included in a systematic review. More rigorously designed (better quality) trials are more likely to yield results that are closer to the truth. (Also called methodological quality, but better thought of as relating to bias prevention.) See also external validity, bias prevention.

Inter-rater reliability
The degree of stability exhibited when a measurement is repeated under identical conditions by different raters. Reliability refers to the degree to which the results obtained by a measurement procedure can be replicated. Lack of inter-rater reliability may arise from divergences between observers or instability of the attribute being measured. See also Intra-rater reliability.

 Interrupted time series
A research design that collects observations at multiple time points before and after an intervention (interruption). The design attempts to detect whether the intervention has had an effect significantly greater than the underlying trend.

Intervention
The process of intervening on people, groups, entities or objects in an experimental study. In controlled trials, the word is sometimes used to describe the regimens in all comparison groups, including placebo and no-treatment arms. See also treatment, experimental intervention and control.

 Intervention group
A group of participants in a study receiving a particular health care intervention. Parallel group trials include at least two intervention groups.

 Intervention study
See Clinical trial.

 Intra-rater reliability
The degree of stability exhibited when a measurement is repeated under identical conditions by the same rater. Reliability refers to the degree to which the results obtained by a measurement procedure can be replicated. Lack of intra-rater reliability may arise from divergences between instruments of measurement, or instability of the attribute being measured.



LILACS (Latin American and Caribbean Health Sciences Literature)
An electronic database based on a regional database of medical and science literature. It is compiled by the Latin American and Caribbean Center for Health Science Information, a unit of the Pan American Health Organisation.

 Logistic regression
A form of regression analysis that models an individual's odds of disease or some other outcome as a function of a risk factor or intervention. It is widely used for dichotomous outcomes, in particular to carry out adjusted analysis. See also meta-regression.

Log-odds ratio
The (natural) log of the odds ratio. It is used in statistical calculations and in graphical displays of odds ratios in systematic reviews.
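For illustration only (the 2x2 counts below are hypothetical), the odds ratio and its natural log can be computed as:

```python
import math

def odds_ratio(events_exp, total_exp, events_ctl, total_ctl):
    """Odds ratio comparing an experimental group with a control group:
    the ratio of the odds of the event in the two groups."""
    odds_exp = events_exp / (total_exp - events_exp)
    odds_ctl = events_ctl / (total_ctl - events_ctl)
    return odds_exp / odds_ctl

# Hypothetical trial: 10/100 events on treatment, 20/100 on control.
or_value = odds_ratio(10, 100, 20, 100)
log_or = math.log(or_value)  # the (natural) log-odds ratio
print(round(or_value, 3), round(log_or, 3))
```

An odds ratio below 1 gives a negative log-odds ratio, which is why forest plots drawn on the log scale place "favours treatment" and "favours control" symmetrically around zero.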

Loss to follow up
See attrition.



Masking
See blinding.

Matching
[In a case-control study:] Choosing one or more controls with particular matching attributes for each case. Researchers match cases and controls according to particular variables that are thought to be important, such as age and sex.

Mean
The average value, calculated by adding all the observations and dividing by the number of observations. (Also called arithmetic mean.)

 Mean difference
[In meta-analysis:] A method used to combine measures on continuous scales (such as weight), where the mean, standard deviation and sample size in each group are known. The weight given to the difference in means from each study (e.g. how much influence each study has on the overall results of the meta-analysis) is determined by the precision of its estimate of effect and, in the statistical software in RevMan and the Cochrane Database of Systematic Reviews, is equal to the inverse of the variance. This method assumes that all of the trials have measured the outcome on the same scale.  See also standardised mean difference.  (Also called WMD, weighted mean difference.)
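The inverse-variance weighting described above can be sketched as follows. This is an illustrative calculation, not the RevMan implementation: it uses the standard formula for the variance of a difference of two independent means (sd1^2/n1 + sd2^2/n2), and the trial sizes and standard deviations are invented.

```python
def inverse_variance_weight(sd1, n1, sd2, n2):
    """Weight for one study's difference in means: the variance of a
    difference of two independent means is sd1^2/n1 + sd2^2/n2, and
    the weight is its inverse, so more precise studies count for more."""
    variance = sd1 ** 2 / n1 + sd2 ** 2 / n2
    return 1 / variance

# Two hypothetical trials measuring weight change (kg) on the same scale.
w_small = inverse_variance_weight(4.0, 20, 4.0, 20)    # 20 per arm
w_large = inverse_variance_weight(4.0, 200, 4.0, 200)  # 200 per arm
print(w_small, w_large)  # the larger trial receives the greater weight
```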

MEDLINE
An electronic database produced by the United States National Library of Medicine. It indexes millions of articles in selected (about 3,700) journals. It is available through most medical libraries, and can be accessed on CD-ROM, the Internet and by other means. Years of coverage - 1966 to present.

Meta-analysis
The use of statistical techniques in a systematic review to integrate the results of included studies. Sometimes misused as a synonym for systematic review, where the review includes a meta-analysis.

Meta-regression
[In meta-analysis:] A technique used to explore the relationship between study characteristics (e.g. concealment of allocation, baseline risk, timing of the intervention) and study results (the magnitude of effect observed in each study) in a systematic review. See also logistic regression.

Methodological quality 
See internal validity, bias prevention.

Minimisation
A method of allocation used to provide comparison groups that are closely similar for several variables. The next participant is assessed with regard to several characteristics, and assigned to the treatment group that has so far had fewer such people assigned to it. It can be done with a component of randomisation, where the chance of allocation to the group with fewer similar participants is less than one. Minimisation is best performed centrally with the aid of a computer program to ensure concealment of allocation.
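The allocation rule described above can be sketched in code. This is an illustrative simplification, not production trial software: the scoring of "similar participants" (a simple count summed over factors) and the function name are assumptions, and real minimisation programs are more sophisticated.

```python
import random

def minimisation_assign(new_patient, groups, factors, p_follow=0.8):
    """Assign the next participant to the arm that currently contains
    fewer participants sharing their characteristics (counted over the
    chosen factors). With probability 1 - p_follow a different arm is
    chosen at random, giving the component of randomisation mentioned
    above."""
    scores = {
        arm: sum(
            sum(1 for m in members if m[f] == new_patient[f]) for f in factors
        )
        for arm, members in groups.items()
    }
    best = min(scores, key=scores.get)
    others = [arm for arm in groups if arm != best]
    if not others or random.random() < p_follow:
        choice = best
    else:
        choice = random.choice(others)
    groups[choice].append(new_patient)
    return choice

# Usage: balance two arms on sex and age band.
random.seed(0)
groups = {"treatment": [], "control": []}
for patient in [{"sex": "F", "age": "60+"},
                {"sex": "F", "age": "60+"},
                {"sex": "M", "age": "<60"}]:
    print(minimisation_assign(patient, groups, ["sex", "age"]))
```

Run centrally (e.g. by a telephone or web allocation service), a procedure like this keeps the arms balanced on the chosen factors while concealing the next allocation from the person recruiting.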

Morbidity
Illness or harm. See also co-morbidity.


Multi-arm trial
A trial with more than two arms.

Multicentre trial
A trial conducted at several geographical sites. Trials are sometimes conducted among several collaborating institutions, rather than at a single institution -  particularly when very large numbers of participants  are needed.

 Multiple comparison
The performance of multiple analyses on the same data. Multiple statistical comparisons increase the probability of making a Type I error, i.e. attributing a difference to an intervention when chance is a reasonable explanation. See also Sub-group analysis.

 Multiplicative model
A statistical model in which the combined effect of several factors is the product of the effects produced by each in the absence of the others. For example, if one factor multiplies risk by a and a second factor multiplies it by b, the combined effect of the two factors is multiplication by a x b.

Multivariate analysis
Measuring the impact of more than one variable at a time while analysing a set of data, e.g. looking at the impact of age, sex, and occupation on a particular outcome. Performed using regression analysis.



N of 1 randomised trial
A randomised trial in an individual to determine the optimum treatment for that individual. The individual is given repeated administrations of experimental and control interventions (or of two or more experimental treatments), with the treatments being randomised.

Negative association
See association.

Negative study
A term often used to refer to a study whose results either do not indicate a beneficial effect of treatment or have not reached statistical significance. The term can generate confusion because it can refer either to statistical significance or to the direction of effect; studies often have multiple outcomes; the criteria for classifying studies as ‘negative’ are not always clear; and, in the case of studies of risk or undesirable effects, ‘negative’ studies are ones that do not show a harmful effect.

NNH
See number needed to treat to harm.

NNT
See number needed to treat to benefit.

NNTB
See number needed to treat to benefit.

NNTH
See number needed to treat to harm.

Non-experimental study
See observational study.

 Non-inferiority trial
A trial designed to determine whether the effect of a new treatment is not worse than a standard treatment by more than a pre-specified amount. A one-sided version of an equivalence trial.

Non-randomised study
Any quantitative study estimating the effectiveness of an intervention (harm or benefit) that does not use randomisation to allocate units to comparison groups (including studies where ‘allocation’ occurs in the course of usual treatment decisions or people’s choices, i.e. studies usually called ‘observational’). To avoid ambiguity, the term should be qualified with a description of the type of question being addressed. For example, a 'non-randomised intervention study' is typically a comparative study of an experimental intervention against some control intervention (or no intervention) that is not a randomised controlled trial. There are many possible types of non-randomised intervention study, including cohort studies, case-control studies, controlled before-and-after studies, interrupted-time-series studies and controlled trials that do not use appropriate randomisation strategies (sometimes called quasi-randomised studies).

 Null hypothesis
The statistical hypothesis that one variable (e.g. which treatment a study participant was allocated to receive) has no association with another variable or set of variables (e.g. whether or not a study participant died), or that two or more population distributions do not differ from one another.  In simplest terms, the null hypothesis states that the factor of interest (e.g. treatment) has no impact on outcome (e.g. risk of death).

Number needed to harm
See number needed to treat to harm.

Number needed to treat
See number needed to treat to benefit.

 Number needed to treat to benefit
An estimate of how many people need to receive a treatment before one person would experience a beneficial outcome. For example, if you need to give a stroke prevention drug to 20 people before one stroke is prevented, then the number needed to treat to benefit for that stroke prevention drug is 20. The NNTb is estimated as the reciprocal of the absolute risk difference.  (Also called NNT, NNTB, number needed to treat.)
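The reciprocal relationship in the definition above is simple enough to show directly. This is an illustrative sketch; the function name `nntb` is invented here.

```python
def nntb(control_risk, treatment_risk):
    """Number needed to treat to benefit: the reciprocal of the
    absolute risk difference between control and treatment groups."""
    risk_difference = control_risk - treatment_risk  # absolute risk reduction
    if risk_difference <= 0:
        raise ValueError("treatment shows no benefit over control")
    return 1.0 / risk_difference

# If a drug cuts the risk of stroke from 10% to 5%, the absolute risk
# difference is 0.05, so 1 / 0.05 = 20 people must be treated to
# prevent one additional stroke.
print(nntb(0.10, 0.05))  # → 20.0
```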

 Number needed to treat to harm
A number needed to treat to benefit associated with a harmful effect. It is an estimate of how many people need to receive a treatment before one more person would experience a harmful outcome or one fewer person would experience a beneficial outcome. (Also called NNH, NNTH, number needed to harm.) See also number needed to treat to benefit. 



 Observational study
A study in which the investigators do not seek to intervene, and simply observe the course of events. Changes or differences in one characteristic (e.g. whether or not people received the intervention of interest) are studied in relation to changes or differences in other characteristic(s) (e.g. whether or not they died), without action by the investigator.  There is a greater risk of selection bias than in experimental studies. See also randomised controlled trial. (Also called non-experimental study.)

Odds
A way of expressing the chance of an event, calculated by dividing the number of individuals in a sample who experienced the event by the number for whom it did not occur. For example, if in a sample of 100, 20 people died and 80 people survived, the odds of death are 20/80 = 1/4, 0.25 or 1:4.

 Odds ratio (OR)
The ratio of the odds of an event in one group to the odds of an event in another group. In studies of treatment effect, the odds in the treatment group are usually divided by the odds in the control group. An odds ratio of one indicates no difference between comparison groups. For undesirable outcomes an OR that is less than one indicates that the intervention was effective in reducing the risk of that outcome.  When the risk is small, odds ratios are very similar to risk ratios. (Also called OR.)
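The closeness of odds ratios and risk ratios when events are rare can be checked with a short sketch. The function names and the 2x2 counts are invented for illustration.

```python
def odds(events, non_events):
    """Odds: events divided by non-events (e.g. 20 deaths, 80 survivors → 0.25)."""
    return events / non_events

def odds_ratio(t_events, t_total, c_events, c_total):
    """Odds in the treatment group divided by odds in the control group."""
    return odds(t_events, t_total - t_events) / odds(c_events, c_total - c_events)

def risk_ratio(t_events, t_total, c_events, c_total):
    """Risk in the treatment group divided by risk in the control group."""
    return (t_events / t_total) / (c_events / c_total)

# With a rare outcome (1-2% risk) the two measures nearly coincide:
or_rare = odds_ratio(10, 1000, 20, 1000)  # (10/990) / (20/980) ≈ 0.495
rr_rare = risk_ratio(10, 1000, 20, 1000)  # 0.010 / 0.020 = 0.500
```

With common outcomes (say 40% vs 80% risk) the two measures diverge sharply, which is why the definition's caveat about small risks matters.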

Open clinical trial
There are at least three possible meanings for this term:

  1. A clinical trial in which the investigator and participant are aware which intervention is being used for which participant (i.e. not blinded). Random allocation may or may not be used in such trials. Sometimes called an ‘open label’ design.
  2. A clinical trial in which the investigator decides which intervention is to be used (non-random allocation). This is sometimes called an open label design (but some trials which are said to be ‘open label’, are randomised).
  3. A clinical trial that uses an open sequential design.

 Open sequential design
A sequential trial in which the decision to stop rests on the size of the effect observed in the accumulated data, and there is no finite maximum number of participants in the study.

OR
See odds ratio.

Ordinal data
Data that are classified into more than two categories where there is a natural order to the categories; for example, non-smokers, ex-smokers, light smokers and heavy smokers. Ordinal data are often reduced to two categories to simplify analysis and presentation, which may result in a considerable loss of information.

Outcome
A component of a participant's clinical and functional status after an intervention has been applied, that is used to assess the effectiveness of an intervention. See also primary outcome, secondary outcome.

Overview, systematic
See systematic review.



Paired design
A trial in which participants or groups of participants are matched (e.g. based on prognostic factors) and one member of each pair is allocated to the experimental (intervention) group and the other to the control group.

 Parallel group trial
A trial that compares two groups of people concurrently, one of which receives the intervention of interest and one of which is a control group. Some parallel trials have more than two comparison groups and some compare different interventions without including a non-intervention control group. (Also called independent group design.)

Participant
An individual who is studied in a trial, often but not necessarily a patient.

 Performance bias
Systematic differences between intervention groups in the care provided, apart from the intervention being evaluated. For example, if participants know they are in the control group, they may be more likely to use other forms of care. If care providers are aware of the group a particular participant is in, they might act differently. Blinding of study participants (both the recipients and providers of care) is used to protect against performance bias.

Phase I, II, III and IV trials
A series of levels of trials required of drugs before (and after) they are routinely used in clinical practice: Phase I trials assess toxic effects on humans (not many people participate in them, and usually without controls); Phase II trials assess therapeutic benefit (usually involving a few hundred people, usually with controls, but not always); Phase III trials compare the new treatment against standard (or placebo) treatment (usually a full randomised controlled trial). At this point, a drug can be approved for community use. Phase IV trials monitor a new treatment in the community, often to evaluate long-term safety and effectiveness.

Placebo
An inactive substance or procedure administered to a participant, usually to compare its effects with those of a real drug or other intervention, but sometimes for the psychological benefit to the participant through a belief that s/he is receiving treatment. Placebos are used in clinical trials to blind people to their treatment allocation. Placebos should be indistinguishable from the active intervention to ensure adequate blinding.

Point estimate
The results (e.g. mean, weighted mean difference, odds ratio, risk ratio or risk difference) obtained in a sample (a study or a meta-analysis) which are used as the best estimate of what is true for the relevant population from which the sample is taken.

Population
The group of people being studied, usually by taking samples from that population. Populations may be defined by any characteristics, e.g. geography, age group, certain diseases.

Positive association
See association.

Positive study
A term used to refer to a trial with results indicating a beneficial effect of the intervention being studied. The term can generate confusion because it can refer to both statistical significance and the direction of effect; studies often have multiple outcomes; the criteria for classifying studies as negative or positive are not always clear; and, in the case of studies of risk or undesirable effects, "positive" studies are ones that show a harmful effect.

Power
[In statistics:] The probability of rejecting the null hypothesis when a specific alternative hypothesis is true. The power of a hypothesis test is one minus the probability of a Type II error. In clinical trials, power is the probability that a trial will detect, as statistically significant, an intervention effect of a specified size. If a clinical trial had a power of 0.80 (or 80%), and the pre-specified treatment effect truly existed, then if the trial were repeated 100 times, it would find a statistically significant treatment effect in about 80 of them. Ideally we want a test to have high power, close to the maximum of one (or 100%). For a given size of effect, studies with more participants have greater power. Studies with a given number of participants have more power to detect large effects than small effects. (Also called statistical power.)
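The relationship between sample size, effect size and power can be illustrated with a simple normal-approximation calculation. This is a sketch assuming a two-sided z-test comparing two equal-sized groups; the function name and the example numbers are invented.

```python
import math
from statistics import NormalDist

ND = NormalDist()

def power_two_sample(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided z-test for a true mean difference
    `delta`, common standard deviation `sd`, and equal group sizes.
    A normal approximation, chosen here only to keep the sketch short."""
    se = sd * math.sqrt(2.0 / n_per_group)   # standard error of the difference
    z_alpha = ND.inv_cdf(1.0 - alpha / 2.0)  # two-sided critical value
    return 1.0 - ND.cdf(z_alpha - abs(delta) / se)

# For the same effect, more participants give greater power.
low = power_two_sample(delta=5.0, sd=10.0, n_per_group=20)
high = power_two_sample(delta=5.0, sd=10.0, n_per_group=80)
```

Here quadrupling the group size raises power from roughly 35% to roughly 89%, illustrating why small trials routinely miss real but modest effects.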

 Pragmatic trial
A trial that aims to test a treatment policy in a 'real life' situation, when many people may not receive all of the treatment, and may use other treatments as well.  This is as opposed to an explanatory trial, which is done under ideal conditions and is trying to determine whether a therapy has the ability to make a difference at all (i.e. testing its efficacy).

Prevalence
The proportion of a population having a particular condition or characteristic: e.g. the percentage of people in a city with a particular disease, or who smoke.

Prevalence study
A type of cross-sectional study that measures the prevalence of a characteristic.

 Primary outcome
The outcome of greatest importance.

 Primary study
‘Original research’ in which data are collected. The term primary study is sometimes used to distinguish it from a secondary study (re-analysis of previously collected data), meta-analysis, and other ways of combining studies (such as economic analysis and decision analysis). (Also called original study.)

 Probability distribution
The function that gives the probabilities that a variable equals each of a sequence of possible values. Examples include the binomial distribution, normal distribution and Poisson distribution. See also Distribution.

 Prospective study
In evaluations of the effects of healthcare interventions, a study in which people are identified according to current risk status or exposure, and followed forwards through time to observe outcome. Randomised controlled trials are always prospective studies. Cohort studies are commonly either prospective or retrospective, whereas case-control studies are usually retrospective. In Epidemiology, 'prospective study’ is sometimes misused as a synonym for cohort study. See also retrospective study.

Protocol
The plan or set of steps to be followed in a study. A protocol for a systematic review should describe the rationale for the review, the objectives, and the methods that will be used to locate, select, and critically appraise studies, and to collect and analyse data from the included studies.

Publication bias
See reporting bias.

PubMed
A free-access Internet version of MEDLINE that also includes records from before 1966 (old MEDLINE), some very recent records, and some records from other life science journals.

P value
The probability (ranging from zero to one) that the results observed in a study (or results more extreme) could have occurred by chance if in reality the null hypothesis were true. In a meta-analysis, the P value for the overall effect assesses the overall statistical significance of the difference between the intervention groups, whilst the P value for the heterogeneity statistic assesses the statistical significance of differences between the effects observed in each study.



Quality
A vague notion of the methodological strength of a study, sometimes indicating the extent of bias prevention.

Quality of evidence
A judgement about the extent to which we can be confident that an estimate of effect is correct. These judgements are made using the GRADE system, and are made for each important outcome. The judgements are based on the type of study design (randomised trials versus observational studies), five factors that can lower confidence in an estimate of effect (risk of bias, inconsistency of the results across studies, indirectness, imprecision of the overall estimate across studies, and publication bias), and three factors that can increase confidence (a large effect, a dose response relationship, and plausible confounding that would increase confidence in an estimate).

 Quasi-random allocation
Methods of allocating people to a trial that are not truly random, but are intended to produce similar groups. Quasi-random methods include allocation by the person's date of birth, by the day of the week or month of the year, by a person's medical record number, or simply allocating every alternate person. In practice, these methods of allocation are relatively easy to manipulate, introducing selection bias. See also random allocation, randomisation.



Random
Governed by chance. See randomisation.

 Random allocation
A method that uses the play of chance to assign participants to comparison groups in a trial, e.g. by using a random numbers table or a computer-generated random sequence.  Random allocation implies that each individual or unit being entered into a trial has the same chance of receiving each of the possible interventions. It also implies that the probability that an individual will receive a particular intervention is independent of the probability that any other individual will receive the same intervention. See also quasi-random allocation, randomisation.

 Random effects model
[In meta-analysis:] A statistical model in which both within-study sampling error (variance) and between-studies variation are included in the assessment of the uncertainty (confidence interval) of the results of a meta-analysis. See also fixed-effect model. When there is heterogeneity among the results of the included studies beyond chance, random-effects models will give wider confidence intervals than fixed-effect models.

Random error
Error due to the play of chance. Confidence intervals and P-values allow for the existence of random error, but not systematic errors (bias).

 Random permuted blocks
A method of randomisation that ensures that, at any point in a trial, roughly equal numbers of participants have been allocated to all the comparison groups. Permuted blocks are often used in combination with stratified randomisation. (Also called block randomisation.)
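A permuted-block sequence of the kind described above can be generated with a few lines of Python. This is an illustrative sketch; the function name, arm labels and block size are arbitrary choices, and the seed is fixed only to make the example reproducible (a real trial would conceal the sequence).

```python
import random

def permuted_block_sequence(n_blocks, block_size=4, arms=("A", "B"), seed=None):
    """Build an allocation sequence from random permuted blocks: each block
    holds equal numbers of each arm in random order, so the comparison
    groups are exactly balanced at every block boundary."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm  # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

seq = permuted_block_sequence(n_blocks=3, seed=42)
# Within every block of 4, exactly 2 participants go to each arm.
```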

Random sample
A group of people selected for a study that is representative of the population of interest. This means that everyone in the population has an equal chance of being approached to participate in the survey, and the process is meant to ensure that a sample is as representative of the population as possible. It has less bias than a convenience sample: that is, a group that the researchers have more convenient access to. Randomised trials are rarely carried out on random samples.

 Randomisation (spelled randomization in US English)
The process of randomly allocating participants into one of the arms of a controlled trial. There are two components to randomisation: the generation of a random sequence, and its implementation, ideally in a way so that those entering participants into a study are not aware of the sequence (concealment of allocation). 

Randomisation blinding
See concealment of allocation.

 Randomised controlled trial (RCT) (Synonym: randomised clinical trial)
An experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants. In most trials one intervention is assigned to each individual but sometimes assignment is to defined groups of individuals (for example, in a household) or interventions are assigned within individuals (for example, in different orders or to different parts of the body).

Rate
The speed or frequency of occurrence of an event, usually expressed with respect to time. For instance, a mortality rate might be the number of deaths per year, per 100,000 people.

RCT
See randomised controlled trial.

 Regression analysis
A statistical modelling technique used to estimate or predict the influence of one or more independent variables on a dependent variable, e.g. the effect of age, sex, and educational level on the prevalence of a disease. Logistic regression and meta-regression are types of regression analysis. 

Relative Risk (RR) 
See risk ratio.

Relative risk reduction
The proportional reduction in risk in one treatment group compared to another.  It is one minus the risk ratio. If the risk ratio is 0.25, then the relative risk reduction is 1-0.25=0.75, or 75%.
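The arithmetic of the definition above is a one-liner. This sketch is illustrative; the function name and the 20%-to-5% example are invented.

```python
def relative_risk_reduction(treatment_risk, control_risk):
    """RRR = 1 - RR, where RR is the treatment risk over the control risk."""
    return 1.0 - treatment_risk / control_risk

# A risk falling from 20% to 5% gives a risk ratio of 0.25,
# so the relative risk reduction is 1 - 0.25 = 0.75, or 75%.
print(relative_risk_reduction(0.05, 0.20))  # → 0.75
```

Note that a 75% relative reduction can correspond to a tiny absolute reduction if the baseline risk is low, which is why RRR is usually reported alongside the risk difference.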

Reliability
Refers to the degree to which results obtained by a measurement procedure can be replicated. Lack of reliability can arise from divergences between observers or measurement instruments, or instability in the attribute being measured.

 Reporting bias
A bias caused by only a subset of all the relevant data being available. The publication of research can depend on the nature and direction of the study results. Studies in which an intervention is not found to be effective are sometimes not published. Because of this, systematic reviews that fail to include unpublished studies may overestimate the true effect of an intervention. In addition, a published report might present a biased set of results (e.g. only outcomes or sub-groups where a statistically significant difference was found). (Also called publication bias.)

 Retrospective study
A study in which the outcomes have occurred to the participants before the study commenced. Case-control studies are usually retrospective, cohort studies sometimes are, and randomised controlled trials never are. See also prospective study.


Review
  1. A systematic review.
  2. A review article in the medical literature which summarises a number of different studies and may draw conclusions about a particular intervention. Review articles are often not systematic. Review articles are also sometimes called overviews.
  3. To referee a paper.

Risk
The proportion of participants experiencing the event of interest. Thus, if out of 100 participants the event (e.g. a stroke) is observed in 32, the risk is 0.32. The control group risk is the risk amongst the control group. The risk is sometimes referred to as the event rate, and the control group risk as the control event rate; however, these latter terms confuse risk with rate. Note that, particularly in statistical texts, 'risk' can refer to beneficial events as well as adverse ones.

 Risk difference (RD) 
The difference in size of risk between two groups. For example, if one group has a 15% risk of contracting a particular disease, and the other has a 10% risk of getting the disease, the risk difference is five percentage points. (Also called absolute risk difference, absolute risk reduction.)

 Risk factor
An aspect of a person's condition, lifestyle or environment that increases the probability of occurrence of a disease. For example, cigarette smoking is a risk factor for lung cancer.

 Risk ratio
The ratio of risk in two groups. In intervention studies, it is the ratio of the risk in the intervention group to the risk in the control group. A risk ratio of one indicates no difference between comparison groups. For undesirable outcomes, a risk ratio that is less than one indicates that the intervention was effective in reducing the risk of that outcome. (Also called relative risk, RR.)

RR
See risk ratio.

Run-in period
A period before randomisation when participants are monitored but receive no treatment (or they sometimes all receive one of the study treatments, possibly in a blind fashion). The data from this stage of a trial are only occasionally of value but can serve a valuable role in screening out ineligible or non-compliant participants, in ensuring that participants are in a stable condition, and in providing baseline observations. A run-in period is sometimes called a washout period if treatments that participants were using before entering the trial are discontinued.



Safety
[of an intervention:] Refers to serious adverse effects, such as those that threaten life, require or prolong hospitalization, result in permanent disability, or cause birth defects. Indirect adverse effects, such as traffic accidents, violence, and damaging consequences of mood change, can also be serious.

Search strategy

  1. The methods used by a reviewer to identify trials. This includes handsearching relevant journals, searching electronic databases, contacting drug companies, other forms of personal contact and checking reference lists.
  2. The combination of terms used to identify studies in an electronic database such as MEDLINE.

Secondary outcome
An outcome used to evaluate additional effects of the intervention deemed a priori as being less important than the primary outcomes.

 Secondary study
A study of studies: a review of individual studies (each of which is called a primary study). A systematic review is a secondary study.

 Selection bias

  1. Systematic differences between comparison groups in prognosis or responsiveness to treatment. Random allocation with adequate concealment of allocation protects against selection bias. Other means of selecting who receives the intervention are more prone to bias because decisions may be related to prognosis or responsiveness to treatment.
  2. A systematic error in reviews due to how studies are selected for inclusion. Reporting bias is an example of this.
  3. A systematic difference in characteristics between those who are selected for study and those who are not. This affects external validity but not internal validity.

Sensitivity analysis
An analysis used to determine how sensitive the results of a trial or systematic review are to changes in how it was done. Sensitivity analyses are used to assess how robust the results are to uncertain decisions or assumptions about the data and the methods that were used.

Sequential trial
A randomised trial in which the data are analysed after each participant’s results become available, and the trial continues until a clear benefit is seen in favour of one of the comparison groups, or it is unlikely that any difference will emerge. The main advantage of sequential trials is that they are usually shorter than fixed size trials when there is a large difference in the effectiveness of the interventions being compared. Their use is restricted to conditions where the outcome of interest is known relatively quickly. In a group sequential trial, a limited number of interim analyses of the data are carried out at pre-specified times during recruitment and follow up, say 3-6 times in all.

Side effect
Any unintended effect of an intervention. Side effects are most commonly associated with pharmaceutical products, in which case they are related to the pharmacological properties of the drug at doses normally used for therapeutic purposes in humans. See also adverse effect.

Single blind
(Also called single masked). See blinding.

Single case report
See case study.

SMD
See standardised mean difference.

 Standard deviation
A measure of the spread or dispersion of a set of observations, calculated as the square root of the average squared difference from the mean value in the sample.

Standard error
The standard deviation of the sampling distribution of a statistic. Measurements taken from a sample of the population will vary from sample to sample. The standard error is a measure of the variation in the sample statistic over all possible samples of the same size. The standard error decreases as the sample size increases. (Also called SE.)
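The decrease in standard error with sample size can be seen in a short sketch using Python's standard library. The data and function name are invented for illustration.

```python
import statistics

def standard_error_of_mean(sample):
    """Standard error of the mean: the sample standard deviation
    divided by the square root of the sample size."""
    return statistics.stdev(sample) / len(sample) ** 0.5

small = [2, 4, 4, 4, 5, 5, 7, 9]
se_small = standard_error_of_mean(small)
se_large = standard_error_of_mean(small * 4)  # same values repeated 4 times
# se_large < se_small: with similar spread, a larger sample
# pins down the mean more precisely.
```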

Standard treatment
See conventional treatment.

 Standardised mean difference
The difference between two estimated means divided by an estimate of the standard deviation. It is used to combine results from studies using different ways of measuring the same concept, e.g. mental health. By expressing the effects as a standardised value, the results can be combined since they have no units.  Standardised mean differences are sometimes referred to as a d index. (Also called SMD.)
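The standardisation described above can be sketched as follows. This uses a Cohen's-d-style pooled standard deviation; pooling conventions vary, so treat it as one illustrative choice rather than the only definition, and the example scores are invented.

```python
import math
import statistics

def standardised_mean_difference(group1, group2):
    """Difference in means divided by the pooled standard deviation,
    giving a unitless effect size that can be combined across scales."""
    n1, n2 = len(group1), len(group2)
    mean_diff = statistics.mean(group1) - statistics.mean(group2)
    pooled_var = ((n1 - 1) * statistics.variance(group1)
                  + (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)
    return mean_diff / math.sqrt(pooled_var)

# Two hypothetical groups scored on the same questionnaire:
smd = standardised_mean_difference([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
# smd ≈ -0.63: group 1 scores about 0.63 standard deviations lower.
```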

Statistical power
See power.

 Statistically significant
A result that is unlikely to have happened by chance. The usual threshold for this judgement is that the results, or more extreme results, would occur by chance with a probability of less than 0.05 if the null hypothesis was true. Statistical tests produce a p-value used to assess this.

Stratification
The process by which groups are separated into mutually exclusive sub-groups of the population that share a characteristic, e.g. age group, sex, or socioeconomic status. It is possible to compare these different strata to see whether the effects of a treatment differ between the sub-groups. See also sub-group analysis.

 Stratified randomisation
A method used to ensure that equal numbers of participants with a characteristic thought to affect prognosis or response to the intervention will be allocated to each comparison group. For example, in a trial of women with breast cancer, it may be important to have similar numbers of pre-menopausal and post-menopausal women in each comparison group. Stratified randomisation could be used to allocate equal numbers of pre- and post-menopausal women to each treatment group. Stratified randomisation is achieved by performing a separate randomisation (often using random permuted blocks) within each stratum. See also minimisation.

 Sub-group analysis
An analysis in which the intervention effect is evaluated in a defined subset of the participants in a trial, or in complementary subsets, such as by sex or in age categories. Trial sizes are generally too small for sub-group analyses to have adequate statistical power. Comparison of sub-groups should be by test of interaction rather than by comparison of p-values. Sub-group analyses are also subject to the multiple comparisons problem. See also multiple comparisons.

 Surrogate endpoints
Outcome measures that are not of direct practical importance but are believed to reflect outcomes that are important; for example, blood pressure is not directly important to patients but it is often used as an outcome in clinical trials because it is a risk factor for stroke and heart attacks. Surrogate endpoints are often physiological or biochemical markers that can be relatively quickly and easily measured, and that are taken as being predictive of important clinical outcomes. They are often used when observation of clinical outcomes requires long follow-up. (Also called intermediary outcomes, surrogate outcomes.)

Surrogate outcomes
See surrogate endpoints.

Survey
See cross-sectional study.

 Survival analysis
The analysis of data that measure the time to an event e.g. death, next episode of disease. See also time to event.

Systematic error
See bias.

 Systematic review (synonym: systematic overview)
A review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review. Statistical methods (meta-analysis) may or may not be used to analyse and summarise the results of the included studies. See also Cochrane Review.



Test of association
A statistical test to assess whether the value of one variable is associated with (i.e. varies with) the value of another variable, or whether the presence or absence of a factor is more likely when a particular outcome is present. See also correlation.

 Time to event
A description of the data in studies where the analysis relates not just to whether an event occurs but also when. Such data are analysed using survival analysis. (Also called survival data.)

Tolerability
[of an intervention:] Usually refers to medically less important (that is, without serious or permanent sequelae) but unpleasant adverse effects of drugs. These include symptoms such as dry mouth and tiredness that can affect a person’s quality of life and willingness to continue the treatment. As these adverse effects usually develop early and are relatively frequent, randomised controlled trials may yield reliable data on their incidence.

Toxicity
The degree to which a medicine is poisonous, or how much of a medicine can be taken before it has a toxic effect.

Treatment
The process of intervening on people with the aim of enhancing health or life expectancy. Sometimes, and particularly in statistical texts, the word is used to cover all comparison groups, including placebo and no treatment arms of a controlled trial, and even interventions designed to prevent bad outcomes in healthy people rather than cure ill people. See also intervention, experimental intervention and control.

Treatment effect
See estimate of effect.


Trend
  1. A consistent movement across ordered categories, e.g. a change in the effect observed in studies grouped according to, for instance, intensity of treatment.
  2. Used loosely to refer to an association or possible effect that is not statistically significant. This usage should be avoided.

Trialist
Used to refer to a person conducting or publishing a controlled trial.

Type I error
A conclusion that a treatment works, when it actually does not work. The risk of a Type I error is often called alpha. In a statistical test, it describes the chance of rejecting the null hypothesis when it is in fact true. (Also called false positive.)

Type II error
A conclusion that there is no evidence that a treatment works, when it actually does work. The risk of a Type II error is often called beta. In a statistical test, it describes the chance of not rejecting the null hypothesis when it is in fact false. The risk of a Type II error decreases as the number of participants in a study increases. (Also called false negative.)



Unconfounded comparison
A comparison between two treatment groups that will give an unbiased estimate of the effect of treatment because of the study design. For a comparison to be unconfounded, the two treatment groups must be treated identically apart from the randomised treatment. For instance, to estimate the effect of heparin in acute stroke, a trial of heparin alone versus placebo would provide an unconfounded comparison. However, a trial of heparin alone versus aspirin alone provides a confounded comparison, because any difference between the groups could be due to the effect of heparin, the effect of aspirin, or both.

Uncontrolled trial
A clinical trial that has no control group.

Unit of allocation
The unit that is assigned to the alternative interventions being investigated in a trial. Most commonly, the unit will be an individual person but, in a cluster randomised trial, groups of people will be assigned together to one or the other of the interventions. In some other trials, different parts of a person (such as the left or right eye) might be assigned to receive different interventions. See also unit of analysis error.

Unit of analysis error
An error made in statistical analysis when the analysis does not take account of the unit of allocation. In some studies, the unit of allocation is not a person, but is instead a group of people, or parts of a person, such as eyes or teeth.  Sometimes the data from these studies are analysed as if people had been allocated individually. Using individuals as the unit of analysis when groups of people are allocated can result in overly narrow confidence intervals. In meta-analysis, it can result in studies receiving more weight than is appropriate.



Validity
The degree to which a result (of a measurement or study) is likely to be true and free of bias (systematic errors). Validity has several other meanings, usually accompanied by a qualifying word or phrase; for example, in the context of measurement, expressions such as ‘construct validity’, ‘content validity’ and ‘criterion validity’ are used. See also external validity, internal validity.

Variable
A factor that differs among and between groups of people. Variables include patient characteristics such as age, sex and smoking, or measurements such as blood pressure or depression score. There can also be treatment or condition variables, e.g. in a childbirth study, the length of time someone was in labour, and outcome variables. The set of values of a variable in a population or sample is known as a distribution.

Variance
A measure of the variation shown by a set of observations, equal to the square of the standard deviation. It is defined as the sum of the squares of deviations from the mean, divided by the number of observations minus one.
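The definition above translates directly into code. This is a minimal sketch; the function name and the blood-pressure readings are hypothetical.

```python
def sample_variance(observations):
    """Sum of squared deviations from the mean, divided by (n - 1)."""
    n = len(observations)
    mean = sum(observations) / n
    return sum((x - mean) ** 2 for x in observations) / (n - 1)

# Five hypothetical systolic blood pressure readings (mean = 130).
blood_pressures = [120, 130, 125, 135, 140]
print(sample_variance(blood_pressures))  # 62.5

# The standard deviation is the square root of the variance.
print(sample_variance(blood_pressures) ** 0.5)
```

Dividing by n - 1 rather than n gives the usual unbiased estimate of the population variance from a sample.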



Washout period
[In a cross-over trial:] The stage after the first treatment is withdrawn, but before the second treatment is started. The washout period aims to allow time for any active effects of the first treatment to wear off before the second treatment begins.

Weighted least squares regression
[In meta-analysis:] A meta-regression technique for estimating the parameters of a regression model, wherein each study's contribution to the sum of products of the measured variables (study characteristics) is weighted by the precision of that study's estimate of effect.
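The precision weighting described above can be illustrated in its simplest form: pooling study effect estimates into a single weighted average, which is equivalent to an intercept-only weighted least squares model. This is an illustrative sketch, not a full meta-regression; the function name and study data are hypothetical.

```python
def pooled_estimate(effects, standard_errors):
    """Inverse-variance weighted average of study effect estimates."""
    # Each study's weight is its precision: 1 / (standard error squared).
    weights = [1 / se ** 2 for se in standard_errors]
    weighted_sum = sum(w * e for w, e in zip(weights, effects))
    return weighted_sum / sum(weights)

# Three hypothetical studies: effect estimates with their standard errors.
effects = [0.5, 0.8, 0.3]
standard_errors = [0.1, 0.2, 0.1]
print(round(pooled_estimate(effects, standard_errors), 4))
```

Because weights are inversely proportional to the squared standard error, the larger, more precise studies (standard error 0.1) pull the pooled estimate towards their results, and the less precise study (standard error 0.2) contributes correspondingly less.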

Weighted mean difference 
See mean difference.
