Which measure of central tendency is best?
The mean, median and mode are all valid measures of central tendency, but some measures are more appropriate than others under different conditions.
In the following sections, we will look at the mean, mode and median, learn how to calculate them, and see under what conditions each is most appropriate. The mean or average is the most popular and well-known measure of central tendency. It can be used with both discrete and continuous data, although it is most often used with continuous data (see our Types of Variable guide for data types).
The mean is equal to the sum of all the values in the data set divided by the number of values in the data set. In symbols, the sample mean is x̄ = (Σx) / n, where Σx is the sum of the values and n is the number of values. You may have noticed that this formula refers to the sample mean. So, why have we called it a sample mean?
This is because, in statistics, samples and populations have very different meanings, and these differences are very important, even though, in the case of the mean, they are calculated in the same way. The mean is essentially a model of your data set: a single value that summarises it.
You will notice, however, that the mean is not often one of the actual values that you have observed in your data set. One of its important properties, though, is that it minimises error in the prediction of any one value in your data set: it is the value that produces the lowest amount of error when used to predict all other values in the data set. Another important property of the mean is that its calculation includes every value in your data set.
In addition, the mean is the only measure of central tendency where the sum of the deviations of each value from the mean is always zero. The mean has one main disadvantage: it is particularly susceptible to the influence of outliers, that is, values that are unusual compared with the rest of the data set because they are especially small or large.
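Both properties (zero-sum deviations, and minimising squared prediction error) are easy to verify numerically; here is a small sketch using Python's standard library, with invented data:

```python
from statistics import mean

data = [12, 15, 15, 16, 18, 90]  # invented values
m = mean(data)

# Property 1: deviations from the mean sum to (numerically) zero.
deviation_sum = sum(x - m for x in data)

def sse(center, values):
    """Sum of squared deviations of the values from a candidate centre."""
    return sum((x - center) ** 2 for x in values)

# Property 2: no other candidate centre produces a smaller total squared error.
assert abs(deviation_sum) < 1e-9
assert all(sse(m, data) <= sse(c, data) for c in (10, 15, 20, 50))
```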
For example, consider the wages of staff at a factory below:

Staff:   1    2    3    4    5    6    7    8    9    10
Salary:  15k  18k  16k  14k  15k  15k  12k  17k  90k  95k

The mean salary for these ten staff is 30.7k, yet eight of them earn 18k or less: the mean is being skewed by the two large salaries. Therefore, in this situation, we would like a better measure of central tendency. As we will find out later, the median would be a better measure of central tendency in this situation. Generally, the test statistic is calculated as the pattern in your data (i.e. the correlation between variables or the difference between groups) divided by the variance in the data (i.e. the standard deviation).
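Using Python's standard statistics module on the salary data above, the pull of the two outliers is easy to see (salaries in thousands):

```python
from statistics import mean, median

salaries = [15, 18, 16, 14, 15, 15, 12, 17, 90, 95]  # in $1000s

print(mean(salaries))    # 30.7 -- pulled upward by the two large salaries
print(median(salaries))  # 15.5 -- a more typical salary for this staff
```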
Linear regression most often uses mean squared error (MSE) to calculate the error of the model. MSE is calculated by averaging the squared differences between the observed values yᵢ and the predicted values ŷᵢ: MSE = (1/n) Σ (yᵢ − ŷᵢ)². Linear regression fits a line to the data by finding the regression coefficients that result in the smallest MSE. Simple linear regression is a regression model that estimates the relationship between one independent variable and one dependent variable using a straight line.
Both variables should be quantitative. For example, the relationship between temperature and the expansion of mercury in a thermometer can be modeled using a straight line: as temperature increases, the mercury expands. This linear relationship is so reliable that we can use mercury thermometers to measure temperature. A regression model is a statistical model that estimates the relationship between one dependent variable and one or more independent variables using a line (or a plane, in the case of two or more independent variables).
A regression model can be used when the dependent variable is quantitative, except in the case of logistic regression, where the dependent variable is binary.
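As a sketch of how this works, the following fits a least-squares line (the coefficients that minimise the MSE) using only the standard library; the temperature and expansion figures are invented for illustration:

```python
from statistics import mean

def fit_line(xs, ys):
    """Closed-form least-squares estimates of slope and intercept,
    i.e. the coefficients that minimise the mean squared error."""
    x_bar, y_bar = mean(xs), mean(ys)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = y_bar - slope * x_bar
    return slope, intercept

def mse(xs, ys, slope, intercept):
    """MSE = (1/n) * sum of squared residuals."""
    return sum((y - (slope * x + intercept)) ** 2
               for x, y in zip(xs, ys)) / len(xs)

temps = [10, 15, 20, 25, 30]           # hypothetical temperatures (deg C)
expansion = [1.9, 3.1, 3.9, 5.1, 6.0]  # hypothetical mercury expansion (mm)

b1, b0 = fit_line(temps, expansion)
error = mse(temps, expansion, b1, b0)
```

Any other slope or intercept would give a larger MSE on this data; that is what "least squares" means.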
A t-test should not be used to measure differences among more than two groups, because the error structure for a t-test will underestimate the actual error when many groups are being compared.
A one-sample t-test is used to compare a single population to a standard value (for example, to determine whether the average lifespan in a specific town differs from the country average).
A paired t-test is used to compare a single population before and after some experimental intervention, or at two different points in time (for example, measuring student performance on a test before and after being taught the material). A t-test measures the difference in group means divided by the pooled standard error of the two group means.
In this way, it calculates a number (the t-value) illustrating the magnitude of the difference between the two group means being compared, and estimates the likelihood that this difference exists purely by chance (the p-value). Your choice of t-test depends on whether you are studying one group or two groups, and whether you care about the direction of the difference in group means.
If you are studying one group, use a paired t-test to compare the group mean over time or after an intervention, or use a one-sample t-test to compare the group mean to a standard value.
If you are studying two groups, use a two-sample t-test. If you want to know only whether a difference exists, use a two-tailed test. If you want to know whether one group mean is greater or less than the other, use a one-tailed test (left-tailed or right-tailed). A t-test is a statistical test that compares the means of two samples. It is used in hypothesis testing, with a null hypothesis that the difference in group means is zero and an alternate hypothesis that the difference in group means is different from zero.
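A from-scratch sketch of the two-sample t-value described above, using made-up group data; in practice you would typically use a library routine such as scipy.stats.ttest_ind, which also reports the p-value:

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled two-sample t-value: the difference in group means
    divided by the pooled standard error of the two means."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    std_error = sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean(a) - mean(b)) / std_error

group1 = [1, 2, 3]  # hypothetical measurements
group2 = [4, 5, 6]

t_value = two_sample_t(group1, group2)  # negative: group1's mean is lower
```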
Statistical significance is a term used by researchers to state that it is unlikely their observations could have occurred under the null hypothesis of a statistical test. Significance is usually denoted by a p-value, or probability value. Statistical significance is arbitrary: it depends on the threshold, or alpha value, chosen by the researcher. When the p-value falls below the chosen alpha value, then we say the result of the test is statistically significant.
A test statistic is a number calculated by a statistical test. It describes how far your observed data are from the null hypothesis of no relationship between variables or no difference among sample groups. The test statistic tells you how different two or more groups are from the overall population mean, or how different a linear slope is from the slope predicted by a null hypothesis. Different test statistics are used in different statistical tests.
The measures of central tendency you can use depend on the level of measurement of your data. Nominal and ordinal are two of the four levels of measurement. Nominal-level data can only be classified, while ordinal-level data can be classified and ordered. Ordinal data thus has two characteristics: the values can be sorted into categories, and the categories have a natural rank order.
If your confidence interval for a difference between groups includes zero, that means that if you run your experiment again you have a good chance of finding no difference between groups.
If your confidence interval for a correlation or regression includes zero, that means that if you run your experiment again there is a good chance of finding no correlation in your data. In both of these cases, you will also find a high p-value when you run your statistical test, meaning that your results could have occurred under the null hypothesis of no relationship between variables or no difference between groups.
If you want to calculate a confidence interval around the mean of data that is not normally distributed, you have two choices: find a distribution that matches the shape of your data and use that distribution to calculate the interval, or perform a transformation on your data to make it fit a normal distribution. The standard normal distribution, also called the z-distribution, is a special normal distribution where the mean is 0 and the standard deviation is 1.
Any normal distribution can be converted into the standard normal distribution by turning the individual values into z-scores. In a z-distribution, z-scores tell you how many standard deviations away from the mean each value lies. The z-score and t-score (also known as the z-value and t-value) show how many standard deviations away from the mean of the distribution you are, assuming your data follow a z-distribution or a t-distribution.
These scores are used in statistical tests to show how far from the mean of the predicted distribution your statistical estimate is. If your test produces a z-score of 2.5, for example, this means that your estimate is 2.5 standard deviations from the predicted mean. The predicted mean and distribution of your estimate are generated by the null hypothesis of the statistical test you are using.
The more standard deviations away from the predicted mean your estimate is, the less likely it is that the estimate could have occurred under the null hypothesis.
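As a small illustration of z-scores (the data values are invented):

```python
from statistics import mean, pstdev

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative values

mu = mean(data)       # 5
sigma = pstdev(data)  # 2 (population standard deviation)

# Each z-score says how many standard deviations a value lies from the mean.
z_scores = [(x - mu) / sigma for x in data]
# The value 9 lies (9 - 5) / 2 = 2 standard deviations above the mean.
```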
To calculate the confidence interval, you need to know: the point estimate you are constructing the interval around, the critical value of the test statistic at your chosen confidence level, the standard deviation of the sample, and the sample size. Then you can plug these components into the confidence interval formula that corresponds to your data.
The formula depends on the type of estimate (e.g. a mean or a proportion) and on the distribution of your data. The confidence level is the percentage of times you expect to get close to the same estimate if you run your experiment again or resample the population in the same way. The confidence interval consists of the actual upper and lower bounds of the estimate you expect to find at a given level of confidence.
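As a sketch, here is a 95% confidence interval around a sample mean, assuming the sample is large enough that the normal critical value 1.96 is appropriate (the data are invented):

```python
from math import sqrt
from statistics import mean, stdev

sample = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative sample
confidence_z = 1.96                # critical value for 95% confidence

point_estimate = mean(sample)                  # the estimate: the sample mean
std_error = stdev(sample) / sqrt(len(sample))  # sample SD / sqrt(n)

# Lower and upper bounds of the 95% confidence interval.
lower = point_estimate - confidence_z * std_error
upper = point_estimate + confidence_z * std_error
```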
Nominal data is data that can be labelled or classified into mutually exclusive categories within a variable. These categories cannot be ordered in a meaningful way. For example, for the nominal variable of preferred mode of transportation, you may have the categories of car, bus, train, tram or bicycle.
Statistical tests commonly assume that: the observations are independent, the variances within the groups being compared are similar (homogeneity of variance), and the data are normally distributed. If your data do not meet these assumptions, you might still be able to use a nonparametric statistical test, which has fewer requirements but also makes weaker inferences.
Measures of central tendency help you find the middle, or the average, of a data set. Some variables have fixed levels. For example, gender and ethnicity are always nominal-level data because they cannot be ranked. However, for other variables, you can choose the level of measurement. For example, income is a variable that can be recorded on an ordinal or a ratio scale: on an ordinal scale, you could code income into ordered brackets (e.g. low, medium, high); on a ratio scale, you could record the exact amount earned.
If you have a choice, the ratio level is always preferable because you can analyze data in more ways. The higher the level of measurement, the more precise your data is. The level at which you measure a variable determines how you can analyze your data. Depending on the level of measurement , you can perform different descriptive statistics to get an overall summary of your data and inferential statistics to see if your results support or refute your hypothesis.
Levels of measurement tell you how precisely variables are recorded. There are four levels of measurement, which can be ranked from low to high: nominal, ordinal, interval and ratio. The p-value only tells you how likely the data you have observed are to have occurred under the null hypothesis. The alpha value, or the threshold for statistical significance, is arbitrary: which value you use depends on your field of study. In most cases, researchers use an alpha of 0.05 (5%).
P-values are usually automatically calculated by the program you use to perform your statistical test. They can also be estimated using p-value tables for the relevant test statistic. P-values are calculated from the null distribution of the test statistic.
They tell you how often a test statistic is expected to occur under the null hypothesis of the statistical test, based on where it falls in the null distribution. If the test statistic is far from the mean of the null distribution, then the p-value will be small, showing that the test statistic is not likely to have occurred under the null hypothesis.
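For a test statistic that follows a z-distribution, this relationship can be made concrete with the standard normal CDF; a sketch using only the standard library:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def two_tailed_p(z):
    """Probability of a test statistic at least this far from 0
    in either direction, under the null distribution."""
    return 2 * (1 - normal_cdf(abs(z)))

# A z-score far from the centre of the null distribution gives a small p-value.
p = two_tailed_p(2.5)  # roughly 0.012: unlikely under the null hypothesis
```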
A p-value, or probability value, is a number describing how likely it is that your data would have occurred under the null hypothesis of your statistical test. You can choose the right statistical test by looking at what type of data you have collected and what type of relationship you want to test. The test statistic will change based on the number of observations in your data, how variable your observations are, and how strong the underlying patterns in the data are.
For example, if one data set has higher variability while another has lower variability, the first data set will produce a test statistic closer to the null hypothesis, even if the true correlation between two variables is the same in either data set. If someone in the street asked you "How many legs do people have?", you would answer with the most common value: two. The mode is often "the normal thing". If, however, you were in a position where you had to plan a stock of lower-limb prostheses for a country far away, you would want to multiply the mean by the population size.
In many cases where you would like to assess a mean from a small sample but are afraid of outliers, the median will be a better estimator. So the question of the best measure is not a universal mathematical question, nor does it necessarily depend on what you measure; it depends on whatever real-world problem you are trying to tackle.
In my opinion, the answer should depend on the shape of your distribution. If you have a few outliers, a skewed distribution, or a distribution that does not have a well-defined mean, you may use the median. If you have a multi-modal distribution, you may use the mode. All these estimators are essentially different and provide different information about your underlying random variable. Another thing worth discussing, besides the deep underlying differences in what these estimators mean, is the efficiency of the estimation and the breakdown point.
The mean is the most efficient estimator (your estimate will be as close as possible to the true value for the sample size that you have), the median is less efficient but more robust to outliers, and the Hodges–Lehmann estimator is somewhere in between. However, there are some situations where the other measures of central tendency are preferred. The median is preferred to the mean[3] when the data are skewed or contain extreme values. The mode is the preferred measure when data are measured on a nominal scale.
The geometric mean is the preferred measure of central tendency when data are measured on a logarithmic scale. Source of Support: Nil. Conflict of Interest: None declared.
J Pharmacol Pharmacother. Assistant Editor, JPP. This is an open-access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.
Disadvantages of the median: it does not take into account the precise value of each observation and hence does not use all the information available in the data.

MODE
Mode is defined as the value that occurs most frequently in the data.
Advantages: it is the only measure of central tendency that can be used for data measured on a nominal scale.
Disadvantages: it is not used in statistical analysis, as it is not algebraically defined, and the fluctuation in the frequency of observations is greater when the sample size is small.
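Python's statistics module computes the mode directly, and its multimode function handles the small-sample fluctuation mentioned above, where several values tie for most frequent (example data invented):

```python
from statistics import mode, multimode

# Mode works on nominal data, such as preferred mode of transportation.
transport = ["car", "bus", "car", "train", "car", "bicycle"]
print(mode(transport))  # "car" occurs most frequently

# With a tie, multimode returns every most-frequent value.
print(multimode(["red", "red", "blue", "blue", "green"]))  # ['red', 'blue']
```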