A hypothesis is a statement about a population parameter developed for the purpose of testing. The terms hypothesis testing and testing a hypothesis are used interchangeably. Hypothesis testing starts with a statement, or assumption, about a population parameter. Statistical hypothesis testing is one of the most important techniques in statistical inference. Several different test statistics are used in hypothesis testing; four of them are discussed below:
• The chi-square test.
• The F test (used in ANOVA, analysis of variance).
• The z test or large sample test.
• The t test or small sample test.
The Z-test is a statistical test used in inference which determines if the difference between a sample mean and the population mean is large enough to be statistically significant, that is, if it is unlikely to have occurred by chance.
The Z-test is widely used in statistical analysis and business research. Usually when a survey is carried out, a sample of the population is interviewed, and the number of people actually interviewed is much smaller than the population under study. Researchers carry out a Z-test to determine whether the results of the survey can be considered representative of the entire population.
When can we do a z-test?
➢ When the data points are independent of each other.
➢ When the sample size is greater than 30; the z-test is preferable for large samples.
➢ When the sample size is small, the distribution should be normal; if the sample size is greater than 30, the data do not have to be normally distributed.
➢ When the variances of the samples are the same.
➢ When all individuals are selected at random from the population.
➢ When all individuals have an equal chance of being selected.
➢ Sample sizes should be as equal as possible, but some differences are allowed.
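The one-sample z-test described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical numbers (sample mean 103, known population mean 100, known population standard deviation 10, n = 50), not data from any real survey:

```python
import math
from scipy.stats import norm

def one_sample_z_test(sample_mean, pop_mean, pop_std, n):
    """Two-sided one-sample z-test; returns (z statistic, p-value).
    Assumes the population standard deviation pop_std is known."""
    z = (sample_mean - pop_mean) / (pop_std / math.sqrt(n))
    p = 2 * norm.sf(abs(z))  # two-sided p-value from the standard normal
    return z, p

# Hypothetical survey: 50 respondents, sample mean 103,
# tested against a population mean of 100 with sigma = 10.
z, p = one_sample_z_test(103, 100, 10, 50)
print(f"z = {z:.3f}, p = {p:.4f}")
```

Since p is below 0.05, the difference between the sample mean and the population mean would be judged statistically significant at the 5% level in this hypothetical example.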
A t-test is any statistical hypothesis test in which the test statistic follows a Student’s t distribution if the null hypothesis is supported. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. When the scaling term is unknown and is replaced by an estimate based on the data, the test statistic follows a Student’s t distribution.
The t-test is a statistical test involving the means of normal populations with unknown standard deviations, based on small samples. Its test statistic is t = (x̄ − μ) / (s / √n), where x̄ is the sample mean, μ is the population mean, s is the sample standard deviation, and n is the number of individuals in the sample.
When can we do a t-test?
➢ The data sets should be independent of each other, except in the case of the paired-sample t-test.
➢ When the sample size is less than 30; the t-test is preferable for small samples.
➢ When the population standard deviation is unknown.
➢ When the distributions are normal, for both the equal- and unequal-variance t-tests.
➢ When the variances of the samples are the same, for the equal-variance t-test.
➢ When all individuals are selected at random from the population.
➢ When all individuals have an equal chance of being selected.
➢ Sample sizes should be as equal as possible, but some differences are allowed.
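The equal-variance two-sample t-test listed above can be run with scipy. The two small samples here (n = 7 each, population standard deviation unknown) are hypothetical numbers chosen only for illustration:

```python
from scipy import stats

# Hypothetical small samples (n < 30, population sigma unknown).
group_a = [12.1, 11.8, 13.0, 12.5, 11.9, 12.7, 12.3]
group_b = [11.2, 11.5, 10.9, 11.8, 11.1, 11.6, 11.3]

# Equal-variance (pooled) two-sample t-test of the difference in means.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

Setting equal_var=False instead would give Welch's t-test, the unequal-variance version mentioned in the assumptions above.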
An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fit to a data set, in order to identify the model that best fits the population from which the data were sampled.
A great variety of hypotheses in applied statistics are tested by F-tests. Among these are:
• The hypothesis that the means of multiple normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known hypothesis tested by means of an F-test, and the simplest problem in the analysis of variance.
• The hypothesis that the standard deviations of two normally distributed populations are equal, and thus that they are of comparable origin.
In many cases, the F-test statistic can be calculated through a straightforward process. Two regression models are required, one of which constrains one or more of the regression coefficients according to the null hypothesis. The test statistic is then based on a modified ratio of the sums of squared residuals of the two models.
When can we do an F-test?
➢ When the populations have equal standard deviations.
➢ When the populations being sampled are normally distributed.
➢ When the samples are randomly selected and independent.
➢ When the data are measured on an interval scale.
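The variance-ratio form of the F-test, which tests whether two normal populations have equal standard deviations, can be sketched as follows. The two samples are hypothetical, and the helper function name is mine, not a scipy API:

```python
import numpy as np
from scipy.stats import f

def variance_ratio_f_test(x, y):
    """Two-sided F-test for equality of two population variances.
    Assumes both samples come from normally distributed populations."""
    x, y = np.asarray(x), np.asarray(y)
    f_stat = np.var(x, ddof=1) / np.var(y, ddof=1)  # ratio of sample variances
    dfn, dfd = len(x) - 1, len(y) - 1               # numerator/denominator df
    # Two-sided p-value: double the smaller tail of the F distribution.
    p = 2 * min(f.cdf(f_stat, dfn, dfd), f.sf(f_stat, dfn, dfd))
    return f_stat, p

# Hypothetical samples: a has visibly smaller spread than b.
a = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]
b = [10.0, 12.1, 8.7, 11.5, 9.2, 10.9]
f_stat, p = variance_ratio_f_test(a, b)
print(f"F = {f_stat:.3f}, p = {p:.4f}")
```

Note that this test is quite sensitive to the normality assumption listed above, which is why the F and t tests should only be used when the parent populations are close to normal.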
Chi-Square Test: A chi-square test is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true, or any in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-square distribution as closely as desired by making the sample size large enough.
It is a statistical method to test whether two (or more) variables are (1) independent or (2) homogeneous. The chi-square test for independence examines whether knowing the value of one variable helps to estimate the value of another variable. The chi-square test for homogeneity examines whether two populations have the same proportion of observations with a common characteristic. Though the formula is the same for both tests, the underlying logic and sampling procedures differ.
When can we do a chi-square test?
➢ When the sampling method is simple random sampling.
➢ When the measured variables are independent.
➢ When the values of the independent and dependent variables are mutually exclusive.
➢ When the observed frequencies are not too small.
➢ When the sum of the observed frequencies equals the sum of the expected frequencies.
➢ When the data are in frequency form (nominal data) or higher.
➢ The basis for the distribution must be decided on before the data are collected.
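The chi-square test for independence on a contingency table can be run directly with scipy. The 2×2 table of counts below is hypothetical, invented purely to illustrate the mechanics:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table of observed frequencies,
# e.g. rows = two groups of respondents, columns = two answer categories.
observed = [[30, 10],
            [20, 40]]

# Returns the chi-square statistic, p-value, degrees of freedom,
# and the table of expected frequencies under independence.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```

Note that chi2_contingency also returns the expected frequencies, which lets you check the assumption above that the observed frequencies are not too small (a common rule of thumb is expected counts of at least 5 per cell).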
Similarities among the z-test, t-test, F-test and chi-square test: There are some similarities among the z-test, t-test, F-test and chi-square test, which are given below:
➢ The z-test, t-test, F-test and chi-square test are all used as test statistics in hypothesis testing.
➢ The z, t, F and chi-square distributions are all continuous distributions.
➢ The t distribution approaches the standard normal distribution as the sample size increases. These distributions are asymptotic, meaning the curve approaches but never touches the X-axis.
➢ All of these tests are carried out on the basis of a chosen level of significance.
➢ The z and t distributions are bell-shaped and symmetrical.
➢ The F and chi-square statistics are non-negative.
➢ The F and chi-square distributions are positively skewed.
➢ The F and t tests should only be used when the parent populations are close to normally distributed.
➢ The F-test and chi-square test are based on degrees of freedom.
Dissimilarities among the z-test, t-test, F-test and chi-square test: There are some dissimilarities among the z-test, t-test, F-test and chi-square test, which are given below:
➢ A t-test is appropriate when the sample size is less than 30, while a z-test is appropriate when the sample size is greater than 30.
➢ We can do a z-test when the population standard deviation is known, while we do a t-test when the population standard deviation is unknown.
➢ The F-test compares the variances of two distributions, while the t-test compares their means.
➢ The chi-square and F statistics are non-negative, while the z and t statistics can be negative.
➢ The z-test and t-test can test one- and two-sided hypotheses, whereas the F-test and chi-square test, as usually applied, test one-sided hypotheses.
➢ The chi-square test is used to test the standard deviation of a single population, while the F-test is used to compare the standard deviations of two populations.