Here are some common mistakes that researchers often make during a study.
| Treatment A | Treatment B |
| --- | --- |
| 78% (273/350) | 83% (289/350) |

| | Treatment A | Treatment B |
| --- | --- | --- |
| Small stones | 93% (81/87) | 87% (234/270) |
| Large stones | 73% (192/263) | 69% (55/80) |
| Both | 78% (273/350) | 83% (289/350) |

Treatment A has the higher success rate within each stone-size group, yet appears worse when the groups are combined: the classic reversal known as Simpson's paradox.
(5) Failure to acknowledge the “know your data” approach: Any analysis must be preceded by three preliminary analyses. The first is exploratory analysis, which reveals trends in the data and dictates the type of analysis to follow. The second is missing value analysis, which identifies the process behind any missing values in the data set and then rectifies the data set through procedures such as deletion or imputation. The third is outlier analysis, which identifies outliers and ascertains their causes. Outliers are an essential component of the data and sometimes are THE data in the data set.
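As an illustration, a minimal sketch of these three preliminary checks, assuming a pandas DataFrame `df` with a hypothetical numeric column `score`:

```python
import numpy as np
import pandas as pd

# Hypothetical data; in practice this would be the actual study data set.
df = pd.DataFrame({"score": np.random.default_rng(0).normal(50, 10, 200)})

# 1. Exploratory analysis: distributional summary to spot trends and skew.
print(df["score"].describe())

# 2. Missing value analysis: how many values are missing, and where.
print(df.isna().sum())
# Simple rectification (one option among many): mean imputation.
df["score"] = df["score"].fillna(df["score"].mean())

# 3. Outlier analysis: flag points outside 1.5 * IQR, then investigate why.
q1, q3 = df["score"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["score"] < q1 - 1.5 * iqr) | (df["score"] > q3 + 1.5 * iqr)]
print(outliers)
```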
(6) Failure to apply model parsimony: What is the use of an analysis if the resulting model is too complex to be understood and similar or better results can be obtained from a much simpler model?
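For instance, a hedged sketch of comparing a simpler and a more complex regression model by AIC/BIC with statsmodels (the variables and models are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
df["y"] = 2 * df["x1"] + rng.normal(size=100)   # x2 is actually irrelevant

simple = smf.ols("y ~ x1", data=df).fit()
complex_ = smf.ols("y ~ x1 + x2 + x1:x2 + I(x1**2)", data=df).fit()

# Prefer the simpler model unless the complex one clearly earns its keep.
print("simple  AIC/BIC:", simple.aic, simple.bic)
print("complex AIC/BIC:", complex_.aic, complex_.bic)
```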
(7) Failure to apportion the variance of a non-significant interaction effect: If an interaction effect is not found significant, then instead of consuming degrees of freedom for it, the variance due to it can be apportioned to the other main and interaction effects (or pooled into the error term). This may make other effects significant which were non-significant earlier.
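A minimal statsmodels sketch of the mechanics, assuming hypothetical factors `A` and `B`: fit the full two-way model, and if the A:B interaction is non-significant, refit without it so its degrees of freedom and variance go back into the error term:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "A": np.repeat(["a1", "a2"], 40),
    "B": np.tile(np.repeat(["b1", "b2"], 20), 2),
})
df["y"] = (df["A"] == "a2") * 1.0 + rng.normal(size=80)

full = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(full, typ=2))          # inspect the A:B interaction

# If the interaction p-value is large, drop it and retest the main effects.
reduced = smf.ols("y ~ C(A) + C(B)", data=df).fit()
print(sm.stats.anova_lm(reduced, typ=2))
```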
(8) Failure to adjust for covariates: If the effects of covariates are not adjusted for in the model, we may derive the erroneous conclusion that other effects (both main and interaction) are significant, when they appear significant solely because of variation in the covariates.
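A hedged ANCOVA-style sketch, assuming a hypothetical treatment factor `group` and covariate `age`: compare the conclusions with and without the covariate in the model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "group": np.repeat(["control", "treatment"], n // 2),
    # Hypothetical confound: the treatment group happens to be older on average.
    "age": np.concatenate([rng.normal(35, 8, n // 2), rng.normal(45, 8, n // 2)]),
})
df["y"] = 0.5 * df["age"] + rng.normal(size=n)   # outcome driven by age alone

unadjusted = smf.ols("y ~ C(group)", data=df).fit()
adjusted = smf.ols("y ~ C(group) + age", data=df).fit()

# The apparent "treatment effect" in the unadjusted model largely disappears
# once the covariate is included.
print(unadjusted.pvalues)
print(adjusted.pvalues)
```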
(9) Two-group designs are faulty in most cases: Two-group designs (even the venerable pre-post design) are faulty in the sense that they do not address all threats to internal validity, especially the threat posed by the interaction of testing with the treatment. Four-group designs such as the Solomon four-group design address these threats, but they are costly.
(10) Assumptions consigned to oblivion: The validity of a statistical test is highly sensitive to its stated assumptions, which must be checked before the test is applied.
(11) Failure to use transformations before applying non-parametric tests: If the assumptions of a test are not satisfied, transformations should be applied first so that the assumptions can be satisfied. Non-parametric tests should be used only as a last resort.
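A minimal scipy sketch of this workflow, with two hypothetical right-skewed samples: check normality, try a log transform, and fall back to the Mann-Whitney U test only if the transform does not help:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
g1 = rng.lognormal(mean=1.0, sigma=0.6, size=40)   # right-skewed data
g2 = rng.lognormal(mean=1.2, sigma=0.6, size=40)

def normal_enough(x, alpha=0.05):
    return stats.shapiro(x).pvalue > alpha

if normal_enough(g1) and normal_enough(g2):
    print(stats.ttest_ind(g1, g2))
elif normal_enough(np.log(g1)) and normal_enough(np.log(g2)):
    # The transformation satisfies the assumption, so stay parametric.
    print(stats.ttest_ind(np.log(g1), np.log(g2)))
else:
    # Last resort: non-parametric test.
    print(stats.mannwhitneyu(g1, g2, alternative="two-sided"))
```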
(12) Failure to use an appropriate sample size: This has drastic implications for the final analysis and interpretation: a non-significant result may appear significant, and vice versa. Analysis done without considering sample size is sheer waste! An appropriate power analysis must be used to determine the required sample size.
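For example, a hedged sketch using statsmodels' power module to solve for the per-group sample size of a two-sample t-test (the effect size, α and power values are illustrative assumptions):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed Cohen's d
                                   alpha=0.05,
                                   power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.0f}")   # roughly 64
```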
(13) Failure to use the appropriate correlation for the correlation matrix in analyses such as factor analysis: This again has drastic implications for the final analysis and interpretation, because the correlation matrix will have totally different values (for example, Pearson versus polychoric correlations for ordinal items).
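A small sketch of how the choice matters, with hypothetical 5-point Likert items: the Pearson and rank-based (Spearman) matrices already differ, and a polychoric matrix (not shown; it needs a specialised package) would differ further:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
latent = rng.normal(size=(300, 1)) + rng.normal(scale=0.5, size=(300, 3))
# Coarsen the continuous scores into 1..5 Likert categories.
items = pd.DataFrame(
    np.digitize(latent, np.quantile(latent, [0.2, 0.4, 0.6, 0.8])) + 1,
    columns=["item1", "item2", "item3"])

print(items.corr(method="pearson"))    # treats ordinal codes as interval
print(items.corr(method="spearman"))   # rank-based alternative
```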
(14) Failure to build powerful elements into the design (randomization, replication, blocking, orthogonality, factorial structure, etc.): This is one of the ways to improve the validity of an experiment. The general rule is “Block what you can, randomize what you cannot.”
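A minimal sketch of “block what you can, randomize what you cannot”, assuming a hypothetical experiment with three treatments replicated once within each of four blocks:

```python
import numpy as np

rng = np.random.default_rng(6)
treatments = ["T1", "T2", "T3"]
blocks = ["block1", "block2", "block3", "block4"]

# Randomized complete block design: every treatment appears in every block
# (replication), and the run order within each block is randomized.
layout = {block: list(rng.permutation(treatments)) for block in blocks}

for block, order in layout.items():
    print(block, order)
```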
(15) Failure to validate results (the objective is the population, not the sample): Usually an experiment is conducted, data are collected, and the results are interpreted for the sample; we simply forget about the population, where our actual interest lies.
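One hedged way to keep the population in view is to attach a resampling-based interval to the sample estimate rather than reporting the point value alone; a minimal bootstrap sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(loc=100, scale=15, size=60)   # hypothetical study sample

# Bootstrap the mean to express uncertainty about the population value.
boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
              for _ in range(5000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {sample.mean():.1f}, 95% bootstrap CI = ({lo:.1f}, {hi:.1f})")
```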
(16) Failure to use WLS or MLE instead of OLS where needed (i.e., when OLS does not yield BLUE estimates): Different estimation methods have their uses in different situations, and appropriate care needs to be taken in choosing among them.
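A hedged statsmodels sketch of switching from OLS to WLS when the error variance is not constant (heteroscedasticity), with hypothetical data in which the noise grows with x:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
x = np.linspace(1, 10, 200)
y = 3 + 2 * x + rng.normal(scale=x, size=x.size)   # variance grows with x
X = sm.add_constant(x)

ols_fit = sm.OLS(y, X).fit()
# Weight each observation by the inverse of its (assumed) error variance.
wls_fit = sm.WLS(y, X, weights=1.0 / x**2).fit()

print(ols_fit.params, ols_fit.bse)   # OLS is unbiased here but not efficient
print(wls_fit.params, wls_fit.bse)   # WLS down-weights the noisier observations
```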
(17) Taking an arbitrary α (.05 or .01) while simply ignoring β: α and β are related and cannot be analysed independently. Statistical power (1 − β) is an important parameter to consider.
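A small statsmodels sketch of the trade-off between α and β: for a fixed design, tightening α raises β (lowers power), so the two cannot be chosen independently (the effect size and group size are illustrative assumptions):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.01):
    power = analysis.power(effect_size=0.5, nobs1=30, alpha=alpha)
    print(f"alpha = {alpha:.2f} -> power = {power:.2f}, beta = {1 - power:.2f}")
```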
(18) Attenuation of ρ in concurrent validation: When the established (criterion) test has low reliability, the validity coefficient of the new test, which is being validated against the established test, gets attenuated. An appropriate correction for attenuation needs to be applied in this case.
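A hedged sketch of the standard correction for attenuation (Spearman's formula), which divides the observed coefficient by the square root of the product of the two reliabilities; the numbers are illustrative assumptions:

```python
import math

def disattenuate(r_observed: float, rel_new: float, rel_criterion: float) -> float:
    """Correct an observed correlation for unreliability in both measures."""
    return r_observed / math.sqrt(rel_new * rel_criterion)

# Observed concurrent validity 0.45, reliabilities 0.80 (new) and 0.70 (criterion).
print(round(disattenuate(0.45, 0.80, 0.70), 2))   # ~0.60
```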
(19) Failure to acknowledge that KP (Karl Pearson) correlation is a simple zero-order correlation, whereas stepwise regression works with part correlations: Using the wrong kind of correlation may flag the wrong predictors as significant.
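A minimal numpy sketch of the difference, with hypothetical correlated predictors: the zero-order (Pearson) correlation of `x2` with `y` can be sizeable even when its part (semipartial) correlation, which is closer to what stepwise selection effectively assesses, is near zero:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)     # x2 overlaps heavily with x1
y = 2.0 * x1 + rng.normal(size=n)            # only x1 drives y

def part_correlation(outcome, target, control):
    """Correlation of the outcome with the part of `target` not explained by `control`."""
    beta = np.polyfit(control, target, 1)
    residual = target - np.polyval(beta, control)
    return np.corrcoef(outcome, residual)[0, 1]

print("zero-order r(y, x2):", round(np.corrcoef(y, x2)[0, 1], 2))
print("part r(y, x2 | x1): ", round(part_correlation(y, x2, x1), 2))
```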
(20) Two-group designs are faulty in most cases: The error due to the testing-by-treatment interaction cannot be accounted for in two-group designs. Four-group, six-study designs (such as the Solomon design) are better but expensive.
(21) Failure to use IRT appropriately: Fitting an IRT model that does not fit the data and then calculating its parameters makes the whole exercise a waste.
(22) Failure to recognize that a Computer Adaptive Test is not the same as a computer-administered or computerized test: The tailored item selection in an adaptive test can result in reduced standard errors and greater precision with only a handful of properly selected items.
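A hedged sketch of what makes a test adaptive: under a 2PL IRT model, at each step the item with maximum Fisher information at the current ability estimate is selected (the item parameters and ability value below are made-up):

```python
import numpy as np

# Hypothetical 2PL item bank: discrimination a and difficulty b per item.
a = np.array([0.8, 1.5, 1.2, 2.0, 0.6])
b = np.array([-1.0, 0.0, 0.5, 1.2, 2.0])

def item_information(theta, a, b):
    """Fisher information of 2PL items at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

theta_hat = 0.4                       # current provisional ability estimate
info = item_information(theta_hat, a, b)
next_item = int(np.argmax(info))      # administer the most informative item
print(info.round(3), "-> next item index:", next_item)
```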
(23) Failure to incorporate Taguchi method designs (TQM): The use of orthogonal arrays in fractional factorial designs, for efficient handling of the desired main and interaction effects in line with the quadratic loss function, is increasingly encouraged.
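A small sketch of the idea, using the standard L4(2^3) orthogonal array: four runs cover three two-level factors with every pair of columns balanced, instead of the eight runs a full factorial would need (the factor names are hypothetical):

```python
import numpy as np

# Taguchi L4 orthogonal array: 4 runs, 3 factors at 2 levels each.
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])

factors = ["temperature", "pressure", "time"]   # hypothetical process factors
for run, levels in enumerate(L4, start=1):
    print(f"run {run}:", dict(zip(factors, levels)))
```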
(24) Failure to incorporate response surface methodology (RSM) designs: RSM accommodates continuous factor levels within a feasible region and optimization over an irregular response surface, which is not possible in traditional factorial designs.
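A minimal numpy sketch of the response-surface idea with two hypothetical continuous factors: fit a second-order (quadratic) model to the experimental runs and solve for the stationary point of the fitted surface:

```python
import numpy as np

rng = np.random.default_rng(10)
x1 = rng.uniform(-2, 2, 30)
x2 = rng.uniform(-2, 2, 30)
# Hypothetical response with a maximum near (1.0, -0.5).
y = 10 - (x1 - 1.0)**2 - 2 * (x2 + 0.5)**2 + rng.normal(scale=0.2, size=30)

# Second-order model: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = np.array([b[1], b[2]])
x_star = np.linalg.solve(H, -g)
print("estimated optimum factor settings:", x_star.round(2))   # ~ (1.0, -0.5)
```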