Researchers assign significant importance to the differences between psychological study types. Of particular concern are the distinctions some researchers draw between quasi-experimental and true experimental studies and between qualitative and quantitative studies. Because the research community relies on these methods to conduct investigations, it is critical to examine how these study types actually differ and to evaluate what impact the supposed distinctions have on the research process. This paper reviews the vital differences and similarities of these study types and demonstrates the pseudo-importance of the distinctions.
True Experimental Studies versus Quasi-Experimental Studies
The two primary types of research design are true experimental and quasi-experimental. Both designs are generally used to determine cause and effect. Both involve a treatment, an outcome, units of assignment, and a comparison from which change can be inferred and attributed to the treatment (Morgan 2000). The designs are both characterized by manipulation and control. The researcher manipulates the independent variable by administering it to some subjects and withholding it from others, then observes any effect on the dependent variable due to the manipulation. Both designs employ a control group as a basis for evaluating the performance of the experimental group: the control group consists of the subjects who do not receive the experimental treatment, and the experimental group consists of the subjects who do. Both designs share the goal of revealing a treatment effect caused by the manipulation of the independent variable.
The use of randomization is the primary attribute separating these research designs. Randomization refers to the assignment of subjects to groups on a random basis and is a feature of true experimental designs. This practice ensures that every subject has an equal chance of being assigned to the experimental or the control group(s). The intent is to equalize the groups by randomly distributing the sources of potential bias that exist among subjects; the probability of obtaining equivalent groups increases with sample size. Equivalent groups minimize threats to internal validity. Internal validity describes the extent to which the experimental manipulation, rather than extraneous influences, can account for the results, changes, or group differences (Kazdin 2002). Threats to internal validity include maturation, history, and changes in instrumentation.
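The logic of random assignment can be illustrated with a short simulation (a minimal sketch in Python; the subject pool, the age covariate, and the sample size are hypothetical):

```python
import random
import statistics

def randomly_assign(subjects):
    """Shuffle the subject pool and split it into two equal-sized
    groups, so every subject has the same chance of either assignment."""
    pool = list(subjects)
    random.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical covariate (age in years) for a pool of 1000 subjects.
random.seed(1)
ages = [random.randint(18, 65) for _ in range(1000)]

treatment, control = randomly_assign(ages)

# With a large sample, the two group means tend to be close,
# illustrating how randomization equalizes groups on average.
mean_difference = abs(statistics.mean(treatment) - statistics.mean(control))
```

Note that the equalization is probabilistic, not guaranteed: with a small pool, a single shuffle can still produce groups that differ noticeably on the covariate.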
Quasi-experimental studies have one or more facets of the design that cannot be randomized, such as the assignment of subjects to conditions or the assignment of conditions to settings. These limitations are usually due to bureaucratic, financial, and logistical constraints. Many researchers insist that a true experimental design is the only type of research from which strong and reliable inferences can be drawn. This camp suggests that the lack of randomization in a quasi-experimental study greatly increases the threats to internal validity, reduces the effectiveness of the study, and therefore significantly separates this design from the true experimental design.
However, further analysis reveals that this interpretation is not entirely accurate. The two research designs are in fact more similar than dissimilar. Both designs explore the effect of the independent variable on the dependent variable, and both are able to control for threats to internal validity. In the true experimental design, threats to internal validity are reduced in a systematic fashion via the randomization process. In a quasi-experimental design, the researcher must seek out ways to make competing interpretations of the results implausible. The researcher can significantly reduce threats to internal validity by various methods, including the collection of a sufficiently rich data set and the use of econometric modeling techniques (Heckman 1987). The researcher should use multiple and heterogeneous subjects and assess them on multiple occasions with replicable measures. A researcher can reduce selection bias by using criteria such as age, education, marital status, and family income to form comparison groups that are virtually equivalent to treatment groups (Heckman 1989), and can also use pretests to assess and establish group equivalence. While these approaches are not systematic, they can greatly reduce threats to internal validity.

Additionally, randomization does not guarantee a reduction or elimination of threats to internal validity. A true experimental design can be flawed in spite of random assignment. Groups of randomly assigned subjects can still differ on many relevant or irrelevant variables that interfere with conclusions regarding the effect of the intervention on the dependent variable. It is very difficult to select a random group of subjects from a population unless the criteria are narrowly defined, and even then, it is unlikely that a researcher has access to an entire population from which to systematically select subjects.
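The matching approach described above can be sketched as follows (hypothetical data; a simple greedy nearest-neighbor match on age stands in for the richer criteria, such as education and income, that a real study would use):

```python
# Hypothetical subjects as (id, age) pairs: a treated group and a
# larger untreated pool from which a comparison group is formed.
treated = [(1, 25), (2, 40), (3, 33)]
pool = [(10, 24), (11, 52), (12, 39), (13, 31), (14, 60)]

def match_on_age(treated, pool):
    """Greedy one-to-one matching: for each treated subject, pick the
    closest-aged control who has not already been matched."""
    available = list(pool)
    matches = {}
    for subject_id, age in treated:
        best = min(available, key=lambda control: abs(control[1] - age))
        matches[subject_id] = best
        available.remove(best)
    return matches

pairs = match_on_age(treated, pool)
```

The resulting pairs approximate the group equivalence that randomization would otherwise provide, though only on the variables actually used for matching.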
Often the population available is not representative of the entire population. For example, college students are used in high proportion because of their availability and willingness to participate in research studies. Volunteer participants are self-selected and may therefore differ in some way from the general population.
It is impossible for researchers to control for every single variable that may impact a research study, regardless of the type of design. Both the quasi-experimental and the true experimental design can yield results influenced by extraneous variables. It is probably safe to conclude that true experimental designs are superior in terms of controlling for selection bias. However, the tools available to the researcher for minimizing threats to internal validity in a quasi-experimental study can offset the lack of randomization. The difficulty of obtaining a true random sample of a population, coupled with the inability to control for all relevant variables, mitigates the argument that the true experimental design is vastly different from the quasi-experimental design.
Indeed, the distinction between these two research designs is inconsequential. Both designs involve treatment, outcome, units of assignment, and comparison from which change can be inferred and attributed to the treatment. Both designs have limitations that require careful attention to variables that can impact the interpretation of results. Both designs are equally appropriate for research investigations as long as the unique challenges presented by both are adequately addressed by the researcher.
Quantitative Studies versus Qualitative Studies
Many researchers regard quantitative and qualitative designs as two distinct research philosophies. The quantitative paradigm measures the social world to test hypotheses and to predict and control behavior (Poggenpoel 2001). This design is objective and based on hard scientific evidence. The focus of quantitative studies is quite narrow, and the design is characterized by control and precision. Quantitative studies identify the influence of a variable on an outcome of interest. The subjective interpretation of the researcher is not reflected in the research process. Data are analyzed systematically and statistically, and are collected under contrived conditions in a laboratory setting. The qualitative paradigm, by contrast, aims to understand social life and the meaning people attach to things (Casebeer 1997). It is an interpretative approach characterized by subjective analysis of data. Information is collected in a naturalistic environment by an investigator who attaches meaning and substance to the data. Qualitative studies focus on the impact of unique experiences, actions, and conditions, and are based on the cumulative discovery, description, and understanding of the subject matter of interest.

Despite these differences, quantitative and qualitative research designs share many common aspects and goals. Both designs are characterized by direct observation, interviewing, and the collection of data. The goal of both designs is to identify recurring themes and key concepts that emerge from the research process. Each seeks to provide new knowledge and to complete an investigation that can be replicated in a systematic fashion. Both have to address threats to internal and external validity. The question of generality is of special concern with qualitative studies because of the use of small sample groups. As a result, the subjects may not be representative of a larger or of a particular population. However, the information obtained may be of great value to many individuals.
An investigator can reduce threats to internal and external validity by explicitly sharing the limitations of the study. Additionally, the researcher can and should develop systematic procedures for data collection and interpretation. Quantitative studies must also control for threats to external and internal validity. Randomization does not guarantee a sample that is representative of the population; it simply increases the probability of group equivalence. Time, monetary, and logistical constraints may reduce the possibility of obtaining a representative sample for a quantitative study. Quantitative studies employ statistics, which many researchers consider a superior method of data analysis. However, the very standards on which the researcher relies to determine statistical significance are themselves subjective. The researcher must also be concerned with the influence of extraneous variables in a quantitative study: variables for which the researcher does not or cannot control can impact the internal and external validity of the investigation. Both designs represent methods to investigate a research question. The quantitative design is geared more toward systematic and statistical analysis, while the qualitative design is focused on observation and interpretation. This "distinction" is procedural rather than fundamental. The primary difference between the two study designs is the way in which data are collected and interpreted. However, during the data collection and interpretation process of both designs, the researcher is focused on highlighting the effect of a treatment and minimizing the plausibility of alternative explanations.
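The point about subjective significance standards can be made concrete with a small example (a sketch using a stdlib-only permutation test; the two groups of scores and the resample count are hypothetical). The p-value is a single number; whether it counts as "significant" depends entirely on the threshold the researcher adopts by convention:

```python
import random
from statistics import mean

def permutation_p_value(a, b, n_iter=2000, seed=0):
    """Two-sided permutation test for a difference in means: how often
    does a random relabeling of the pooled data produce a mean
    difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if perm_diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical scores for two groups of subjects.
group_a = [12, 14, 11, 15, 13, 10]
group_b = [11, 12, 10, 13, 9, 11]
p = permutation_p_value(group_a, group_b)

# The same p-value can pass one conventional cutoff and fail another;
# the cutoffs themselves (.05, .01) are norms, not laws.
significant_at_05 = p < 0.05
significant_at_01 = p < 0.01
```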
It is clear that both research types share many important attributes. The distinction drawn between these two designs is somewhat superficial. Both approaches should be regarded as points on a research continuum that can be employed based on the goals of the investigation (Casebeer 1997). They are not two distinct strategies, but rather two methods that can be used together to address a single research question.
Careful analysis of the supposed differences between quasi-experimental designs and true experimental designs, and between qualitative studies and quantitative studies, reveals inconsequential distinctions. Both sets of studies share similar research objectives and must contend with similar challenges to the research process. The goal of any investigation is to seek new knowledge that is valuable to the research community and to the public at large. Researchers should focus on how the processes can be effectively combined to yield good information, rather than on the relative merits of each design.
Casebeer, A. L., & Verhoef, M. J. (1997). Combining qualitative and quantitative research methods: Considering the possibilities for enhancing the study of chronic diseases. Chronic Diseases in Canada, 18(3).
Heckman, J. J., Hotz, V. J., & Dabos, M. (1987). Do we need experimental data to evaluate the impact of manpower training on earnings? Evaluation Review, 11, 395-427.
Heckman, J. J., & Hotz, V. J. (1989). Choosing among alternative nonexperimental methods for estimating the impact of social programs: The case of manpower training. Journal of the American Statistical Association, 84, 862-877.
Kazdin, A. E. (2002). Research design in clinical psychology. Boston, MA: Allyn & Bacon.
Morgan, G. A., Gliner, J. A., & Harmon, R. J. (2000). Quasi-experimental designs. Journal of the American Academy of Child and Adolescent Psychiatry, 39(6), 794-796.
Poggenpoel, M., Myburgh, C. P. H., & Van Der Linde, C. H. (2001). Qualitative research strategies as prerequisite for quantitative strategies. Education, 122(2), 408-413.