Mixed Designs (Both Quantitative and Qualitative)

Archival Research

Archival research involves the examination not of people's present behavior, but of the traces of behavior they have left behind. For instance, to determine whether a new machinery layout produces less wasted motion, the researcher could paint the floor area of both the old and new layouts with a short-life paint. The more the employees have to walk around the equipment, the more they wear off the coating. After a month, the floor scuffing around the two layouts can be compared.

Archival research is often marked by innovation and cleverness in getting at and interpreting the wealth of traces that normal human activity leaves behind. For instance, an advertiser, wanting to know how popular a certain sports event on television is, would normally invest in a costly rating service. Alternatively, the advertiser could examine the record of a big city's water pressure. The clever assumption would be that a captivating event will keep viewers in their seats until it's over, whereupon large numbers will use the washroom, causing a significant drop in water pressure.

The Longitudinal Study

Longitudinal means "along the longest direction or dimension." A longitudinal study takes place over an extended time period. How long a study must be to merit the term "longitudinal" is a matter of opinion. In child psychology, a six-month study would rarely be seen as longitudinal, whereas an equally long OB study could be considered as such. What defines a study as longitudinal is a focus on development or change over time. Any format of research could be termed longitudinal, but such studies are typically field or correlational studies.

A classic example of longitudinal research is the 12-year-long Hawthorne study. Such examples are unfortunately rare, especially in OB research. For students performing research as part of their training, an artificial deadline requires a time-limited study. For academics, the faster a study is completed, the sooner it can be submitted for publication. Given today's faster pace of progress, increased pressure to publish, and severe competition for research funds, we may never again see the results of a decade-long research program.

The Case Study

The terms case study and case method sometimes refer to a teaching or training procedure used by a group to solve an actual or made-up problem. The terms also apply to the procedure relied upon by early clinical psychologists and psychoanalysts, namely that of studying and reporting on a single client. When an OB researcher examines and reports on a single situation, especially after studying it in depth and applying some intervention or treatment to it, the report is also termed a case study. Research based on an individual case is appealing for the same reason longitudinal research is unappealing. A case study tends to be limited in scope, and it can often be completed in less time.

Overview of Correlational Research

We've touched on field studies, surveys, archival research, factor analysis, longitudinal studies, and case studies. There are so many variants of correlational research because correlational studies are favored by OB researchers. Why is this? Correlational research is attractive, especially for studies of humans, because the researcher does not have to intrude upon the participants but has only to measure some aspects of their behavior. (Only the survey requires interrupting a person's routine. However, most people don't object to answering a well-designed survey, and because no treatment variable is administered, the ethical concerns are minimal.) The weakness of correlational research is that no causal relationships can be assumed. For inferences of causality, the researcher must turn to the experiment.
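At bottom, a correlational study reduces to computing an index of association between two measured variables. The sketch below is illustrative only: the ratings are invented, and `pearson_r` is an assumed helper name for the standard Pearson correlation coefficient.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: cafeteria food ratings and contentment scores (1-5 scales)
food = [3, 5, 2, 4, 1, 5, 3, 4]
contentment = [4, 5, 2, 4, 2, 5, 3, 5]
print(round(pearson_r(food, contentment), 2))  # a strong positive relation
```

A high r shows only that the two measures rise and fall together; nothing in the computation tells us whether the food caused the contentment.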

The Experiment

Unlike correlational research, the experiment permits us to infer causality, that is, to assume the existence of a cause-and-effect relation. Only through performing an experiment can we conclude that Variable A caused (or influenced, affected, changed) Variable B. An experiment is an artificial procedure dealing with two variables, in which the level of Variable A is systematically varied so that its effect upon levels of Variable B can be measured. A correlational researcher examining the relation between cafeteria food and worker contentment would ask, "Is there a relation between the two variables?" The experimenter would ask, "Does Variable A influence or cause Variable B?"

To answer this question, the experimenter could randomly divide the diners into two groups. Each group would be similar enough that, if they were provided with identical meals, their ratings of contentment would average about the same. If one group were to receive worse food than the other, then any difference in the contentment scores could be attributed only to the difference in food quality. That is, the level of Variable B (contentment) would depend on the level of Variable A (food quality). Variable B may be termed the caused or influenced variable, the outcome variable, or the dependent variable, because its size depends on Variable A. Variable A, the causal variable, is termed the treatment or independent variable. While Variable A is manipulated (it must be administered in at least two intensities), Variable B is measured.
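The logic of the comparison can be sketched in a few lines. Everything here is invented for illustration: the diners, the assignment, and the contentment ratings; the estimated effect is simply the difference between the two group means.

```python
import random

random.seed(1)                     # for a reproducible illustration
diners = list(range(1, 21))        # 20 hypothetical diners
random.shuffle(diners)
group_a, group_b = diners[:10], diners[10:]   # random halves

# Invented contentment ratings (1-7) gathered after the meal:
ratings_a = [6, 5, 6, 7, 5, 6, 6, 5, 7, 6]   # group A: normal food
ratings_b = [3, 4, 2, 3, 4, 3, 2, 3, 4, 3]   # group B: degraded food

mean_a = sum(ratings_a) / len(ratings_a)
mean_b = sum(ratings_b) / len(ratings_b)
print(round(mean_a - mean_b, 2))   # estimated effect of the food manipulation
```

Because the groups were formed randomly, the difference between the means can be read as the effect of the food manipulation rather than of some preexisting difference between the diners.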

Here is another example of an experiment. A researcher located 20 supervisors and separated their names into two matched groups of ten. ("Matched" means that the groups are so equivalent that if supervisory effectiveness were measured, both groups would show similar averages.) One group was randomly chosen to be the experimental group and the other to serve as the control group. (When one group gets some treatment and the other group gets a fake treatment or nothing at all, the terms "experimental" and "control" are used to distinguish between the two groups.) The members of the experimental group underwent sensitivity training, while the control group was enrolled in a special program of physical training. A year later, measures were taken to show whether, and to what extent, those given the sensitivity training outperformed the equivalent group given the control treatment. The study thereby indicated whether or not sensitivity training improves managerial effectiveness.

Not all experiments would read like the above examples, since many experimental designs are possible. What is common to every experiment is a systematic administration of a treatment variable so that its influence on a resulting variable may be measured. A final example will clarify this point. If you administer a survey during lunch to measure (1) whether employees eat meat or fish, and (2) how often they smile over the next hour, you have performed a correlational study that can reveal only the size of the connection between diet and facial expression. But if you randomly assign workers to either a meat table or a fish table, ensure that both groups are treated the same except for the food variable, and then record their smiling, you have performed an experiment that entitles you to infer the effect of diet upon smiling.

The essence of experimental design is that any difference in the outcome measure cannot be attributed to anything except the treatment variable. This is accomplished by the proper application of controls, or procedures that eliminate alternate explanations. Four common controls (randomization, matching, control of experimenter bias, and control of demand characteristics) are outlined next.

Randomization Control

Suppose the researcher wants to divide ten welders into two groups, apply a different treatment to each, and measure a performance variable. The researcher lists the welders' names alphabetically and assigns the first five welders to one treatment group and the rest to another group. Would this create two groups whose performances would be about the same in the absence of any differential treatment? Suppose the ten included three members of the Watson family, all famed for their welding ability! The use of the alphabet to assign subjects to groups could introduce various sources of bias that would make the two groups inherently different. The most common solution is to randomly assign subjects to groups. In a random selection procedure, each individual has an equal chance of being chosen. For instance, if the researcher were to write each welder's name on a slip of paper, place the slips in a hat, and perform a "blind" draw for assignment to groups, then everybody's name would have an equal chance of being chosen.
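The hat draw can be mimicked in a few lines of code. The roster names below are invented; the shuffle plays the role of the blind draw, giving every ordering (and hence every possible split) an equal chance.

```python
import random

def random_assign(names, seed=None):
    """Shuffle a copy of the roster and split it in half:
    the software analogue of a blind draw from a hat."""
    rng = random.Random(seed)
    pool = list(names)           # don't disturb the original list
    rng.shuffle(pool)            # every ordering is equally likely
    half = len(pool) // 2
    return pool[:half], pool[half:]

welders = ["Adams", "Baker", "Chu", "Diaz", "Evans",
           "Watson A.", "Watson B.", "Watson C.", "Ito", "Jones"]
group1, group2 = random_assign(welders, seed=7)
```

Because assignment ignores alphabetical order, the three Watsons are as likely to land in one group as the other, so no systematic bias enters through the roster.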

Matching Control

If you put the names of 100 welders in a hat and randomly divided them into two groups of 50, the performance of the two groups should be virtually identical. But if you did the same with only four welders, it is unlikely that one pair would perform the same as the other. The smaller the N (the number of participants studied), the smaller the likelihood that a random split will yield subsets of similar performance. To divide a small-N group into equivalent subsets, it is generally necessary to match the subsets. In a matching procedure, the researcher needs to know what organismic variables are likely to influence performance. (An organismic variable is a variable that is a characteristic of the organism.) The researcher may determine, for instance, that the organismic variables that predict welding performance are age and years of welding experience. To match the four welders on these two variables, the researcher would assign the oldest and most experienced welder to group 1, the second-oldest, second-most-experienced welder to group 2, the youngest and least experienced welder to group 1, and the closest match for this person (and final welder) to group 2. Matched groups provide a sensitive measure of the effect of a treatment variable, but the process of matching is not always simple. Knowing which variables to match may require a preliminary study. While it is easy to match on one or two variables, it becomes increasingly difficult, if not impossible, to match on several. Imagine having to identify the oldest, most experienced, nonsmoking, locally trained, underweight welder, and then the closest match for this individual.
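The pairing procedure above can be sketched as follows. This is a minimal illustration assuming an even N and just two matching variables (age, then experience); the welders and their figures are invented. Ranked neighbors are paired off, and each pair is split across the groups with alternating order so neither group systematically gets the higher-ranked member.

```python
def matched_groups(people, key):
    """Sort by the matching variables, pair off neighbors, and split
    each pair across the two groups (alternating within pairs)."""
    ranked = sorted(people, key=key, reverse=True)
    g1, g2 = [], []
    for i in range(0, len(ranked), 2):
        a, b = ranked[i], ranked[i + 1]
        if (i // 2) % 2 == 0:
            g1.append(a); g2.append(b)
        else:
            g2.append(a); g1.append(b)
    return g1, g2

# Invented welders: (name, age, years of welding experience)
welders = [("Kim", 55, 30), ("Lee", 48, 25), ("Mo", 30, 8), ("Nia", 27, 5)]
g1, g2 = matched_groups(welders, key=lambda w: (w[1], w[2]))
# g1 gets the oldest and the youngest; g2 gets the two in between,
# mirroring the narrative assignment in the text.
```

Note that this sketch sidesteps the hard part the text warns about: deciding, perhaps via a preliminary study, which organismic variables belong in the sort key.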

Control of Experimenter Bias

A researcher performing a study is likely to have some emotional investment in the outcome. Even the most dispassionate and objective scientist surely has expectations, assumptions, and hopes that may cause him or her to inadvertently influence the outcome of the study. This influence is termed experimenter bias, or E bias. A researcher can totally control E bias by hiring a technician to "run" the participants (i.e., to conduct the trials) in a process known as blind control. A technician who is blind to (uninformed about) the purpose and expectations underlying the study is unlikely to communicate bias. Researchers always try to treat the participants neutrally, often by avoiding advance knowledge of which participants are receiving which treatment.

Control of Demand Characteristics

Research participants bring their own assumptions to the study. As noted in our discussion of the Hawthorne effect earlier in this chapter, demand characteristics relate to the fact that every intervention carries implied expectations or demands. When you say "hello" to someone, a specific response is demanded by the social situation. When you ask people to serve as research participants, the typical response (given an appropriate context) is one of compliance and cooperation. In the Hawthorne studies, it was the demand characteristics of the intervention, not the changes in the work environment, that affected employee performance. To control demand characteristics, the researcher can use a double-blind design—meaning that neither the experimenter nor the participants know which is the treatment and which is the control—and ensure that instructions to participants are kept as neutral as possible.