Synopsis ten: research methods, surveys and sampling.

If you want to find out something, just ask. That's the basis of survey research. Survey researchers normally use questionnaires, but they may also conduct interviews or telephone polls. The data collected offer descriptive information. Surveys can't normally show causal relationships, however; for that we need to conduct an experiment.

The most difficult task for many survey researchers is writing the questions. It's possible to rely on standard books of communication measurement resources, to base questions on surveys found through a literature review, or to develop a questionnaire from scratch. Many pitfalls can creep into question wording to taint the data, however. A few of the most common: the loaded question, which biases the response, such as "Do you favor reduced military expenditure in light of the reduced world threat from the former Soviet Union?" Also common is the vague question, yielding data that are difficult to interpret, such as "What do you think about your local newspaper?" Ambiguous terms make answers hard to interpret: "Do you believe students at NDSU get their due?" And leading questions muddy the data: "Do you agree with most Americans that TV news is poor?"

Researchers can be guided in question formation by making a few basic decisions:

1. Should questions be direct or indirect? Indirect questions sometimes make it easier to introduce sensitive issues, but they lengthen the survey.

2. Specific or general questions? Often researchers begin with general questions and move to specific ones later, the "funnel" concept.

3. Questions or statements? Statements can be set up with a Likert-type scale for responses, as sketched after this list.
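
To illustrate what a Likert-type statement item looks like, here is a minimal sketch of one item and its numeric coding; the statement wording, response labels, and example responses are invented for illustration, not drawn from any particular survey.

```python
# Hypothetical Likert-type item: respondents rate agreement with a statement
# on a five-point scale, which is then coded numerically for analysis.
statement = "My local newspaper covers city government thoroughly."

likert_options = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# A handful of made-up responses, converted to scores and averaged.
responses = ["Agree", "Strongly agree", "Neutral", "Agree"]
scores = [likert_options[r] for r in responses]
print(f"Mean agreement score: {sum(scores) / len(scores):.2f}")
```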

Sometimes researchers use reliability-test questions or the MMPI Lie Scale to check reliability and validity in survey research. Generally researchers need at least a 50 percent response rate to a survey to collect usable data; a 60 percent response is good, and 70 percent is very good.

Most survey researchers rely on the concept of sampling. A sample of "events" is chosen from a "population" to obtain "statistics." This is in contrast to measuring every event in the population, which yields a "parameter," a characteristic of the population. A statistic is designed to estimate that characteristic. "Bias" is the tendency of a sample not to represent the entire population; researchers try to eliminate it. "Sampling error" (margin of error) is the degree to which the sample may differ from the population. It is presented as a percentage, based on a formula. Another formula calculates "confidence intervals," an indicator of how likely it is that the sample really does "capture" the population.
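
As a rough illustration of how these quantities relate, the sketch below computes the margin of error and a 95 percent confidence interval for a sample proportion. The sample size, proportion, and confidence level are made-up values for illustration, and the formula assumes a simple random sample.

```python
import math

# Hypothetical example: 400 respondents, 55% agreeing with a statement.
n = 400          # sample size
p = 0.55         # sample proportion (a "statistic")
z = 1.96         # z-score for a 95% confidence level

# Margin of error (sampling error) for a proportion under simple random sampling.
margin_of_error = z * math.sqrt(p * (1 - p) / n)

# Confidence interval: the range likely to "capture" the population parameter.
lower = p - margin_of_error
upper = p + margin_of_error

print(f"Margin of error: +/- {margin_of_error:.1%}")
print(f"95% confidence interval: {lower:.1%} to {upper:.1%}")
```

With these made-up numbers the margin of error works out to roughly plus or minus 5 percentage points, so the interval runs from about 50 percent to 60 percent.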

Samples are drawn using methods of randomization. Sometimes researchers do not fully randomize a sample for a variety of reasons, relying instead on a systematic sample or a convenience sample. Data from these are more limited and more prone to bias, however.
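
To make the contrast concrete, here is a minimal sketch of drawing a simple random sample, a systematic sample, and a convenience sample from a hypothetical population list; the population size and sample size are invented for illustration.

```python
import random

# Hypothetical population: a numbered list of 1,000 survey "events" (e.g., households).
population = list(range(1, 1001))
sample_size = 50

# Simple random sample: every member has an equal chance of selection.
random_sample = random.sample(population, sample_size)

# Systematic sample: pick a random starting point, then take every k-th member.
k = len(population) // sample_size           # sampling interval
start = random.randrange(k)                  # random start within the first interval
systematic_sample = population[start::k][:sample_size]

# Convenience sample: simply take whoever is easiest to reach, e.g. the first 50
# names on the list -- which is why its data are more limited and prone to bias.
convenience_sample = population[:sample_size]

print(len(random_sample), len(systematic_sample), len(convenience_sample))
```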