Term
| what are the threats to internal validity? |
Definition
| selection bias, history, maturation, testing, instrumentation, mortality, contamination, regression to the mean, and suggestion bias |
|
|
Term
| testing |
Definition
| changes that might occur in posttest results due to the pretest |
|
|
Term
| instrumentation |
Definition
| inconsistent measurement or use of varying instruments |
|
|
Term
|
Definition
| when people in a study discuss what they expect from the study and it affects the outcome |
|
|
Term
| regression to the mean |
Definition
| when extreme initial scores tend to move back toward the average on a later measurement (see the sketch below) |
|
|
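A minimal sketch of regression to the mean, assuming each score is a stable true level plus random noise (the group size, cutoff, and numbers are made up for illustration):

```python
import random

# Illustrative simulation: each person has a stable "true" level, and each test
# score is that level plus random noise. People whose first score was extreme
# tend to score closer to the mean on a retest, even though nothing changed.
random.seed(0)

true_levels = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_levels]
test2 = [t + random.gauss(0, 10) for t in true_levels]

# People who scored in the extreme upper tail on the first test...
extreme = [i for i, score in enumerate(test1) if score > 120]

mean_t1 = sum(test1[i] for i in extreme) / len(extreme)
mean_t2 = sum(test2[i] for i in extreme) / len(extreme)

# ...average noticeably lower on the second test.
print(f"extreme group, test 1 mean: {mean_t1:.1f}")
print(f"extreme group, test 2 mean: {mean_t2:.1f}")
```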
Term
| systematic sampling |
Definition
| uses every kth element in a sampling frame (see the sketch below) |
|
|
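A minimal sketch of systematic sampling, assuming a simple list-based sampling frame (the frame contents and the value of k are invented for illustration):

```python
import random

def systematic_sample(frame, k):
    """Take every kth element of the sampling frame, starting from a random offset."""
    start = random.randrange(k)
    return frame[start::k]

# Hypothetical frame of 1,000 element IDs; k = 10 yields a sample of about 100.
frame = list(range(1000))
sample = systematic_sample(frame, 10)
print(len(sample), sample[:5])
```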
Term
| what are the types of probability sampling? |
|
Definition
| simple random, cluster, probability proportional to size, stratified, and systematic |
|
|
Term
| what are the types of non-probability sampling? |
|
Definition
| haphazard, snowball, deviant case, quota, purposive, and sequential |
|
|
Term
| quota sampling |
Definition
| uses a set number of cases in different specified categories |
|
|
Term
| purposive sampling |
Definition
| gets all possible cases that fit certain criteria (ex: prostitutes) |
|
|
Term
| deviant case sampling |
Definition
| special type of purposive sample; uses cases that are very different from the dominant group (ex: high school dropouts who grew up in a wealthy home) |
|
|
Term
| sampling frame |
Definition
| "operationalization" of a population that includes a comprehensive list of all elements in a population- ex: telephone directory |
|
|
Term
|
Definition
| measured by defining a statistic in a sample that estimates a characteristic in a population |
|
|
Term
| sampling error |
Definition
| the deviation between sample results and the actual population parameter (see the sketch below) |
|
|
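A minimal sketch of the statistic-vs-parameter idea and of sampling error, assuming a fully known population so the parameter (and therefore the error) can be computed exactly; all numbers are invented for illustration:

```python
import random
import statistics

random.seed(1)

# Pretend the whole population is known, so the population parameter is exact.
population = [random.gauss(50, 12) for _ in range(100_000)]
parameter = statistics.mean(population)   # population parameter

# In practice only a sample is observed; its mean is the statistic
# that estimates the parameter.
sample = random.sample(population, 200)
statistic = statistics.mean(sample)       # sample statistic

# Sampling error: deviation between the sample result and the parameter.
sampling_error = statistic - parameter
print(f"parameter={parameter:.2f}  statistic={statistic:.2f}  error={sampling_error:.2f}")
```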
Term
| what kinds of information can be determined through survey questions? |
|
Definition
| behaviors, self-classification, attitudes/beliefs, characteristics, expectations, knowledge |
|
|
Term
| what should NOT be asked in a survey, and what are its limitations? |
|
Definition
| self-report is unreliable, and "why" questions should not be asked because they are unfair & unreliable |
|
|
Term
| what should be avoided in survey questions? |
|
Definition
| ambiguity and vagueness, jargon or slang, prestige bias, emotional language, double-barrelled questions, leading questions, etc. |
|
|
Term
|
Definition
| asking a question that appeals to a "knowledgeable" figure to sway the answer (ex: do you agree with doctors that soy is bad for you?) |
|
|
Term
|
Definition
| occurs when answers are distorted because participants want to conform to social norms |
|
|
Term
|
Definition
| 2-part question in which the answer to the first (usually yes/no) determines whether or not the respondent answers the next |
|
|
Term
| open vs. closed questions |
|
Definition
| open questions do not offer possible answers, while closed questions do. |
|
|
Term
| what is the problem with neutral positions? |
|
Definition
| if no neutral response is offered, participants are forced to pick even if they have no opinion. if a neutral option IS offered, they may pick it because it is easier than choosing an answer. |
|
|
Term
| how can context effects be minimized? |
|
Definition
| context effects can be minimized by giving half of the sample one question order and the other half the reverse order, for example specific --> general vs. general --> specific (see the sketch below) |
|
|
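A minimal sketch of the split-ballot idea for minimizing context effects, assuming a simple two-question survey; the question wording and sample size are invented for illustration:

```python
import random

# The same two questions in two orders: specific --> general and general --> specific.
specific_first = [
    "How satisfied are you with your own doctor?",
    "How satisfied are you with health care in general?",
]
general_first = list(reversed(specific_first))

# Randomly split the sample so half sees each order; comparing answers across
# the two halves shows how much question order (context) shifted the results.
respondents = list(range(200))
random.shuffle(respondents)
half = len(respondents) // 2
order_assignment = {r: specific_first for r in respondents[:half]}
order_assignment.update({r: general_first for r in respondents[half:]})

print(sum(order is specific_first for order in order_assignment.values()), "respondents got specific-first")
```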
Term
| what are the types of pre-experimental research design? |
|
Definition
| one-shot case study, static group comparison, one-group pretest/posttest |
|
|
Term
|
Definition
| introduces an independent variable and observes the effect without a pretest |
|
|
Term
|
Definition
| one group has IV, the other doesn't. this is similar to case-control EXCEPT there is no randomization, and there are problems with selection bias |
|
|
Term
| one-group pretest/posttest |
|
Definition
| one group is tested before and after the IV, but there may be other explanations for a change: maturation, testing, etc. |
|
|
Term
| what are the types of experimental research design? |
|
Definition
| classic, posttest only control group design, and solomon 4-group design |
|
|
Term
| what are the problems with attribute --> effect research designs? |
|
Definition
| they are not able to completely control the independent variable (usually social experiments), so we cannot fulfill all of Lazarsfeld's requirements for causality |
|
|
Term
| nonreactive measures (definition) |
|
Definition
| measures used to avoid the hawthorne effect or social desirability bias |
|
|
Term
| what are the types of nonreactive measures? |
|
Definition
| erosion measures, accretion measures, public records, content analysis, and existing statistics |
|
|
Term
|
Definition
| erosion: looks for wear & tear from use; accretion: looks for what is deposited or added on after use |
|
|
Term
| description & problems with content analysis |
|
Definition
| content analysis uses social artifacts and codes them for themes; it must be coded to be analyzed quantitatively, but this coding is subjective (see the sketch below) |
|
|
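A minimal sketch of coding for content analysis, assuming a tiny set of documents and a hand-made keyword coding scheme; the themes, keywords, and texts are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical coding scheme: a theme is coded for a document whenever a word
# starting with one of the theme's keywords appears in it.
coding_scheme = {
    "health": ("doctor", "hospital", "medicine"),
    "economy": ("job", "wage", "price"),
}

documents = [
    "Rising prices make it hard to see a doctor.",
    "The new hospital created many jobs.",
]

counts = Counter()
for doc in documents:
    words = re.findall(r"[a-z]+", doc.lower())
    for theme, keywords in coding_scheme.items():
        if any(w.startswith(k) for w in words for k in keywords):
            counts[theme] += 1  # code each theme at most once per document

# Theme frequencies that can then be analyzed quantitatively.
print(dict(counts))  # {'health': 2, 'economy': 2}
```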
Term
| description & problems with existing statistics |
|
Definition
| secondary data analysis: the data was collected for another purpose (ex: census). problems are that it may not be valid in the context of your study, that causality must be inferred, and that definitions of concepts or words may change over time |
|
|
Term
| what is most commonly used as a nonreactive measure? |
|
Definition
| content analysis-- but also secondary data analysis |
|
|