Webquest Handout
Task 1:

1. The Research Methods Knowledge Base is a collection of general topics about performing social research. These range from how to collect data, to whom to collect data from, to where to collect it, and so on.
   a. What is the difference between qualitative data and quantitative data?
   b. How do you determine what type of data to collect?
   c. Can your topic be represented by solid numbers, or is it based on opinion?

2. Quantitative, because the data given is concrete and generalizations like the mean and mode can easily be computed (see the sketch following this task).

3. Quantitative data is easily compiled into something meaningful because it is concrete. Qualitative data, on the other hand, arrives in raw form and must be categorized before it becomes meaningful. Quantitative data is better at summarizing large amounts of information, as in statistics, whereas qualitative data is better at conveying a participant's opinions and is richer in detail.

4. The three ways to collect qualitative data are in-depth interviews, direct observation, and written documents. Interviews can be conducted individually or with a group and can be recorded in a multitude of ways. In an interview, the interviewer asks the participant questions. This is where direct observation differs: the observer asks no questions, but instead stands by and watches how the participant reacts to certain situations or interacts with others. Written documents include previously recorded material such as books, websites, newspapers, journals, conversations, and earlier interviews.
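Item 2 above mentions the mean and mode as generalizations that are easy to pull from quantitative data. As a minimal sketch of that point, the snippet below computes those summaries with Python's standard statistics module; the scores are made-up illustrative values, not data from the task.

```python
# Minimal sketch: the kind of summary statistics that make quantitative
# data easy to compile into something meaningful.
# The scores are made-up illustrative values, not data from the task.
import statistics

scores = [72, 85, 85, 90, 68, 77, 85, 90]

print("mean:", statistics.mean(scores))      # average of the values
print("mode:", statistics.mode(scores))      # most frequent value
print("median:", statistics.median(scores))  # middle value when sorted
```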
Task 2:

1. How would you explain the Research Methods Simulation book to a friend? The simulation book describes the two ways to collect simulation data: manually and by computer. It describes the advantages and disadvantages of both collection methods and when each should be used.
   a. Scenario: You and a classmate have read about the types of simulations that can be used in research design. What questions could you ask to test their understanding of what they have read?
      What is an example of a manual simulation?
      What is an example of a computer simulation?
2. If you needed to simulate a very large amount of data, which method for running a simulation would you suggest, computer-based or manual? Why? Because the amount of data required is significant, a computer simulation would be better in terms of both cost and time. Manual collection on that scale takes countless hours to gather sufficient data. In comparison, a computer simulation can produce similar results, needs only a single person to run it, and can finish in a matter of minutes.

3. If you were to have a debate between manual simulations and computer simulations, what would be the main points to express for both types? Manual simulations are more costly and time-consuming than computer simulations. However, manual simulations are better at showing that even when the data suggests what the outcome should be, individual runs do not always follow the trend. Computer simulations, because of how they are constructed, tend to follow the expected trend more closely.

4. Give two examples of where manual simulations would suffice and two examples of where computer simulations would be better for the same test. Testing the results of 20 coin flips could easily be done manually, whereas simulating 2,000 flips would call for a computer simulation (sketched after this list). Likewise, rolling two dice for a total 30 times can easily be done manually, whereas testing 900 rolls would be better handled by a computer simulation.
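As a minimal sketch of the computer-simulation side of item 4, the Python snippet below runs the 2,000 coin flips and the 900 two-dice rolls using the standard random module; the fixed seed is an illustrative choice to make the run repeatable, not part of the task.

```python
# Minimal sketch of the computer simulations from item 4:
# 2,000 coin flips and 900 rolls of two dice.
import random
from collections import Counter

random.seed(0)  # illustrative fixed seed so the run is repeatable

# Simulate 2,000 coin flips and tally heads vs. tails.
flips = Counter(random.choice(["heads", "tails"]) for _ in range(2000))
print("coin flips:", dict(flips))

# Simulate 900 rolls of two dice and tally the totals (2 through 12).
totals = Counter(random.randint(1, 6) + random.randint(1, 6)
                 for _ in range(900))
for total in sorted(totals):
    print(f"total {total:2d}: {totals[total]:3d} rolls")
```

Running this takes well under a second, which illustrates the time advantage over flipping a coin 2,000 times by hand, while the tallies still show the run-to-run variation around the expected 50/50 split and the bell-shaped spread of dice totals.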
The first section I found helpful in the Research Methods Knowledge Base was the section on internal validity. It discusses how to collect reliable, valid data, and how invalid data can arise from external, alternative causes. The section begins with establishing the cause-and-effect relationship you are going to test; by stating up front the relationship you intend to test, you make it possible for an outsider to judge the study's validity by comparing the intended cause and effect with the realized cause and effect.

The following section covers single-group threats, in particular the threats a single group poses to your cause-and-effect conclusion. It identifies six such threats: history threats, maturation threats, testing threats, instrumentation threats, mortality threats, and regression threats, and gives good examples of how each may appear in your data. It then moves on to multiple-group threats and how they relate to selection bias, or selection threats. It describes, with concrete examples, six types of multiple-group threats: selection-history, selection-maturation, selection-testing, selection-instrumentation, selection-mortality, and selection-regression threats. Lastly, the section covers social interaction threats to validity, identifying and illustrating four of them: diffusion or imitation of treatment, compensatory rivalry, resentful demoralization, and compensatory equalization of treatment.

I found this section interesting and helpful because testing your data for validity is an important step in organizing it. If, while verifying the validity of your data set, you find that an external factor has corrupted or influenced the data, then the experiment needs to be rerun in a way that eliminates the external influence. I feel that others would find this section useful for the same reason.
The second topic on the Research Methods Knowledge Base that I found helpful was the section on external validity. The point of conducting research is usually to arrive at some generalization from the data set collected. That generalization is then applied to other studies and tested repeatedly to verify its integrity, so that it can ultimately be used in producing goods and services. This section concentrates on exactly that: it emphasizes the need for randomly selected sampling groups and describes the threats associated with external validity.

The generalization computed from the first sample serves as a starting point for the ideal generalization we are looking for. It is then tested over and over on new test groups to confirm or refute the original result, and adjusted if necessary. If the generalization holds consistently across multiple test groups, we can conclude that it is the generalization being sought, and it can be applied to whatever the research was conducted for, such as standard door heights.

I found this helpful, and I feel others would as well, because you can't take a "one and done" approach to data collection. For data to be generalized, it must be run through the wringer multiple times to verify both the integrity and the validity of what was collected. While it may be hard to fund multiple tests of the same thing, it is important to emphasize the reasoning behind obtaining proper data sets and calculations.
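As a minimal sketch of the repeated-testing idea described above, the snippet below draws several independent random sample groups from a hypothetical population of heights and checks whether the generalization (the mean height) holds across groups; the population values and sample sizes are made up for illustration, not real data.

```python
# Minimal sketch of testing a generalization across multiple sample
# groups, as described above. The population is hypothetical height
# data, not real measurements.
import random
import statistics

random.seed(1)  # illustrative fixed seed so the run is repeatable

# Hypothetical population: 10,000 adult heights in centimeters.
population = [random.gauss(170, 8) for _ in range(10_000)]

# Draw five fresh random sample groups and see whether the
# generalization (the mean height) holds consistently across them.
for group in range(1, 6):
    sample = random.sample(population, 100)
    print(f"group {group}: mean height = {statistics.mean(sample):.1f} cm")
```

If the group means cluster tightly, the generalization survives the repeated tests; if one group diverges sharply, that is the signal, as the section argues, to adjust the generalization rather than settle for the first result.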