...PSY475 (Week 1 DQ 2) Compare and contrast reliability and validity?

Going back to Plato's cave wall of shadows: if we all held up mirrors and reflected a particular point outside the cave, would we all see the same thing? And if we took another peek an hour later, would we still see the same thing? That is reliability.

Second is the subject of validity. My second class at college was a critical thinking class. I loved it. I can remember studying syllogisms: all animals are big; all big things are slow; therefore all animals are slow. This syllogism is valid (the conclusion reasonably follows from the premises), but it is not sound, because the premises are untrue. They are both global absolutes, which are almost always false, or at the very least not true in all instances. It is the same with psychological testing: the conclusion must reasonably flow from the facts gathered during experimentation. It would seem that validity has more to do with the interpretation of test results than with the test results themselves. If the hypothesis is an elucidation of causality, validity is the bridge by which the numerical measurements are verified as they are translated into a causal claim. It is a hindsight mechanism, used to verify that the test results actually apply to the hypothesized causal conclusion. Both are equally important, I think. If not, then the test might give great scores one time and not the next, or the results...
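The "take another peek an hour later" idea is usually quantified as test-retest reliability: the correlation between two administrations of the same test to the same people. Here is a minimal sketch in Python, with entirely hypothetical score data; a high Pearson correlation between the two administrations suggests the test is reliable (though it says nothing about whether it is valid).

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same six examinees, one hour apart.
time1 = [12, 15, 11, 18, 14, 16]
time2 = [13, 14, 10, 19, 15, 16]

print(f"test-retest reliability: r = {pearson_r(time1, time2):.3f}")
```

With made-up data like this, an r near 1.0 would indicate the mirrors are reflecting the same point both times; an r near 0 would mean the scores at hour two tell us little about the scores at hour one.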
...PSY475 (Week 2 DQ 1) Explain the steps to test development?

1. The purpose of the test must be stated. Almost always this entails defining what trait or behavior is to be measured and what target audience will be tested. It can be just one sentence (Hogan, 2007).
2. Preliminary design decisions are made. This step seems mostly concerned with validity, dealing with issues of interpretation and purpose. Items of consideration include: mode of administration, length, item format, number of scores, score reports, administrator training, and background research.
3. Item preparation is the proposition and preliminary design of the items that will make up the test. Items consist of an examiner stimulus, an examinee response (selected-response or constructed-response), and scoring procedures.
4. Item analysis consists of item tryout, statistical analysis, and selection of the items proposed in step 3.
5. The standardization program, or norming program, seeks to establish norms for the test items being administered, to create a standard by which future test scores may be interpreted and compared.
6. Last, publication of the test, at its most basic level, includes a test booklet, the scoring key, and instructions on how the test should be administered.

I must admit, there is a lot more involved in creating a test than I had first considered. I can remember from the statistics class that the number of people being tested, the p-value, and the descriptions of central tendency...
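The item-analysis step (tryout, statistics, selection) can be sketched with two classic statistics: item difficulty (the p-value, here meaning the proportion of examinees answering correctly, not the significance-test p-value) and a crude discrimination index. The data and helper names below are hypothetical illustrations, not anything from Hogan (2007).

```python
def item_difficulty(responses, item):
    """Proportion of examinees answering the item correctly (the item p-value)."""
    col = [row[item] for row in responses]
    return sum(col) / len(col)

def item_discrimination(responses, item):
    """Mean total score of examinees who passed the item minus those who failed."""
    totals = [sum(row) for row in responses]
    col = [row[item] for row in responses]
    passers = [t for t, c in zip(totals, col) if c == 1]
    failers = [t for t, c in zip(totals, col) if c == 0]
    if not passers or not failers:
        return 0.0  # everyone answered alike; the item cannot discriminate
    return sum(passers) / len(passers) - sum(failers) / len(failers)

# Hypothetical 0/1 (wrong/right) tryout data: rows = examinees, columns = items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

for item in range(4):
    print(f"item {item + 1}: p = {item_difficulty(responses, item):.2f}, "
          f"D = {item_discrimination(responses, item):+.2f}")
```

In step 4, items with extreme difficulty (everyone right or everyone wrong, like item 4 above) or low discrimination would typically be revised or dropped before the norming program in step 5.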