Item Analysis
Item Analysis allows us to observe the characteristics of a particular question (item) and can be used to ensure that questions are of an appropriate standard and select items for test inclusion.
Introduction
Item Analysis describes the statistical analyses which allow measurement of the effectiveness of individual test items. An understanding of the factors which govern effectiveness (and a means of measuring them) can enable us to create more effective test questions and also regulate and standardise existing tests.
There are three main types of Item Analysis: Item Response Theory, Rasch Measurement and Classical Test Theory. Although Classical Test Theory and Rasch Measurement will be discussed, this document will concentrate primarily on Item Response Theory.
The Models
Classical Test Theory
Classical Test Theory (traditionally the main method used in the United Kingdom) utilises two main statistics - Facility and Discrimination.
* Facility is essentially a measure of the difficulty of an item, arrived at by dividing the mean mark obtained by a sample of candidates by the maximum mark available. As a whole, a test should aim to have an overall facility of around 0.5; however, it is acceptable for individual items to have higher or lower facility (ranging from 0.2 to 0.8).
* Discrimination measures how performance on one item correlates with performance on the test as a whole. There should always be some correlation between item and test performance, and it is expected that discrimination will fall in a range between 0.2 and 1.0.
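The sketch below illustrates the two statistics; the candidate marks are invented, and the use of a plain Pearson item-total correlation for Discrimination is an assumption (implementations of Classical Test Theory vary in exactly which correlation they use):

```python
def facility(item_marks, max_mark):
    """Facility: mean mark obtained on the item divided by the maximum mark available."""
    return sum(item_marks) / len(item_marks) / max_mark

def discrimination(item_marks, total_scores):
    """Discrimination: correlation between item marks and total test scores (Pearson r)."""
    n = len(item_marks)
    mean_i = sum(item_marks) / n
    mean_t = sum(total_scores) / n
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(item_marks, total_scores))
    var_i = sum((i - mean_i) ** 2 for i in item_marks)
    var_t = sum((t - mean_t) ** 2 for t in total_scores)
    return cov / (var_i ** 0.5 * var_t ** 0.5)

# Invented data: ten candidates' marks on one item (out of 2) and their total scores (out of 20).
item = [2, 1, 2, 0, 1, 2, 0, 1, 2, 1]
total = [18, 12, 16, 6, 10, 19, 5, 9, 17, 11]
print(facility(item, 2))            # 0.6 -> a fairly easy item
print(discrimination(item, total))  # close to +1 -> item performance tracks test performance
```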
The main problem with Classical Test Theory is that the conclusions drawn depend very much on the sample used to collect the information: there is an inter-dependence of item and candidate.
Item Response Theory
Item Response Theory (IRT) assumes that there is a correlation between the score gained by a candidate for one item/test (measurable) and their overall ability on the latent trait which underlies test performance (which we want to discover). Critically, the 'characteristics' of an item are said to be independent of the ability of the candidates who were sampled.
Item Response Theory comes in three forms: IRT1, IRT2 and IRT3, reflecting the number of parameters considered in each case.
* For IRT1, only the difficulty of an item is considered (difficulty is the level of ability at which a candidate becomes more likely to answer the question correctly than incorrectly).
* For IRT2, difficulty and discrimination are considered (discrimination is how well the question separates candidates of similar abilities).
* For IRT3, difficulty, discrimination and chance are considered (chance is the random factor which raises a candidate's probability of success through guessing).
IRT can be used to create a unique plot for each item (the Item Characteristic Curve - ICC). The ICC is a plot of the probability that the item will be answered correctly against ability. The shape of the ICC reflects the influence of the three factors:
* Increasing the difficulty of an item causes the curve to shift right, as candidates need to be more able to have the same chance of passing.
* Increasing the discrimination of an item causes the gradient of the curve to increase. Candidates below a given ability are less likely to answer correctly, whilst candidates above a given ability are more likely to answer correctly.
* Increasing the chance raises the baseline of the curve.
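In formula terms, the full three-parameter ICC is P(correct | ability θ) = c + (1 - c) / (1 + e^(-a(θ - b))), where b is difficulty, a is discrimination and c is chance; IRT2 is the special case c = 0, and IRT1 additionally fixes a. A minimal sketch (the parameter values are invented for illustration):

```python
import math

def icc(theta, b, a=1.0, c=0.0):
    """Item Characteristic Curve, three-parameter logistic form:
    P(correct | ability theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    b = difficulty, a = discrimination, c = chance (guessing floor).
    c = 0 gives the IRT2 form; fixing a as well (commonly a = 1) gives IRT1."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

for theta in (-2, -1, 0, 1, 2):
    print(theta,
          round(icc(theta, b=0.0), 2),                 # baseline item
          round(icc(theta, b=1.0), 2),                 # harder item: curve shifted right
          round(icc(theta, b=0.0, a=2.0), 2),          # more discriminating: steeper gradient
          round(icc(theta, b=0.0, a=2.0, c=0.25), 2))  # guessing raises the floor to 0.25
```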
Of course, when you carry out a test for the first time you don't know the ICC of the item, because you don't know the item's difficulty (or discrimination). Rather, you estimate the parameters (using parameter estimation techniques) to find values which fit the data you observed.
Using IRT models allows Items to be characterised and ranked by their difficulty and this can be exploited when generating Item Banks of equivalent questions. It is important to remember though, that in IRT2 and IRT3, question difficulty rankings may vary over the ability range.
Rasch Measurement
Rasch measurement is very similar to IRT1 - in that it considers only one parameter (difficulty) and the ICC is calculated in the same way. When it comes to utilising these theories to categorise items however, there is a significant difference. If you have a set of data, and analyse it with IRT1, then you arrive at an ICC that fits the data observed. If you use Rasch measurement, extreme data (e.g. questions which are consistently well or poorly answered) is discarded and the model is fitted to the remaining data.
Purpose of Item Analysis
OK, you now know how to plan a test and build a test * Now you need to know how to do ITEM ANALYSIS
--> Looks complicated at first glance, but actually quite simple

-->even I can do this and I'm a mathematical idiot

* Talking about norm-referenced, objective tests * mostly multiple-choice, but the same principles apply for true-false, matching and short answer * by analyzing results you can refine your testing

SERVES SEVERAL PURPOSES
1. Fix marks for the current class that just wrote the test * find flaws in the test so that you can adjust the mark before returning it to students * can find questions with two right answers, or that were too hard, etc., that you may want to drop from the exam * even had to do that occasionally on Diploma exams; even after 36 months in development, maybe 20 different reviewers, and extensive field tests, you still occasionally have a question whose problems only become apparent after you give the test * more common on classroom tests -- but instead of getting defensive, or making these decisions at random on the basis of which of your students can argue with you, do it scientifically
2. More diagnostic information on students * another immediate payoff of item analysis
Classroom level: * will tell you which questions they were all guessing on; if you find a question which most of them found very difficult, you can reteach that concept * CAN do item analysis on pre-tests too: * if you find a question they all got right, don't waste more time on this area * find the wrong answers they are choosing to identify common misconceptions * can't tell this just from the score on the total test, or the class average
Individual level: * isolate specific errors this child made * after you've planned these tests, written perfect questions, and now analyzed the results, you're going to know more about these kids than they know themselves
3. Build future tests, revise test items to make them better * REALLY pays off the second time you teach the same course * by now you know how much work writing good questions is * studies have shown us that it is FIVE times faster to revise items that didn't work, using item analysis, than to try to replace them with completely new questions * a new item would just have new problems anyway

--> this way you eventually get perfect items, the envy of your neighbours * SHOULD NOT REUSE WHOLE TESTS --> diagnostic teaching means that you are responding to the needs of your students, so after a few years you build up a bank of test items from which you can custom-make tests for your class * know what the class average will be before you even give the test, because you will know approximately how difficult each item is before you use it * can spread difficulty levels across your blueprint too...
4. Part of your continuing professional development * doing the occasional item analysis will help teach you how to become a better test writer * and you're also documenting just how good your evaluation is * useful for dealing with parents or principals if there's ever a dispute * Once you start bringing out all these impressive-looking stats, parents and administrators will believe that maybe you do know what you're talking about when you fail students... * Parent says, "I think your question stinks,"

well, "according to the item analysis, this question appears to have worked well -- it's your son that stinks"

(just kidding! --actually, face validity takes priority over stats any day!) * And if the analysis shows that the question does stink, you've already dropped it before you've handed it back to the student, let alone the parent seeing it...
5. Before and After Pictures * long-term payoff * collect this data over ten years: not only do you get a great item bank, but if you change how you teach the course, you can find out if the innovation is working * If you have a strong class (as compared to the provincial baseline) but they do badly on the same test you used five years ago, the new textbook stinks.

ITEM ANALYSIS is one area where even a lot of otherwise very good classroom teachers fall down * They think they're doing a good job; they think they're doing good evaluation, but without doing item analysis, they can't really know * Part of being a professional is going beyond the illusion of doing a good job to finding out whether you really are * But it's something a lot of teachers just don't know HOW to do
Do it indirectly when kids argue with them...wait for complaints from students, student's parents and maybe other teachers...
ON THE OTHER HAND.... * I do realize that I am advocating here more work for you in the short term, but, it will pay off in the long term
But realistically:

* Probably only doing it for your most important tests * End of unit tests, final exams --> summative evaluation * Especially if you're using common exams with other teachers * Common exams give you bigger sample to work with, which is good * Makes sure that questions other teacher wrote are working for YOUR class * Maybe they taught different stuff in a different way * Impress the daylights out of your colleagues
*Probably only doing it for test questions you are likely going to reuse next year

*Spend less time on item analysis than on revising items * Item analysis is not an end in itself, * No point unless you use it to revise items, * And help students on basis of information you get out of it
I also find that, if you get into it, it is kind of fascinating. When stats turn out well, it's objective, external validation of your work. When stats turn out differently than you expect, it becomes a detective mystery as you figure out what went wrong.
But you'll have to take my word on this until you try it on your own stuff.

Sample Size

The larger the sample, the better and more accurate the data you get. * Government would not accept any sample smaller than 250 students, balanced for urban/rural, public/separate, and zone: north/south/central/etc. * But there's not much you can do about small class/sample size, except

(a) give the test to more than one class (secondary teachers) * Need to balance reusing tests three periods in a row against the chance of students in the first class tipping off the third class on questions...

(b) accumulate stats over a couple of years * IF you haven't changed the question AT ALL, you can add this year's results to last year's to increase sample size * If you have changed the question, you can compare this year's stats to last year's to see if they have changed in the direction you wanted * Have to be cautious, because with very small samples some changes may just be random chance...
(c) Mostly just do it with your 30 students and just be cautious how you interpret these results * Procedures will give you hints at 30 students, proof positive at 400

The procedure works fastest if you have had students use a separate answer sheet * Separate answer sheet means faster marking, so more time for interpretation * Separate answer sheet means more accurate marking too * Believe me, if you're marking "A, B, C, D", turn the page, sooner or later you're going to turn two pages at once and go 3/4 of the way through the booklet before you notice that you're now on the last page and have six more answers on the answer key than you have answers to mark... * Separate answer sheet means much faster counting for the item analysis you have to do, UNLESS you're in grades 1-3, in which case have them write on the booklet, because they get confused transferring answers to a separate page
Eight Simple Steps to Item Analysis

1. Score each answer sheet, write score total on the corner * obviously have to do this anyway 2. Sort the pile into rank order from top to bottom score
(1 minute, 30 seconds tops) 3. If normal class of 30 students, divide class in half * Same number in top and bottom group: * Toss middle paper if odd number (put aside) 4. Take 'top' pile, count number of students who responded to each alternative * Fast way is simply to sort piles into "A", "B", "C", "D" // or true/false or type of error you get for short answer, fill-in-the-blank

OR set up on a spreadsheet if you're familiar with computers.

ITEM ANALYSIS FORM - TEACHER CONSTRUCTED TESTS (CLASS SIZE = 30)

ITEM      UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
1.  A       0
   *B       4
    C       1
    D       1
    O
(* = Keyed Answer)

* Repeat for the lower group

ITEM      UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
1.  A       0
   *B       4      2
    C       1
    D       1
    O
(* = Keyed Answer)

* This is the time-consuming part --> but not that bad; can do it while watching TV, because you're just sorting piles

THREE POSSIBLE SHORT CUTS HERE (STEP 4)
(A) If you have a large sample of around 100 or more, you can cut down the sample you work with * take the top 27% (27 out of 100) and the bottom 27% (so you're only dealing with 54, not all 100) * put the middle 46 aside for the moment * The larger the sample, the more accurate, but you have to trade off against labour; using the top 1/3 or so is probably good enough by the time you get to 100 -- 27% is the magic figure statisticians tell us to use * I'd use halves at 30, but you could just use a sample of the top 10 and bottom 10 if you're pressed for time * But it means a single student changes the stats by 10% * Trading off speed for accuracy... * But I'd rather have you doing ten and ten than nothing
(B) Second short cut, if you have access to photocopier (budgets) * Photocopy answer sheets, cut off identifying info
(Can't use if handwriting is distinctive) * Colour code high and low groups --> dab of marker pen color * Distribute randomly to students in your class so they don't know whose answer sheet they have * Get them to raise their hands * For #6, how many have "A" on blue sheet? how many have "B"; how many "C" * For #6, how many have "A" on red sheet.... * Some reservations because they can screw you up if they don't take it seriously * Another version of this would be to hire kid who cuts your lawn to do the counting, provided you've removed all identifying information * I actually did this for a bunch of teachers at one high school in Edmonton when I was in university for pocket money
(C) Third shortcut: IF you can't use a separate answer sheet, it is sometimes faster to type than to sort.

SAMPLE OF TYPING FORMAT FOR ITEM ANALYSIS
ITEM #   1  2  3  4  5  6  7  8  9  10
KEY      T  F  T  F  T  A  D  C  A  B
Kay      T  T  T  F  F  A  D  D  A  C
Jane     T  T  T  F  T  A  D  C  A  D
John     F  F  T  F  T  A  D  C  A  B
Type the name; then T or F, or A, B, C, D == all left hand on the typewriter, leaving the right hand free to turn pages (from Sax) * IF you have a computer program -- some are kicking around -- it will give you all the stats you need, plus bunches more you don't, automatically after this stage
OVERHEAD: SAMPLE ITEM ANALYSIS FOR CLASS OF 30 (PAGE #1) (in text)

5. Subtract the number of students in the lower group who got the question right from the number of high-group students who got it right * Quite possible to get a negative number

ITEM      UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
1.  A       0
   *B       4      2        2
    C       1
    D       1
    O
(* = Keyed Answer)

6. Divide the difference by the number of students in the upper (or lower) group * In this case, divide by 15 * This gives you the "discrimination index" (D)

ITEM      UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
1.  A       0
   *B       4      2        2      0.333
    C       1
    D       1
    O
(* = Keyed Answer)

7. Total the number who got it right

ITEM      UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
1.  A       0
   *B       4      2        2      0.333    6
    C       1
    D       1
    O
(* = Keyed Answer)

8. If you have a large class and were only using the 1/3 sample for top and bottom groups, then you have to NOW count the number of the middle group who got each question right (not each alternative this time, just right answers)
9. Sample form, Class Size = 100 * if class of 30, upper and lower half, no other column here
10. Divide the total by the total number of students * Difficulty = the proportion who got it right (p)

ITEM      UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
1.  A       0
   *B       4      2        2      0.333    6       .42
    C       1
    D       1
    O
(* = Keyed Answer)

11. You will NOTE the complete lack of complicated statistics --> counting, adding, dividing --> no tricky formulas required for this * Not going to worry about corrected point biserials etc. * One of the advantages of using a fixed number of alternatives
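Here is a minimal sketch of steps 2-10 for one item (the student records, dictionary layout and function name are assumptions made for illustration; with a large class you would take the top and bottom 27% instead of halves and count the middle group's right answers separately, as described above):

```python
def item_analysis(students, key, item):
    """students: list of dicts, each with a total test 'score' and an 'answers' dict (item -> option).
    Returns option counts for the upper and lower halves, the discrimination index D,
    and the difficulty p (proportion of the whole class answering correctly)."""
    ranked = sorted(students, key=lambda s: s["score"], reverse=True)   # step 2: rank order
    half = len(ranked) // 2                                             # step 3: split in half
    upper, lower = ranked[:half], ranked[-half:]                        # middle paper set aside if odd

    def count(group):                                                   # step 4: tally each alternative
        tally = {}
        for s in group:
            choice = s["answers"].get(item, "O")                        # "O" = omit / no answer
            tally[choice] = tally.get(choice, 0) + 1
        return tally

    upper_counts, lower_counts = count(upper), count(lower)
    right_upper = upper_counts.get(key, 0)
    right_lower = lower_counts.get(key, 0)
    d_index = (right_upper - right_lower) / half                        # steps 5-6: discrimination D
    total_right = sum(1 for s in students if s["answers"].get(item) == key)
    difficulty = total_right / len(students)                            # steps 7 and 10: difficulty p
    return upper_counts, lower_counts, d_index, difficulty
```

With a class of 30 this produces the same UPPER/LOWER counts, D and DIFFICULTY columns as the form above.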

Interpreting Item Analysis

Let's look at what we have and see what we can see * 90% of item analysis is just common sense...
1. Potential Miskey
2. Identifying Ambiguous Items
3. Equal distribution to all alternatives
4. Alternatives are not working
5. Distracter too attractive
6. Question not discriminating
7. Negative discrimination
8. Too Easy
9. Omit
10. Relationship between D index and Difficulty (p)

Item Analysis of Computer Printouts

1. What do we see looking at this first one? [Potential Mis-key]

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
1. *A       1      4       -3       -.2      5       .17
    B       1      3
    C      10      5
    D       3      3
    O  (<---- means omit or no answer)

* #1: more high-group students chose C than A, even though A is supposedly the correct answer * More low-group students chose A than high-group students, so you got negative discrimination * Only about 17% of the class (5 of 30) got it right * Most likely you just wrote the wrong answer key down
--> This is an easy and very common mistake for you to make * Better you find out now, before you hand back, than when kids complain * OR WORSE, they don't complain, and teach themselves your miskey as the "correct" answer * So check it out and rescore that question on all the papers before handing them back * Makes it 10-5, Difference = 5; D = .34; Total = 15; difficulty = .50
--> nice item
OR:
* You check and find that you didn't miskey it --> that is the answer you thought

Two possibilities: * One possibility is that you made a slip of the tongue and taught them the wrong answer * Anything you say in class can be taken down and used against you on an examination.... * More likely it means even "good" students are being tricked by a common misconception -->
You're not supposed to have trick questions, so may want to dump it
--> Give those who got it right their point, but total rest of the marks out of 24 instead of 25
If scores are high, or you want to make a point, might let it stand, and then teach to it --> sometimes if they get caught, will help them to remember better in future
Such as: * Very fine distinctions * Crucial steps which are often overlooked
REVISE it for next time to weaken "B"
-- Alternatives are not supposed to draw more than the keyed answer
-- Almost always an item flaw, rather than useful distinction

2. What can we see with #2? [Can identify ambiguous items]

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
2.  A       6      5
    B       1      2
   *C       7      5        2       .13     12       .40
    D       1      3
    O

* #2: about equal numbers of top students went for A and for C (the keyed answer).
Suggests they couldn't tell which was correct * Either students didn't know this material (in which case you can reteach it) * Or the item was defective ---> * Look at their favorite alternative again, and see if you can find any reason they could be choosing it * Often items that look perfectly straightforward to adults are ambiguous to students
Favorite examples of ambiguous items: * If you NOW realize that A was a defensible answer, rescore before you hand it back to give everyone credit for either A or C -- avoids arguing with you in class * If it's clearly a wrong answer, then you now know which error most of your students are making to get the wrong answer * Useful diagnostic information on their learning, your teaching

3. Equal distribution to all alternatives

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
3.  A       4      3
    B       3      4
   *C       5      4        1       .06      9       .30
    D       3      4
    O

* Item #3: students respond about equally to all alternatives * Usually means they are guessing
Three possibilities: * May be material you didn't actually get to yet * You designed the test in advance (because I've convinced you to plan ahead) but didn't actually get everything covered before the holidays.... * Or an item on a common exam that you didn't stress in your class * Item so badly written students have no idea what you're asking * Item so difficult students are just completely baffled * Review the item: * If badly written (by another teacher) or on material your class hasn't taken, toss it out, rescore the exam out of the lower total * BUT give credit to those that got it, to a total of 100%

* If seems well written, but too hard, then you know to (re)teach this material for rest of class.... * Maybe the 3 who got it are top three students, * Tough but valid item: * OK, if item tests valid objective * Want to provide occasional challenging question for top students * But make sure you haven't defined "top 3 students" as "those able to figure out what the heck I'm talking about"

4. Alternatives aren't working

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
4.  A       1      5
   *B      14      7        7       .47     21       .70
    C       0      2
    D       0      0
    O

* Example #4 --> no one fell for D --> so it is not a plausible alternative * Question is fine for this administration, but revise the item for next time * Toss alternative D, replace it with something more realistic * Each distracter has to attract at least 5% of the students * Class of 30: should get at least two students

* Or might accept one if you positively can't think of another fourth alternative -- otherwise, do not reuse the item
If two alternatives don't draw any students --> might consider redoing as true/false

5. Distracter too attractive

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
5.  A       7     10
    B       1      2
    C       1      1
   *D       5      2        3       .20      7       .23
    O

* Sample #5 --> too many going for A
--> No ONE distracter should get more than key

--> No one distracter should pull more than about half of students

-- Doesn't leave enough for correct answer and five percent for each alternative * Keep for this time * Weaken it for next time

6. Question not discriminating

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
6. *A       7      7        0       .00     14       .47
    B       3      2
    C       2      1
    D       3      5
    O

* Sample #6: the low group gets it as often as the high group * On norm-referenced tests, the point is to rank students from best to worst * So individual test items should have good students get the question right, poor students get it wrong * The test overall decides who is a good or poor student on this particular topic * Those who do well have more information and skills than those who do less well * So if on a particular question those with more skills and knowledge do NOT do better, something may be wrong with the question * The question may be VALID, but off topic * E.g.: the rest of the test tests thinking skill, but this is a memorization question; skilled and unskilled are equally likely to recall the answer * Should have a homogeneous test --> don't have a math item in with social studies * If you wanted to get really fancy, you would do a separate item analysis for each cell of your blueprint...as long as you had six items per cell

* Question is VALID, on topic, but not RELIABLE * Addresses the specified objective, but isn't a useful measure of individual differences * Asking Grade 10s Capital of Canada is on topic, but since they will all get it right, won't show individual differences -- give you low D

7. Negative Discrimination

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
7. *A       7     10       -3       -.20    17       .57
    B       3      3
    C       2      1
    D       3      1
    O

* The D (discrimination) index is just upper group minus lower group * Varies from +1.0 to -1.0 * If all the top group got it right and all the lower group got it wrong = 100% = +1 * If more of the bottom group get it right than the top group, you get a negative D index * If you have a negative D, it means that students with less skill and knowledge overall are getting it right more often than those who the test says are better overall * In other words, the better you are, the more likely you are to get it wrong
WHAT COULD ACCOUNT FOR THAT?

Two possibilities: * Usually means an ambiguous question * That is confusing good students, but weak students too weak to see the problem * Look at question again, look at alternatives good students are going for, to see if you've missed something

OR: * Or it might be off topic

--> Something weaker students are better at (like rote memorization) than good students

--> Not part of same set of skills as rest of test--> suggests design flaw with table of specifications perhaps
((If you end up with a whole bunch of negative D indices on the same test, it must mean you actually have two distinct skills, because by definition the low group is the high group on that bunch of questions
--> End up treating them as two separate tests)) * If you have a large enough sample (like the provincial exams) then we toss the item and either don't count it or give everyone credit for it * With a sample of 100 students or less, it could just be random chance, so basically ignore it in terms of THIS administration. * Kids wrote it, give them the mark they got * Furthermore, if you keep dropping questions, you may find that you're starting to develop serious holes in your blueprint coverage -- a problem for sampling * But you want to track this stuff FOR NEXT TIME * If it's negative on administration after administration, consistently, it's likely not random chance; it's screwing up in some way * Want to build your future tests out of those items with high positive D indices * the higher the average D indices on the test, the more RELIABLE the test as a whole will be * Revise items to increase D
-->If good students are selecting one particular wrong alternative, make it less attractive

-->or increase probability of their selecting right answer by making it more attractive * May have to include some items with negative Ds if those are the only items you have for that specification, and it's an important specification * What this means is that there are some skills/knowledge in this unit which are unrelated to rest of the skills/knowledge
--> but may still be important

* e.g., statistics part of this course may be terrible on those students who are the best item writers, since writing tends to be associated with the opposite hemisphere in the brain than math, right... but still important objective in this course * May lower reliability of test, but increases content validity

8. Too Easy

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
8.  A       0      1
   *B      14     13        1       .06     27       .90
    C       0      1
    D       1      1
    O

* Too easy or too difficult won't discriminate well either * Difficulty (p) (for proportion) varies from +1.0 (everybody got it right) to 0 (nobody)
REMEMBER: THE HIGHER THE DIFFICULTY INDEX, THE EASIER THE QUESTION * If the item is NOT miskeyed or has some other glaring problem, it's too late to change it after it's administered --> everybody got it right, OK, give them the mark
TOO DIFFICULT = 30 to 35% (used to be rule in Branch, now not...) * If the item is too difficult, don't drop it, just because everybody missed it --> you must have thought it was an important objective or it wouldn't have been on there; * And unless literally EVERYONE missed it, what do you do with the students who got it right? * Give them bonus marks? * Cheat them of a mark they got?
Furthermore, if you drop too many questions, lose content validity (specs)
--> If two or three got it right may just be random chance,
So why should they get a bonus mark

* However, DO NOT REUSE questions with too high or low difficulty (p) values in future
If difficulty is over 85%, you're wasting space on a limited-item test * Asking Grade 10s the Capital of Canada is probably a waste of their time and yours --> unless this is a particularly vital objective * Same applies to items which are too difficult --> no use asking Grade 3s to solve a quadratic equation * But you may want to revise the question to make it easier or harder rather than just toss it out cold
OR SOME EXCEPTIONS HERE:

You may have consciously decided to develop a "Mastery"-style test
--> Will often have very easy questions and expect everyone to get everything; you are trying to identify only those who are not ready to go on

--> In which case, don't use any question which has a difficulty level below 85% (or whatever cutoff you choose)
Or you may want a test to identify the top people in class, the reach for the top team, and design a whole test of really tough questions
--> Have low difficulty values (i.e., very hard)

* So depends a bit on what you intend to do with the test in question
This is what makes the difficulty index (proportion) so handy * You create a bank of items over the years
--> Using item analysis you get better questions all the time, until you have a whole bunch that work great

-->Can then tailor-make a test for your class

You want to create an easier test this year, you pick questions with higher difficulty (p) values;

You want to make a challenging test for your gifted kids, choose items with low difficulty (p) values

--> For most applications you will want to set the difficulty level so that it gives you average marks, a nice bell curve * Government uses 62.5% --> four-item multiple choice, middle of the bell curve
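The 62.5% figure appears to be just the midpoint between the chance score on a four-option item (25%) and a perfect score; a one-line sketch of that rule of thumb (generalising it to other option counts is an extrapolation, not something the notes state):

```python
def target_difficulty(n_options):
    """Midpoint between the chance level (1 / number of options) and 100%."""
    chance = 1.0 / n_options
    return (chance + 1.0) / 2.0

print(target_difficulty(4))  # 0.625 -> the 62.5% figure quoted above
print(target_difficulty(2))  # 0.75 for true/false items
```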

* Start tests with an easy question or two to give students a running start * Make sure that the difficulty levels are spread out over examination blueprint * Not all hard geography questions, easy history * Unfair to kids who are better at geography, worse at history * Turns class off geography if they equate it with tough questions
--> REMEMBER here that difficulty is different from complexity (Bloom) * So you can have a difficult recall-knowledge question and an easy synthesis question * Synthesis and evaluation items will tend to be harder than recall questions, so if you find higher levels are more difficult, OK, but try to balance cells as much as possible * Certainly content cells should be roughly the same

9. OMIT

          UPPER  LOWER  DIFFERENCE    D    TOTAL  DIFFICULTY
9.  A       2      1
    B       3      4
   *C       7      3        4       .26     10       .33
    D       1      1
    O       2      4
If near the end of the test: * --> They didn't find it because it was on the next page

-- format problem
OR
* --> Your test is too long: 6 of them (20%) didn't get to it

OR, if in the middle of the test: --> It totally baffled them, because: * Way too difficult for these guys * or, since 2 from the high group omitted it too, ambiguous wording

1. & 2. RELATIONSHIP BETWEEN D INDEX AND DIFFICULTY (p) Upper Low Difference D Total Difficulty 10. A 0 5 *B 15 0 15 1.0 15 .50 C 0 5 D 0 5 O --------------------------------------------------- 11. A 3 2 *B 8 7 1 0.6 15 .50 C 2 3 D 2 3 O | * 10 is a perfect item --> each distracter gets at least 5 discrimination index is +1.0
(ACTUALLY A PERFECT ITEM WOULD HAVE A DIFFICULTY OF 65% TO ALLOW FOR GUESSING) * High discrimination (D) indices require optimal levels of difficulty * But optimal levels of difficulty do not assure high levels of D * #11 has the same difficulty level, but a different D * On a four-item multiple-choice, a student answering totally by chance will get 25%
Program Evaluation

When your kids write the Diploma or Achievement Test, the Department sends out a printout of how your class did compared to everybody else in the province
Three types of report: 1. ASSESSMENT HIGHLIGHTS (pamphlet) * How are kids doing today in terms of meeting the standards? * How are they doing compared to four years ago? eight years ago?
(monitor over time) 2. PROVINCIAL REPORT * Format keeps changing --> some years all tests in one book to save on paper and mailing costs; other years each exam gets its own report * Tons of technical information (gender stuff, etc.) 3. JURISDICTION & SCHOOL REPORTS
(Up to superintendent what happens to these after that
--> can publish in newspaper, keep secret central office only, etc.) * Get your hands on these and interpret them * Either you do it or someone else will do it for/to you * Better teachers take responsibility rather than top down * New table formats are so easy to interpret there's no reason not to * This means you can compare their responses to the responses of 30,000 students across the province * Will help you calibrate your expectations for this class * Is your particular class a high or low one? * Have you set your standards too high or too low? * Giving everyone 'C's because you think they ought to do better than this, but they all ace the provincial tests?
Who Knows Where This is? OVERHEAD: SCHOOL TABLE 2 (June 92 GRADE 9 Math Achievement) * Check table 2 for meeting standard of excellence * Standards set by elaborate committee structure
This example (overhead): Your class had 17 students
Total test out of 49 (means a test of 50, but one was dropped after item analysis)
Standard setting procedures decided that 42/49 is standard of EXCELLENCE for grade 9s in Alberta
Next column shows they expect 15% to reach this standard

Standard-setting procedures decided that 23 out of 49 is the Acceptable Standard; the next column says they expect 85% to reach that standard

Columns at end of table show that actually, only 8.9% made standard of excellence, and only 67.4% made acceptable standard

(Bad news!)

But looking at YOUR class, 5.9%, almost 6%, made the standard of excellence (so fewer than the Province as a whole), but on the other hand, 76.5% met the acceptable standard.
Need comparison -- otherwise, fact that only get 6% to excellence might sound bad...
Interpretation: either you only have one excellent math student in your class, or you are teaching to acceptable standard, but not encouraging excellence?
BUT can use tables to look deeper, * Use tables to identify strengths and weaknesses in student learning * And therefore identify your own strength and weaknesses
Problem solving & knowledge/skills broken down --> table of specs topics interestingly, though, above provincial on problem solving at excellence...

ASK: How do you explain the % meeting the standard on knowledge and the % meeting it on problem solving both being higher than the % meeting the standard on the whole test?

Answer: Low correlation between performance on the two types of questions
(i.e., those who met standard on the one often did not on the other) which means (a) can't assume that easy/hard = Bloom's taxonomy and (b) that you have to give students both kinds of questions on your test or you are being unfair to group who is better at the other stuff
Don't know where this is OVERHEAD: SCHOOL TABLE 5.1 (GRADE 9 MATH, JUNE 92) * Check tables 5.1 to 5.6 for particular areas of strengths and weaknesses * Look for every question where students in your school were 5% different on keyed answer from those in provincial test * If 5% or more higher, is a particular strength of your program * If 5% or more lower, is a particular weakness * Note that score on question irrelevant, only difference from rest of province
--> e.g., if you only got 45% but the province only got 35%, then that's a significant strength

--> The fact that less than 50% just means it was a really tough question, too hard for this grade * Similarly, just because got 80% doesn't make your class good if province is 98% * If find all strengths or all weaknesses, where is the gap lowest? * Least awful = strengths; least strong = weakness to work on
THIS EXAMPLE? All above provincial scores on these skills. "Converts a decimal into a fraction": 76.5 - 60.9 = 15.6% above the provincial norm, so decimal-to-fraction is a strength. All of these are good, but find the least good --> that's the area to concentrate on. Question 10: only a 4.5% difference -- so that's your weak spot, the one area where you aren't significantly above the rest of the province

You can even begin to set standards in your class as province does
--> i.e., ask yourself BEFORE the test how many of these questions should your class be able to do on this test?
Then look at actual performance.

How did my students do? Compared to what?
My classroom expectations
School's expectations
Jurisdiction's expectations
Provincial expectations
The last/previous test administered
Community expectations
(Each jurisdiction now has its own public advisory committee)
You can even create your own statistics to compare with provincial standard * Lots of teachers recycle Diploma and Achievement test questions, but they only do it to prep kids for actual exam --> losing all that diagnostic info
HOWEVER: Avoid comparisons between schools * Serves no useful purpose, has no logic, since taken out of context * e.g., comparing a cancer clinic and a walk-in clinic --> a higher death rate in the cancer clinic doesn't mean it's worse; it may be the best cancer clinic in the world, doing a great job given the more serious nature of the problems it faces * Invidious comparisons like this become a "blaming" exercise * Self-fulfilling prophecy: parents pull kids from that school
Provincial authorities consider such comparisons a misuse of results * School report = your class if only one class; but if two or more classes, then we are talking about your school's program

--> Forces you to get together with other teachers to find out what they're doing

--> Pool resources, techniques, strategies to address problem areas....

Major Uses of Item Analysis
Item analysis can be a powerful technique available to instructors for the guidance and improvement of instruction. For this to be so, the items to be analyzed must be valid measures of instructional objectives. Further, the items must be diagnostic, that is, knowledge of which incorrect options students select must be a clue to the nature of the misunderstanding, and thus prescriptive of appropriate remediation.
In addition, instructors who construct their own examinations may greatly improve the effectiveness of test items and the validity of test scores if they select and rewrite their items on the basis of item performance data. Such data is available to instructors who have their examination answer sheets scored at the Scoring Office.

Item Analysis Reports
As the answer sheets are scored, records are written which contain each student's score and his or her response to each item on the test. These records are then processed and an item analysis report file is generated. An instructor may obtain test score distributions and a list of students' scores, in alphabetic order, in student number order, in percentile rank order, and/or in order of percentage of total points. Instructors are sent their item analysis reports as e-mail attachments. The item analysis report is contained in the file IRPT####.RPT, where the four digits indicate the instructor's GRADER III account. A sample of an individual long-form item analysis listing for the item response pattern is shown below.

Item 10 of 125. The correct option is 5.

                          Item Response Pattern
                 1     2     3     4     5    Omit  Error  Total
Upper 27%   N    2     8     0     1    19     0     0      30
            %    7    27     0     3    63     0     0     100
Middle 46%  N    3    20     3     3    23     0     0      52
            %    6    38     6     6    44     0     0     100
Lower 27%   N    6     5     8     2     9     0     0      30
            %   20    17    27     7    30     0     0     101
Total       N   11    33    11     6    51     0     0     112
            %   10%   29%   11%    5%   46%    0%    0%    100%

Item Analysis Response Patterns
Each item is identified by number and the correct option is indicated. The group of students taking the test is divided into upper, middle and lower groups on the basis of students' scores on the test. This division is essential if information is to be provided concerning the operation of distracters (incorrect options) and to compute an easily interpretable index of discrimination. It has long been accepted that optimal item discrimination is obtained when the upper and lower groups each contain twenty-seven percent of the total group.
The number of students who selected each option or omitted the item is shown for each of the upper, middle, lower and total groups. The number of students who marked more than one option to the item is indicated under the "error" heading. The percentage of each group who selected each of the options, omitted the item, or erred, is also listed. Note that the total percentage for each group may be other than 100%, since the percentages are rounded to the nearest whole number before totalling.
The sample item listed above appears to be performing well. About two-thirds of the upper group but only one-third of the lower group answered the item correctly. Ideally, the students who answered the item incorrectly should select each incorrect response in roughly equal proportions, rather than concentrating on a single incorrect option. Option two seems to be the most attractive incorrect option, especially to the upper and middle groups. It is most undesirable for a greater proportion of the upper group than of the lower group to select an incorrect option. The item writer should examine such an option for possible ambiguity. For the sample item above, option four was selected by only five percent of the total group. An attempt might be made to make this option more attractive.
Item analysis provides the item writer with a record of student reaction to items. It gives us little information about the appropriateness of an item for a course of instruction. The appropriateness or content validity of an item must be determined by comparing the content of the item with the instructional objectives.

Basic Item Analysis Statistics
A number of item statistics are reported which aid in evaluating the effectiveness of an item. The first of these is the index of difficulty, which is the proportion of the total group who got the item wrong. Thus a high index indicates a difficult item and a low index indicates an easy item. Some item analysts prefer an index of difficulty which is the proportion of the total group who got an item right. This index may be obtained by marking the PROPORTION RIGHT option on the item analysis header sheet. Whichever index is selected is shown as the INDEX OF DIFFICULTY on the item analysis print-out. For classroom achievement tests, most test constructors desire items with indices of difficulty no lower than 20 nor higher than 80, with an average index of difficulty from 30 or 40 to a maximum of 60.
The INDEX OF DISCRIMINATION is the difference between the proportion of the upper group who got an item right and the proportion of the lower group who got the item right. This index is dependent upon the difficulty of an item. It may reach a maximum value of 100 for an item with an index of difficulty of 50, that is, when 100% of the upper group and none of the lower group answer the item correctly. For items of less than or greater than 50 difficulty, the index of discrimination has a maximum value of less than 100. The Interpreting the Index of Discrimination document contains a more detailed discussion of the index of discrimination.

Interpretation of Basic Statistics
To aid in interpreting the index of discrimination, the maximum discrimination value and the discriminating efficiency are given for each item. The maximum discrimination is the highest possible index of discrimination for an item at a given level of difficulty. For example, an item answered correctly by 60% of the group would have an index of difficulty of 40 and a maximum discrimination of 80. This would occur when 100% of the upper group and 20% of the lower group answered the item correctly. The discriminating efficiency is the index of discrimination divided by the maximum discrimination. For example, an item with an index of discrimination of 40 and a maximum discrimination of 50 would have a discriminating efficiency of 80. This may be interpreted to mean that the item is discriminating at 80% of the potential of an item of its difficulty. For a more detailed discussion of the maximum discrimination and discriminating efficiency concepts, see the Interpreting the Index of Discrimination document.
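A minimal sketch of these two derived statistics, matching the worked figures above (the function names are illustrative, and the closed form for the maximum assumes equal-sized upper and lower groups):

```python
def maximum_discrimination(index_of_difficulty):
    """Highest possible index of discrimination (0-100 scale) for an item at a given
    index of difficulty, e.g. difficulty 50 -> 100, difficulty 40 (60% correct) -> 80."""
    return 2 * min(index_of_difficulty, 100 - index_of_difficulty)

def discriminating_efficiency(index_of_discrimination, index_of_difficulty):
    """Index of discrimination divided by the maximum discrimination, on a 0-100 scale."""
    return 100.0 * index_of_discrimination / maximum_discrimination(index_of_difficulty)

print(maximum_discrimination(40))         # 80, as in the example above
print(discriminating_efficiency(40, 75))  # 80: discrimination 40 against a maximum of 50
```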

Other Item Statistics
Some test analysts may desire more complex item statistics. Two correlations which are commonly used as indicators of item discrimination are shown on the item analysis report. The first is the biserial correlation, which is the correlation between a student's performance on an item (right or wrong) and his or her total score on the test. This correlation assumes that the distribution of test scores is normal and that there is a normal distribution underlying the right/wrong dichotomy. The biserial correlation has the characteristic, disconcerting to some, of having maximum values greater than unity. There is no exact test for the statistical significance of the biserial correlation coefficient.
The point biserial correlation is also a correlation between student performance on an item (right or wrong) and test score. It assumes that the test score distribution is normal and that the division on item performance is a natural dichotomy. The possible range of values for the point biserial correlation is +1 to -1. The Student's t test for the statistical significance of the point biserial correlation is given on the item analysis report. Enter a table of Student's t values with N - 2 degrees of freedom at the desired percentile point. N represents the total number of students appearing in the item analysis.
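A minimal sketch of the point biserial correlation and its t test (the data and names are invented; the formula is the standard one relating a right/wrong item to total score):

```python
import math

def point_biserial(right, scores):
    """right: list of 0/1 item results; scores: matching total test scores.
    r_pb = ((M1 - M0) / s_x) * sqrt(p * q), where M1 and M0 are the mean scores of those
    who got the item right and wrong, s_x is the standard deviation of all scores,
    p is the proportion right and q = 1 - p."""
    n = len(scores)
    p = sum(right) / n
    q = 1.0 - p
    m1 = sum(s for r, s in zip(right, scores) if r) / (p * n)
    m0 = sum(s for r, s in zip(right, scores) if not r) / (q * n)
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)
    return (m1 - m0) / sd * math.sqrt(p * q)

def t_statistic(r, n):
    """Student's t for the significance of r, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1.0 - r ** 2))

right = [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]          # invented 0/1 results on one item
scores = [18, 16, 17, 9, 15, 8, 14, 10, 19, 7]  # invented total test scores
r = point_biserial(right, scores)
print(round(r, 2), round(t_statistic(r, len(scores)), 2))
```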
The mean scores for students who got an item right and for those who got it wrong are also shown. These values are used in computing the biserial and point biserial coefficients of correlation and are not generally used as item analysis statistics.
Generally, item statistics will be somewhat unstable for small groups of students. Perhaps fifty students might be considered a minimum number if item statistics are to be stable. Note that for a group of fifty students, the upper and lower groups would contain only thirteen students each. The stability of item analysis results will improve as the group of students is increased to one hundred or more. An item analysis for very small groups must not be considered a stable indication of the performance of a set of items.

Summary Data
The item analysis data are summarized on the last page of the item analysis report. The distribution of item difficulty indices is a tabulation showing the number and percentage of items whose difficulties are in each of ten categories, ranging from a very easy category (00-10) to a very difficult category (91-100). The distribution of discrimination indices is tabulated in the same manner, except that a category is included for negatively discriminating items.
The mean item difficulty is determined by adding all of the item difficulty indices and dividing the total by the number of items. The mean item discrimination is determined in a similar manner.
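A minimal sketch of how that summary page could be assembled from per-item indices (the item indices below are invented, and the interior category boundaries are an assumption; the text only names the end categories):

```python
def summarize(difficulties, discriminations):
    """Tabulate difficulty indices into ten categories (00-10 ... 91-100) and report
    the mean item difficulty and the mean item discrimination."""
    bounds = [(0, 10), (11, 20), (21, 30), (31, 40), (41, 50),
              (51, 60), (61, 70), (71, 80), (81, 90), (91, 100)]
    table = {f"{lo:02d}-{hi}": sum(lo <= d <= hi for d in difficulties) for lo, hi in bounds}
    negative = sum(d < 0 for d in discriminations)  # extra category for negatively discriminating items
    mean_difficulty = sum(difficulties) / len(difficulties)
    mean_discrimination = sum(discriminations) / len(discriminations)
    return table, negative, mean_difficulty, mean_discrimination

# Invented indices for a five-item test.
print(summarize([17, 40, 23, 57, 90], [-20, 31, 13, 47, 6]))
```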
Test reliability, estimated by the Kuder-Richardson formula number 20, is given. If the test is speeded, that is, if some of the students did not have time to consider each test item, then the reliability estimate may be spuriously high.
The final test statistic is the standard error of measurement. This statistic is a common device for interpreting the absolute accuracy of the test scores. The size of the standard error of measurement depends on the standard deviation of the test scores as well as on the estimated reliability of the test.
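A minimal sketch of KR-20 and the standard error of measurement (the 0/1 score matrix is invented; rows are students, columns are items):

```python
import math

def kr20(item_matrix):
    """Kuder-Richardson formula 20: (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total scores)).
    item_matrix[s][i] is 1 if student s got item i right, else 0.
    Returns the reliability estimate and the standard deviation of total scores."""
    k = len(item_matrix[0])                              # number of items
    n = len(item_matrix)                                 # number of students
    totals = [sum(row) for row in item_matrix]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_matrix) / n       # proportion right on item i
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total), math.sqrt(var_total)

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability): depends on both the score spread and the reliability."""
    return sd * math.sqrt(1 - reliability)

matrix = [[1, 1, 1, 1, 0],   # invented 6-student, 5-item results
          [1, 1, 1, 0, 0],
          [1, 1, 0, 0, 0],
          [1, 0, 1, 0, 0],
          [0, 1, 0, 0, 0],
          [1, 1, 1, 1, 1]]
reliability, sd = kr20(matrix)
print(round(reliability, 2), round(standard_error_of_measurement(sd, reliability), 2))
```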
Occasionally, a test writer may wish to omit certain items from the analysis although these items were included in the test as it was administered. Such items may be omitted by leaving them blank on the test key. The statistics for these items will be omitted from the Summary Data.

Report Options
A number of report options are available for item analysis data. The long-form item analysis report contains three items per page. A standard-form item analysis report is available where data on each item is summarized on one line. A sample report is shown below.

ITEM ANALYSIS   Test 4   125 Items   49 Students
Percentages: Upper 27% - Middle - Lower 27%

Item  Key      1          2          3          4          5       Omit   Error  Diff  Disc
  1    2   15-22-31   69-57-38   08-17-15   00-04-00   08-00-15   0-0-0   0-0-0   45    31
  2    3   00-26-15   00-00-00   92-65-62   00-04-08   08-04-15   0-0-0   0-0-0   29    31
The standard form shows the item number, key (number of the correct option), the percentage of the upper, middle, and lower groups who selected each option, omitted the item or erred, the index of difficulty, and the index of discrimination. For example, in item 1 above, option 2 was the correct answer and it was selected by 69% of the upper group, 57% of the middle group and 38% of the lower group. The index of difficulty, based on the total group, was 45 and the index of discrimination was 31.

Item Analysis Guidelines
Item analysis is a completely futile process unless the results help instructors improve their classroom practices and item writers improve their tests. Let us suggest a number of points of departure in the application of item analysis data.
1. Item analysis gives necessary but not sufficient information concerning the appropriateness of an item as a measure of intended outcomes of instruction. An item may perform beautifully with respect to item analysis statistics and yet be quite irrelevant to the instruction whose results it was intended to measure. A most common error is to teach for behavioural objectives such as analysis of data or situations, ability to discover trends, ability to infer meaning, etc., and then to construct an objective test measuring mainly recognition of facts. Clearly, the objectives of instruction must be kept in mind when selecting test items.
2. An item must be of appropriate difficulty for the students to whom it is administered. If possible, items should have indices of difficulty no less than 20 and no greater than 80. It is desirable to have most items in the 30 to 50 range of difficulty. Very hard or very easy items contribute little to the discriminating power of a test.
3. An item should discriminate between upper and lower groups. These groups are usually based on total test score but they could be based on some other criterion such as grade-point average, scores on other tests, etc. Sometimes an item will discriminate negatively, that is, a larger proportion of the lower group than of the upper group selected the correct option. This often means that the students in the upper group were misled by an ambiguity that the students in the lower group, and the item writer, failed to discover. Such an item should be revised or discarded.
4. All of the incorrect options, or distracters, should actually be distracting. Preferably, each distracter should be selected by a greater proportion of the lower group than of the upper group. If, in a five-option multiple-choice item, only one distracter is effective, the item is, for all practical purposes, a two-option item. Existence of five options does not automatically guarantee that the item will operate as a five-choice item.

Further Resources
Item Analysis is an enormous field, and is particularly popular in the United States where much of the research has been conducted. * CAA Centre Bluepapers http://caacentre.lboro.ac.uk/resources/bluepapers/index.shtml The CAACentre TLTP3 project published two documents on issues relating to Item Analysis, both by Mhairi McAlpine of the University of Glasgow. These papers cover the areas of "Methods of Item Analysis" (Bluepaper 2) and "Item Banking" (Bluepaper 3) and provide an ideal introduction to this extensive subject. * Institute of Objective Measurement http://www.rasch.org/ The web site of the Institute of Objective measurement (an American Organisation) is full of useful resources relating to Item Analysis. * ERIC Clearinghouse on Assessment and Evaluation http://ericae.net/ Ericae is the ERIC Clearing House on Assessment and Evaluation. (ERIC is the Educational Resources Information Center, an American resource set up to provide easy access to education research and literature). Like the rasch.org site, this site provides a vast resource of useful papers, articles and links. * Item Response Theory, Frank Baker http://ericae.net/irt/baker/ One of the most useful resources on the Ericae web site is the online book "The Basics of Item Response Theory" by Frank Baker, (2001). * ETS: The Educational Testing Service http://www.ets.org/ ETS, The Educational Testing Service is a private testing and measurement organisation based in the United States. It has a well-respected research group. * The IRT Modelling Lab, University of Illinois http://work.psych.uiuc.edu/irt/ This site provides another good general Introduction to IRT, and includes a tutorial on IRT explaining the underlying theory as well as how to utilise it.

Similar Documents

Premium Essay

Item Analysis Paper

...To begin with, the most crucial unit and basic important part of any assessment or a test is item. Brown, D. (2012) defines item as “the smallest unit that produces distinctive and meaningful information or feedback on a test when it is scored or rated”. (p. 41) Items format analysis defined as “the degree to which each item is properly written so that it measures all and only the desired content” (Brown, D. 2012, p. 42). Item analysis is a very useful and special to examine each particular item in a test or assessment and helps the instructors to create and make the item better. In addition, instructors can use item analysis as guidance and make the test items evaluate and revise in a persuasive way. The aims of using...

Words: 1123 - Pages: 5

Free Essay

Brave New World

...very honest and trustworthy individual guy, but is often shy and a bit awkward. D- Nicks main role in this story is that he’s the narrorator, and all of these events are based on his life. He is a very down to earth guy, calm and collected with a bunch of crazy events and people surrounding him. Instead of being in surroundings thatmirror him as a person, he is almost in an opposing environment, rivaling his ways of living. “There are only the pursued, the pursuing, the busy and the tired.” This quote goes to show a category for most characters in this book, for example Gatsby is a pursuer and a busy man. While someone like Daisy is the pursued. Being a go with the flow type of woman, instead of a go getter. A main symbolic item is that of the guests names on gatbys timetable. It shows that his connections he claims to have with most may be legitimate. It also goes to show that Gatsby seemingly surrounds himself with people and objects to put on a front of being knowledgeable and well...

Words: 326 - Pages: 2

Premium Essay

13-7

...Case 13-07 Facts: Fuzzy Dice Inc. manufactures novelty items that it distributes to wholesalers and large online and direct-mail retailers. Because of the bestselling new products, Fuzzy is having a record-breaking year and is holding a large amount of cash on its balance sheet. Fuzzy operates in an area where several other light manufacturers operate, one of which is Tiny Tots Toys LLC (“Tiny”), an educational children’s toy manufacturer. Tiny has been unable to turn a profit for the past few years and has recently filed for Chapter 11 bankruptcy protection.  Tiny’s primary asset is its manufacturing facility. The location and capabilities of this facility are the key reasons why it represents an acquisition target to Fuzzy. However, Fuzzy is undecided on how it should use Tiny’s factory after the acquisition. The Company will either (1) continue to use the facility to manufacture children’s toys and enter another business line alongside its novelty business or (2) renovate the factory in order to expand its novelty item production capacity to grow its current business. Since the acquisition will be structured as an asset purchase rather than a stock purchase, Fuzzy will not assume the employment relationships with the Tiny employees. In both scenarios, Fuzzy expects to hire all the current Tiny employees; however, the Company believes its current workforce is capable of operating the Tiny facility if necessary. Issues: 1. If Fuzzy decides to operate the factory in its current...

Words: 1426 - Pages: 6

Premium Essay

Accounting for Fuzzy Dice Inc. Acquisition of Tiny Tots Toys Llc

...ISSUE: Accounting for Fuzzy Dice Inc. acquisition of Tiny Tots Toys LLC related to decision (1) to use purchased facility to enter another business line or (2) renovate the facility to expand the current production. BRIEF BACKGROUND OF COMPANY Fuzzy Dice Inc. (“Fuzzy” or “the Company”) manufactures novelty items that it distributes to wholesalers and large online and direct-mail retailers. Fuzzy operates in an area where several other light manufacturers operate, one of which is Tiny Tots Toys LLC (“Tiny”), an educational children’s toy manufacturer. Tiny has been unable to turn a profit for the past few years and has recently filed for Chapter 11 bankruptcy protection. Tiny’s primary asset is its manufacturing facility. The location and capabilities of this facility are the key reasons why it represents an acquisition target to Fuzzy. However, Fuzzy is undecided on how it should use Tiny’s factory after the acquisition. The Company will either (1) continue to use the facility to manufacture children’s toys and enter another business line alongside its novelty business or (2) renovate the factory in order to expand its novelty item production capacity to grow its current business. Since the acquisition will be structured as an asset purchase rather than a stock purchase, Fuzzy will not assume the employment relationships with the Tiny employees. In both scenarios, Fuzzy expects to hire all the current Tiny employees; however, the Company believes its current workforce is...

Words: 1696 - Pages: 7

Free Essay

Word Association by Robin Russ

...having acquired a large English lexis for high school examination purposes, when students are “off the page” and speaking extemporaneously, even about familiar everyday topics, they experience firsthand the limitations of their productive vocabulary. When students are engaged by a class activity yet restricted by insufficient vocabulary, a common expedient is to revert to speaking in Japanese. How is language organized, and what are the mechanisms that allow us to retrieve the words we know immediately and correctly? Psycholinguistic studies have shown that words are not stored in the mental lexicon as single independent items but form clusters or webs with other related concepts, so that words acquire their full meaning in reference to related terms (Aitchison, 1994). In addition, context illustrates the scope and depth of a word's meaning as well as its relationship to other lexical items; thus, learning words in context and in association with their commonly connected notions enables learners to recall them more readily. If we take into account the common ways in which words associate with one...

Words: 3169 - Pages: 13

Free Essay

Lexical Borrowing

...Lexical borrowing (Czech: slovní výpůjčky) – adoption of a word from another language with the same meaning. English is tolerant of other languages, an insatiable borrower (about 70% of its vocabulary is of non-Anglo-Saxon origin); it welcomes foreign words and is not a homogeneous language like French (from which the majority of expressions was taken). Reasons: the language feels a need for a new word; to denote a special concept (Sputnik, which gradually disappeared from the language); a certain language holds a kind of prestigious position (a matter of fashion, though English words can be overused; a matter of political force); distinction of functional style (a matter of development) – three synonymous expressions of different origin (Anglo-Saxon origin: home; French words (with additional meanings): residence; Latin words: domicile; Greek origin, etc.). Layers of three origins: hunt/chase/pursue, rise/mount/ascend, ask/question (a certain amount of intensity)/interrogate. There is high tolerance in English; French and German tend to avoid borrowing; Czech had to defend its position against German, and linguists tried to set certain rules for using words (the re-establishment of the Czech language). English changes the pronunciation of borrowed words (English is simply a Germanic language, but closer to the Romance languages in vocabulary). The basic vocabulary, or core vocabulary (be, have, do), is Anglo-Saxon; the surrounding periphery of vocabulary may be borrowed (counting a word each time it occurs). Waves of new adoptions: swift adoption – in some periods more words than usual are adopted, as in the 13th century after the Norman Conquest; this is a natural, self-regulated mechanism – if there...

Words: 7575 - Pages: 31

Premium Essay

Second Language Acquisition

...methods. No matter how much attention has been paid to reading and writing training, teachers and students generally think that vocabulary teaching is of vital importance and that it is the foundation of English learning. However, a majority of students acquire new words by rote learning, and some students even try to memorize words straight from the vocabulary list. Although students' vocabulary size increases quickly, their English language skills do not improve. In order to resolve these problems, we can try to update the approach to vocabulary teaching and to improve the teaching method. 1.2 Purpose of the Paper The focus of traditional English teaching is mainly on grammar and vocabulary, which are regarded as two independent items. In fact, students cannot express themselves very well even though they have acquired many words and grammar rules. So, how can we change the situation? Many studies have been conducted to improve English teaching efficiency in order to equip students with practical ability in English. Among them, the Lexical Approach, put forward by the scholar Lewis, sheds some new light on English teaching in China. In view of the problem that most students know many new words but are unable to use them correctly, this paper, based on the Lexical Approach, discusses the strong points of integrating the ideas of...

Words: 3572 - Pages: 15

Free Essay

Organizing a Cost-Reducing Program

...preparing the meeting minutes should update the task list to show current status. * The team members should discuss cost-reduction ideas in a free-flowing manner. The ideas may come from the team members or from others in the company. All of the ideas should be captured on paper. After discussing all of the ideas, the team should decide whether each idea should be pursued. If the team thinks an idea has merit, in most cases it will go to the affected department manager. Key Questions: 1.) Do we have a cost-reduction effort in place? 2.) Do we have cost-reduction targets? 3.) How do we identify and eliminate unnecessary costs? 4.) What obstacles will we encounter, and how will we get around them? COST PARETO ANALYSIS * Identifying and ranking all of the organization's current costs is best presented on a department-by-department basis, and by overhead cost categories for the entire company. It's important to do this for each department and by overhead cost category to identify...
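The department-by-department ranking described above is essentially a Pareto analysis of costs. The following is a minimal sketch of that idea, not the essay's own procedure; the department names and cost figures are hypothetical placeholders.

# Minimal sketch of a cost Pareto ranking (assumed example data).
from collections import defaultdict

costs = [
    ("Assembly", "Overtime", 120_000),
    ("Assembly", "Scrap", 45_000),
    ("Shipping", "Expedited freight", 80_000),
    ("Admin", "Office supplies", 12_000),
    ("Shipping", "Packaging", 30_000),
]

# Total each department's spend, then rank from largest to smallest.
by_dept = defaultdict(float)
for dept, _category, amount in costs:
    by_dept[dept] += amount

ranked = sorted(by_dept.items(), key=lambda kv: kv[1], reverse=True)
grand_total = sum(by_dept.values())

# Print each department with its cumulative share of total cost,
# which is the usual Pareto view: a few departments drive most cost.
running = 0.0
for dept, amount in ranked:
    running += amount
    print(f"{dept:10s} {amount:10,.0f} {running / grand_total:6.1%}")

The same loop can be repeated per overhead cost category for the whole company, which is the second cut the excerpt recommends.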

Words: 895 - Pages: 4

Free Essay

In Basket

...that every supervisor should follow. During the research conducted with other team members on a hypothetical situation involving the supervision of a retail store, a list of tasks was left for the store supervisor to either undertake or delegate to the employees. Some of the items on the list were unanimously agreed upon and certain others were not. Many factors come into play when delegating or retaining duties as a supervisor. For instance, the management pyramid within an organization may determine the range of responsibilities each department supervisor may hold. Although ideally a supervisor governs the daily responsibilities, some of them are controlled by upper management. The reluctance to assign undertakings to subordinates may stem from the company philosophy. Department supervisors, like other people, do not always make the correct decisions in delegating responsibilities. Sometimes, because of a fear of failure, he or she will keep many of the tasks for him or herself. The 10 items reflected in the study were unquestionably straightforward and did not require much thought. Perhaps had the list been more complicated, the decision to retain or delegate might have been more challenging. Item number three, determining the store...

Words: 720 - Pages: 3

Premium Essay

What Is My Role in Managing the Contract?

...completion of the lesson you will be able to answer these questions: What Is the COR's Role in Contract Administration? Why Should the COR Talk with the KO? What Makes Up a Contract? What Else Might I Encounter When Dealing with a Contract? Review the lesson learning objectives. Recognize the basic information (period of performance, Performance Work Statement (PWS), contract value) found in a contract, to include the uniform contract format. Identify methods of tracking contract obligations using Accounting Classification Reference Numbers (ACRNs) and Contract Line Item Numbers (CLINs) in a contract. Recognize the COR's role in tracking the contract schedule. Analyze contract schedule compliance, to include all Statement of Work (SOW) requirements and Contract Deliverable Requirements List (CDRL) deliverables. What Is My Role in Managing the Contract? Introduction: Know Your Contract. What Is the COR's Role in Contract Administration? The contract administration process is admittedly complex, time consuming, and detail driven. The government...
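The lesson's point about tracking obligations with ACRNs and CLINs can be illustrated with a small data-structure sketch. This is not part of the lesson; the class name, fields, and dollar amounts below are hypothetical and only show one way such tracking might look.

# Hypothetical sketch of tracking contract obligations by CLIN and ACRN.
from dataclasses import dataclass

@dataclass
class LineItem:
    clin: str         # Contract Line Item Number
    acrn: str         # Accounting Classification Reference Number (funding cite)
    obligated: float  # dollars obligated on this line
    invoiced: float   # dollars invoiced to date

items = [
    LineItem("0001", "AA", obligated=250_000.0, invoiced=100_000.0),
    LineItem("0002", "AA", obligated=75_000.0, invoiced=80_000.0),
    LineItem("0003", "AB", obligated=40_000.0, invoiced=0.0),
]

# A COR-style check: flag any CLIN whose invoices exceed its obligation.
for item in items:
    remaining = item.obligated - item.invoiced
    status = "OVERRUN" if remaining < 0 else "ok"
    print(f"CLIN {item.clin} (ACRN {item.acrn}): remaining {remaining:,.0f} [{status}]")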

Words: 12645 - Pages: 51

Free Essay

Gathering Information

...working as a forklift operator at Ashley's Furniture, I pull orders from a pick list and supply the manufacturing lines with the necessary items. The item numbers on my pick list vary by amount; for example, I'll have an item number telling me to get 24 pieces, and each box contains either 3, 4 or 6 pieces, so I have to determine how many boxes I will have to pick and put on my pallet to complete the order. I also put away items that are received on trucks and place them in the correct location, marked in 14 aisles labeled AQ to AZ with 3 tiers labeled 1-3; the other side of the aisle is labeled BQ to BZ, also with 3 tiers. When I scan the item plate, it directs me to a destination to put the pallet in its proper location. Gathering the right information at my job is very critical. If I have an order that needs me to pick 3 orders of 36 pieces, which is 108 pieces, and I short the order by not gathering the right information, giving the manufacturing lines too few pieces will stop the assembly line and bring all productivity to a standstill; now the supervisor is on me because the company is losing money. Also, if I gather my information wrong and over-pick the amount of pieces necessary for the order, that causes manufacturing to stop production, relabel the item, send it back to our department and put it back in its proper location. Now with putting away product, if I gather the...
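The order arithmetic described above (for example, picking 24 pieces from boxes of 3, 4 or 6) can be checked with a short script. The box sizes come from the excerpt, but the fewest-boxes logic is an assumption about how one might work out the count; it is only an illustration.

# Illustrative helper for the pick-list arithmetic described above.
def fewest_boxes(target: int, sizes=(3, 4, 6)):
    """Return a dict {box_size: count} using the fewest boxes, or None."""
    best = {0: {}}  # pieces reached -> box combination that achieves it
    for pieces in range(1, target + 1):
        candidates = []
        for size in sizes:
            prev = best.get(pieces - size)
            if prev is not None:
                combo = dict(prev)
                combo[size] = combo.get(size, 0) + 1
                candidates.append(combo)
        if candidates:
            best[pieces] = min(candidates, key=lambda c: sum(c.values()))
    return best.get(target)

# Example line calling for 24 pieces, and the 3 x 36 = 108-piece order.
print(fewest_boxes(24))   # e.g. {6: 4} -> four 6-piece boxes
print(fewest_boxes(108))  # e.g. {6: 18}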

Words: 425 - Pages: 2

Premium Essay

Week 6 Discussion 1 Contract Mgmt

...read all documents thoroughly, so you understand all of the requirements. For example, failure to submit all required documentation could result in an offer being returned, which, in turn, will greatly increase the processing time. Here are some basic guidelines a successful vendor should always follow: * “Read the solicitation thoroughly and follow the instructions. * Provide the requested information in the appropriate format. * Submit your completed Standard Form 1449 with your electronic signature when responding. Only the person legally authorized to enter into contracts for your company should sign the form. * Submit a dated copy of your commercial pricelist(s), with the appropriate Special Item Number (SIN) next to each offered item. Also include the Commercial Sales Practice Format (CSP-1), to provide details about your pricing history. Pay close attention to the pricing included in your submission, and be ready to negotiate your best offer for the government. All products or services offered in your pricelist should be within the scope of the solicitation as well as the specific SIN(s) being utilized. Take the time to understand all the requirements detailed in the solicitation. Once on contract, your company will be responsible for adhering to all relevant portions of...

Words: 535 - Pages: 3

Premium Essay

Stand-Up Meetings

...Stand-up meetings are short daily meetings organized to discuss and give updates on an ongoing project. Every day the whole team meets in order to discuss the flow of the project. If things in the project have gone wrong, those issues are discussed in order to work out how to fix them. If there is no stand-up meeting in the company, the team leader and the project manager would not have any updates regarding the project; then, if anything in the project goes wrong, the whole flow of the project stays wrong until a meeting is held, and there is a lot of wasted time and budget. So, in order to avoid such situations, stand-up meetings are held by the team leader. Such meetings are important for guiding the team members along the right path by a team leader who has good experience. In the present IT world, stand-up meetings have become a common practice along with agile methods. Stand-up meetings are also known as the daily scrum, daily huddle, morning roll-call and so on. A stand-up meeting should take no more than 15 minutes and covers only the status of the project. Consider IT organizations, which involve a number of teams on an ongoing project, such as the infrastructure team, application team, database team, networking team, development team and the ERP (Enterprise Resource Planning) system team. Each team will individually hold stand-up meetings with its own members to discuss the open and closed tickets regarding the ongoing...

Words: 1095 - Pages: 5

Free Essay

Oihh; J

...your business. The Inventory Management System (IMS) will be intelligent software to control all of your inventory process flows. First of all, products (finished suites) will be defined in the system with their unique item codes, and every suite will have different colors. In this way, IMS will track every suite along with its color, so that management can view the stock position of any color of any suite at any time. Suites can be added to stock through purchase. Every purchase bill will have its number, date and other information for future tracking. Different stores will also be managed in IMS so that we can choose which store to put the stock in. Suites, once added through purchase, will be available for sale. Once a sale invoice is booked, inventory in every store will be updated accordingly. The inventory subsection of IMS will track the inventory record in every store, and an accumulated total across stores will also be available at any time. A production section can also be incorporated in IMS. If a production section is required, the process flow will be a bit different. Raw materials will also be defined in IMS and coded uniquely. Raw materials will then be linked with finished products (Dopatta, Lining, Shalwar, etc.). For example, item number 1001 (Embroidered Dopatta) in blue will be made from blue fabric, embroidery on that fabric, blue thread, etc. In this case, when one blue dopatta is added to inventory, it will automatically subtract...
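The excerpt describes a data model with per-color stock, purchases that add stock, sales that deduct it, and a bill of materials that consumes raw materials when a finished suite is produced. Below is a minimal sketch of that flow; the class, method and field names are assumptions for illustration, not the actual IMS design.

# Minimal sketch of the inventory flow described above (assumed design).
from collections import defaultdict

class Inventory:
    def __init__(self):
        # stock[(item_code, color)][store] -> quantity on hand
        self.stock = defaultdict(lambda: defaultdict(int))
        # bom[finished_item_code] -> list of (raw_material_code, qty per unit)
        self.bom = {}

    def purchase(self, item_code, color, store, qty, bill_no):
        """Add stock from a purchase bill (bill_no kept only for tracking)."""
        self.stock[(item_code, color)][store] += qty

    def sell(self, item_code, color, store, qty):
        """Book a sale invoice: deduct stock from the chosen store."""
        self.stock[(item_code, color)][store] -= qty

    def produce(self, finished_code, color, store, qty):
        """Make finished suites, consuming raw materials per the BOM."""
        for raw_code, per_unit in self.bom.get(finished_code, []):
            self.stock[(raw_code, color)][store] -= per_unit * qty
        self.stock[(finished_code, color)][store] += qty

ims = Inventory()
# Item 1001 (embroidered dopatta) uses 1 unit of fabric and 1 of thread.
ims.bom["1001"] = [("FABRIC", 1), ("THREAD", 1)]
ims.purchase("FABRIC", "blue", "Store A", 10, bill_no="PB-001")
ims.purchase("THREAD", "blue", "Store A", 10, bill_no="PB-002")
ims.produce("1001", "blue", "Store A", 1)
print(ims.stock[("1001", "blue")]["Store A"])    # 1 finished dopatta
print(ims.stock[("FABRIC", "blue")]["Store A"])  # 9 units of fabric left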

Words: 614 - Pages: 3

Premium Essay

African Leadership

...FINANCIAL INTEGRITY REVIEW & EVALUATION CHECKLIST
For Locally Administered Projects
PIN: _________________________  LOCAL SPONSOR: _________________________________________________  DATE: _______________________
CONTRACTOR: _________________________________________________________
Award: ___________  Orig. Completion Date: ___________  % Complete: ________  Award Amount: $_________________
Interviewed: ___________________________________________  EIC ____  Res. Engr. ____  Off. Engr. ____  Inspt. ____
NYSDOT RLPL (or representative) Present During Review?  YES  NO  ____________________________________
Local Sponsor Official Present During Review?  YES  NO  ____________________________________
Construction Monitoring and Staffing:
Was a pre-construction meeting held for this project?  YES  NO  Date of meeting: _____________________
Has a Construction Monitoring Plan (CMP) been prepared for this project?  YES  NO
Is the CMP available for review?  YES  NO  (Please make a copy and attach to report)
Does the plan specify a required level of on-site project staffing?  YES  NO
If YES, what is the specified project staffing: ______ EIC/RE/PM  _______ Office Engineer  ________ On-site Inspectors
Is the project staffing requirement being met?  YES  NO  N/A
If NO, why? _________________________________________________________________________________________
ACTUAL on-site construction inspection staff:  Local Forces  Consultant: __________________________________________ ...

Words: 961 - Pages: 4