Online Evaluations: The Risks, Benefits, and FSU

Traditionally, universities have taken stock of their faculty and class offerings by means of paper evaluations, typically administered during class towards the end of the term. This practice began in the 1920s and has been the standard since (Mau & Opengart 2012). During the last fifteen years, however, universities have increasingly abandoned paper forms in favor of an online system, in which students complete course and faculty evaluations through a website, out of class and usually on their own time. The adoption rate among schools rose from 2% in 2000 to 33% in 2005, with the most commonly cited reasons being the cost and time savings (Guder & Malliaris 2013). Despite these potential benefits, many professors fear repercussions in the form of lowered response rates and biased scores. This paper examines these concerns and proposes how a small school such as Fitchburg State University can implement online evaluations.
First, I would like to briefly discuss why a school would want to switch to online evaluations. There are three primary reasons: online evaluations reduce paper costs (and postage for distance-learning courses), which in turn is a way to “go green,” or be more environmentally friendly; they save many hundreds of hours across the various staff who have to prepare, print, scan, and analyze paper-based evaluations; and they do not take up valuable class time. As one UC Davis student complained, “Professors sometimes skip over material and tell us we have to learn it at home in order to make time for evaluations” (Toussi 2013, p. 2). Other possible advantages of the online format are the reduced chance of human error (e.g., a student missing a question or not properly filling in a ‘bubble’) and a greater ability to manipulate and analyze the results, since the data is already digitized. For a larger school, the cost savings (potentially hundreds of thousands of sheets of paper and tens of thousands of dollars) are reason enough to make the switch; for a smaller school like FSU, the savings are certainly there, but they have not been enough to alleviate faculty concerns regarding online evaluations’ well-documented weaknesses.
Foremost among faculty concerns has been the lower response rates for evaluations conducted online versus their traditional in-class, paper-based counterparts. When evaluations are performed during class, the participation rate is generally 100% for all students present; rarely will a student be so obstinate as to refuse to spend class time critiquing their instructor. Even after factoring in absent students, the overall response rate is usually high enough to say with confidence that an accurate representation of the class has been captured. With online evaluations, however, the “social pressure” of in-class evaluations is gone, and students must have the self-discipline to go online on their own time and complete the survey(s). Consequently, most schools experience a drastic drop in their response rate when transitioning to online evaluations. For example, when Temple University switched to online evaluations in 2012, it found that the participation rate had dropped to 50%, compared to 70% for paper evaluations the year prior (Snyder 2013). While the administration and faculty at Temple University felt confident that the information from a 50% response rate was still reliable, other schools are not so comfortable with that. How, then, can a school see to it that response rates are kept high?
One study at California State University compared response rates for online evaluations when different incentives were used (Dommeyer, Baum, Hanna & Chapman 2004). Sixteen instructors were selected for the study; each instructor taught two sections of the same course, with one section evaluated in class and the other evaluated online. Eight instructors were assigned a “treatment” for their online sections: four were to offer a small grade incentive (a quarter of a percentage point); two were to perform a live, in-class demonstration of how to complete the online evaluation; and two were to tell their students they would receive early grade feedback if the class achieved a two-thirds participation rate. The remaining eight instructors were not assigned any “treatment” for their online section (all online sections received written instructions, however). Dommeyer et al. (2004) compiled the following results:

Overall, the in-class survey response rate was 75%, compared to a paltry 43.4% for the online surveys. Worse still, among the eight instructors who did not offer any “treatment” for their online sections (instructors I-P), the overall response rate was 29% for the online surveys, compared to 70% for their in-class counterparts. Plainly, not placing an emphasis on completing the online survey hurts the response rate. But which “treatment” was the most effective? Comparing the “grade,” “demo,” and “feedback” courses, Dommeyer et al. (2004) made the following tally:

While demonstrating how to complete the online evaluation during class and offering early grade feedback do improve the response rate considerably (to about 50%, compared to 29% for no treatment), this would still not satisfy many critics of the online method. The grade-incentive sections, however, achieved an impressive 86.67% response rate, nearly identical to the 86.99% rate for the corresponding in-class sections. Clearly, if sample size is a concern, schools would do well to offer a small amount of extra credit.
Some may balk at the suggestion of offering extra credit, arguing that students should not be rewarded for something completely unrelated to the course material. Other methods for improving response rates include “sweepstakes” prizes (cash, gift certificates, iPods) and altering when students can access their grades (earlier for those who complete their online evaluations), but other studies confirm that grade incentives are the single most effective method (Ballantyne 2003).
So far we have only discussed the statistics themselves; but what influences whether or not a student completes their online evaluations? When the University of Houston College of Pharmacy surveyed its students on what would hinder them from completing an online evaluation (Hatfield & Coyle 2013), the chief concern among students was that the survey would not result in change or would not benefit them; other reasons included the timing of evaluations during exam week and the number of evaluations they would have to complete.
When Hatfield and Coyle (2013) examined the response rates for four courses among 368 unique students, the single largest distinguishing factor for completion rates was age: of those born before 1987, 23.7% completed both course and faculty evaluations, compared to 8.7% of those born after 1987 (this school separates its evaluations into one for the course and one for the instructor). The disparity continues for students who completed only course evaluations (37.5% to 19.6%) and those who completed only faculty evaluations (34.8% to 18.5%). One might expect younger students to be more “tech-savvy,” but considering that the average student in this sample was 28, other factors must have been at play.
In spring 2009, the Quinlan School of Business at Loyola University Chicago switched entirely to an online evaluation system and promptly noticed a 26% drop in the response rate. Guder and Malliaris (2013) sought to determine why; using a sample of 341 graduate students and 771 undergraduates, their survey found that graduate students were much more likely to complete ‘all’ of their evaluations (67.45% versus 44.10% for undergraduates), while undergraduates were much more likely to complete only ‘some’ (41.5% versus 13.49%). The researchers surmise this see-saw effect might be due to undergraduates typically taking five to six courses, compared to a graduate student’s two to four.
Guder and Malliaris (2013) also asked students why they completed “some” or “none” of their evaluations, allowing them to choose from among “don’t matter”, “forgot”, “busy”, “identify me”, and “other”.

The “it doesn’t matter” excuse heard previously in the Hatfield and Coyle (2013) study is an issue for some undergraduates, less so for graduate students. Among both groups in the Guder and Malliaris (2013) sample, students were more likely to list that reason if they had completed “none” of their evaluations. A small number of graduate and undergraduate students said they had privacy concerns (i.e., whether professors could see their answers), even though these students had been told their answers would remain anonymous.
The most commonly cited reasons students gave for partial completion or non-completion were that they forgot or were too busy; graduate students chose these answers in equal numbers (45.95% overall for each), but undergraduate students were especially likely to say they were just too “busy” (64.97%, versus 46.17% for “forgot”). Some of the written comments indicated that many of these students felt the online survey was “way too long!!” (p. 336), as one student put it. As the evaluations used in this study consisted of twenty multiple-choice questions and three open-ended questions (other schools typically use about 10 or 12 multiple-choice questions instead of 20), the evaluations’ length itself may have had a negative impact on the response rate, at least for the “busy” students.
As for the “I forgot” students, Guder and Malliaris (2013) concluded that having the professor simply encourage students to complete the online evaluations had a positive effect on participation, and that sending email instructions with follow-up reminder emails complemented this boost.
One final observation I would like to take from the Guder and Malliaris (2013) study concerns which courses students choose to evaluate when they evaluate only some of them:
Among the answers under the choice “other”, many of the written comments indicated that students completed evaluations when they felt that the class was either very good or very bad. Otherwise, they did not respond. Examples of this type of response were: “Nothing positive or negative to say about the class”, “If the class is average, there is nothing to complain about and there is nothing exceptional, then evaluation is just a waste of time”, “I only fill out for the profs I really enjoy or really don't enjoy”, and “I had no strong opinions either way about some courses” (p. 336).
The most honest student may have been the one who wrote that they were “too lazy to do surveys for mediocre teachers” (p. 335), as mediocrity and laziness are the underlying themes in the student quotations above. For these students, emphasizing the importance of the evaluations (for tenure, promotion, and performance reviews) and making the evaluation process as painless as possible would make a difference in getting their cooperation. But this last observation brings us to the other major concern many instructors have regarding online evaluations: that “disgruntled students [are] more highly motivated than other students” (Burton, Civitano & Steiner-Grossman 2012), which, combined with a lowered response rate, could lead to very skewed results. In practice, however, this has generally been shown not to be the case. The authors of that paper counted 18 prior studies, of which 14 noted minimal or no differences in the median or mean ratings between online and paper evaluations; two of the four that did note differences found students gave higher ratings on the online evaluations. Of the last two studies, only one stated that its students graded more harshly on the online form (the other noted differences in the scores but did not indicate in which direction).
So, instructors can rest assured that they will be reviewed fairly with online evaluations; as one study at Eastern Illinois University noted, “despite the lower online response rate, the overall mean of online student evaluation items… was almost identical to the mean of classroom evaluation items” (Stowell, Addison & Smith 2012, p. 469). Kansas State University’s IDEA Center (Miller 2010) surveyed classes at 300 institutions (271,727 paper; 13,101 online) between 2002 and 2008, and found the only significant difference in evaluation scores was that students completing their evaluations online were likelier to give their professors high marks on the use of educational technology. In other words, professors have no cause for concern regarding “disgruntled” students.
Furthermore, there is evidence that the written comments in online evaluations tend to be more thorough and more positive than in traditional classroom evaluations. The Burton, Civitano and Steiner-Grossman (2012) study found that comments gathered online were longer, more informative and had fewer “instances of negativity” (p. 66), which the researchers attributed to the stresses of in-class evaluations performed following an exam, and to how the greater privacy afforded by online evaluations may lead to more thoughtful answers. Echoing that sentiment, Stowell, Addison, & Smith (2012) postulate that some students may worry that their handwritten comments will be identified, which (along with the typically more relaxed nature of online evaluations) accounts for why “online evaluations yielded five times the total amount of written commentary” (p. 471).
To summarize the last few passages, “there is no apparent justified reason to fear that only students who have the most negative things to say about an instructor will make comments online” (Stowell, Addison, & Smith 2012, p. 471). So, having discussed at length the question of lowered response rates (and how to bring them back to the levels seen with paper evaluations), and having dispelled the myth that online evaluation results will be skewed by low sample sizes and slackers, what about students’ concerns? Most of the literature on the topic covers the issue only from the perspective of administration and faculty, but how can schools make students more comfortable with online evaluations? Erica Smith (2013), a student writer and resident assistant at Lock Haven University in Pennsylvania, felt that her school’s implementation of online evaluations had been a “disaster” (p. 1) for several reasons: the email reminders were too easy to miss, with bland subject lines as simple as “Survey”; the surveys were too difficult to access, buried in menus; and students (especially freshmen) not only failed to understand the importance of the evaluations but even tended to fill them out for the wrong courses. Interestingly, Smith finds the online evaluations’ brevity an issue as well. If LHU wants to get the most out of its investment in online evaluations, it should take some hints from Smith’s observations: streamline the evaluation process, promote the importance of participating, and emphasize the anonymity of students’ answers. It may also want to analyze whether the evaluation questions are as efficient as possible, to balance brevity against depth.
So, what can a school like Fitchburg State University take from all this? Currently, FSU administers online evaluations only for courses taught online. All classroom-based undergraduate and graduate courses are still evaluated using the traditional paper forms; considering that the IT department is in the process of purchasing an expensive new Scantron machine, and that the MSCA teachers’ union has in the past voiced the common objections regarding skewed results, FSU is unlikely to adopt an all-online evaluation system anytime soon. But if it were to consider the possibility, it would do well to take the following steps.
Faculty support is key. Research shows that while response rates might be high in the first semester of implementation, when professors are actively promoting the new system, response rates fall in the following semesters as faculty support wanes (Ballantyne 2003). Thus, it is essential that all staff and faculty are apprised of the research showing that online evaluations will not produce skewed results (and negative repercussions for those up for tenure and promotion), and that in-class encouragement of the process is key to high response rates. If certain classes do end up with too low a response rate (say, under 25%), FSU could simply disregard that set of results.
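As an illustration only, a minimal sketch of how such a threshold check might look follows; the 25% cutoff comes from the suggestion above, but the section names, enrollment counts, and submission counts are entirely hypothetical.

```python
# Hypothetical sketch: flag course sections whose online-evaluation response
# rate falls below a chosen cutoff (here 25%, per the suggestion above).
# All section names and counts are invented for the example.

CUTOFF = 0.25  # minimum acceptable response rate

sections = [
    {"section": "ENGL 1100-01", "enrolled": 28, "submitted": 19},
    {"section": "MATH 2300-02", "enrolled": 35, "submitted": 6},
]

for s in sections:
    rate = s["submitted"] / s["enrolled"]
    verdict = "keep" if rate >= CUTOFF else "disregard (response rate too low)"
    print(f'{s["section"]}: {rate:.0%} -> {verdict}')
```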
Students should be emailed customized links to their survey(s), which they can complete only once. Students who do not complete the survey should be sent at least one follow-up reminder. FSU could also make these links available from a centralized source, such as Web4; a mobile app that lets students complete the form on a phone or tablet would be an excellent feature.
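To make the idea concrete, here is a minimal sketch of how per-student, single-use survey links and a reminder list could be generated. Everything in it is assumed for illustration: the base URL, the email addresses, and the token scheme are placeholders, not a description of FSU’s actual systems.

```python
# Hypothetical sketch: issue one-time survey links per student and list who
# still needs a follow-up reminder. Not tied to any real FSU or Web4 system.
import secrets

BASE_URL = "https://evals.example.edu/survey"  # placeholder address

students = ["astudent@example.edu", "bstudent@example.edu", "cstudent@example.edu"]

# One unguessable token per student; the survey site would mark a token as
# used after the first submission so each link works only once.
links = {email: f"{BASE_URL}?token={secrets.token_urlsafe(16)}" for email in students}

completed = {"astudent@example.edu"}  # filled in as responses come back

# Everyone who has not yet submitted gets at least one reminder email.
for email in students:
    if email not in completed:
        print(f"Send reminder to {email}: {links[email]}")
```

A real implementation would of course store the tokens server-side and invalidate them on submission; the point here is only that single-use links plus automated reminders are straightforward to support.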
The survey should ask what grade students expect to receive, so the results can be adjusted accordingly, if desired. Ideally, the survey system would also save students’ progress in case they are interrupted and need to return later. (Progress bars should be fine if the evaluation is mostly multiple-choice, which I also recommend; if the survey includes a number of open-ended questions, there is some evidence that progress indicators can “dampen” online results, perhaps because students find the prospect of more written comments displeasing (Dommeyer et al. 2004).) The survey should be relatively brief and easy to complete: ten to fifteen multiple-choice questions (the current five-point Likert scale FSU uses is fine), with a couple of optional written comments.
Professors should stress to their students the importance of their participation, emphasize that their answers can effect change, and assure them of the anonymity of the process: nobody will be able to link students to their answers. To bolster response rates, FSU could encourage professors to offer a small grade incentive (less than a percentage point), which has been shown to be the single most effective way to boost response rates, in addition to in-class demonstrations, encouragement, and email reminders. FSU could also offer drawings for prizes, where every completed survey earns the student one “lottery” ticket. Also, while students can currently access their final grades as soon as they are submitted, FSU could artificially delay that access by a couple of weeks unless the student completes the corresponding evaluation, in which case they get early access. This way, access to grades is not withheld entirely, which would likely pose a legal problem; and because most students want their grades as early as possible, the delay would still encourage participation.
Another point to consider is when, and for how long, online evaluations would be available. Students get only one chance to complete an in-class form; absent students have no second chance to be included. Older studies show that student evaluations are “fairly stable” (Dommeyer et al. 2004, p. 622) from mid-term to the end of term, so opening online evaluations as early as midterm is not out of the question. This would leave students ample time to participate. FSU might also consider opening up some of the evaluation data to students during enrollment to assist in course selection, something Temple University has just started doing (Snyder 2013). TU sees this as a superior alternative to sites such as “Rate My Professor”, which are often populated by “disgruntled” students and teacher’s pets; TU lets students see professors’ ratings only for “feedback, grading fairness, teaching ability, and learning” (p. 2).
For a small school like FSU, where choice between sections and professors is limited, this is an interesting idea, but it may not be as useful as it would be to students at a larger school like TU. In conclusion, online evaluations offer convenience, reduced potential for human error, more privacy and time for students to reflect on their answers, greater flexibility in data analysis, less class time consumed by administering paper evaluations, and significant savings in costs (paper, postage) and staff hours. They would also be a way to “go green,” which a school like FSU can advertise for self-promotion. With proper planning and careful implementation, and taking cues from other schools that have gone before it, Fitchburg State University would do well to begin transitioning towards an all-online evaluation system.

References
Ballantyne, C. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. New Directions for Teaching and Learning, (96), 103-112.
Burton, W. B., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1), 58-69. doi: http://dx.doi.org/10.1007/s12528-012-9053-3
Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: Their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29(5), 611-623. doi:10.1080/02602930410001689171
Guder, F., & Malliaris, M. (2013, May/June). Online course evaluations response rates. American Journal of Business Education (Online), 6(3), 333-338. Retrieved from http://search.proquest.com/docview/1418450175?accountid=10896
Hatfield, C. L., & Coyle, E. A. (2013). Factors that influence student completion of course and faculty evaluations. American Journal of Pharmaceutical Education, 77(2), 7-27. Retrieved from http://search.proquest.com/docview/1355868567?accountid=10896
Mau, R. R., & Opengart, R. A. (2012). Comparing ratings: In-class (paper) vs. out of class (online) student evaluations. Higher Education Studies, 2(3), 55-68. Retrieved from http://search.proquest.com/docview/1045689061?accountid=10896
Miller, M. H. (2010, May 6). Online evaluations show same results, lower response rate. The Chronicle of Higher Education. Retrieved from http://chronicle.com/blogs/wiredcampus/online-evaluations-show-same-results-lower-response-rate/23772
Smith, E. (2013, Nov 21). New course evaluations: Ineffective and inaccessible. University Wire. Retrieved from http://search.proquest.com/docview/1460317493?accountid=10896
Snyder, S. (2013, November 7). Temple opens all course-rating results to all students. The Philadelphia Inquirer. Retrieved from www.lexisnexis.com/hottopics/lnacademic
Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465-473. doi:10.1080/02602938.2010.545869
Toussi, A. (2013, October 17). Online evaluations ready for use: students, faculty can now rate courses via web. The California Aggie. Retrieved from http://www.theaggie.org/2013/10/17/online-evaluations-ready-for-use/
