6.867 Final Project: Comparing Machine Learning Methods for Detecting Facial Expressions
Vickie Ye and Alexandr Wang

Abstract

In this project, we compared different methods for facial expression recognition using images from a Kaggle dataset released as a part of an ICML-2013 workshop on representation learning. We found that classification using features extracted manually from facial images using principal component analysis yielded on average 40% classification accuracy. Using features extracted by facial landmark detection, we received on average 52% classification accuracy. However, when we used a convolutional neural network, we received 65% classification accuracy.

1 Introduction

Detecting facial expressions is an area of research within computer vision that has been studied extensively, using many different approaches. In the past, work on facial image analysis concerned robust detection and identification of individuals [5]. More recently, work has expanded into classification of faces based on features extracted from facial data, as done in [6], and using more complex systems like convolutional neural networks, as done in [1] and [2]. In our project, we compared classification using features manually extracted from facial images against classification using a convolutional neural net, which learns the significant features of images through convolutional and pooling layers. The two manual feature extraction methods we explored were principal component analysis (PCA) and facial landmark detection. These extracted features were then classified using a kernel support vector machine (SVM) and a neural network.

1.1 PCA

In facial recognition, PCA is used to generate a low-dimensional representation of faces as linear combinations of the first k eigenfaces of the training data. Eigenfaces are the eigenvectors of the training data's covariance matrix; the first eigenfaces are the vectors along which the training data shows the highest variance. Thus, we can express a facial image vector of thousands of pixels in terms of the linearly independent basis of the first k eigenvectors.

1.2 Facial Landmark Detection

Facial landmarks can be extracted from facial images into lower-dimensional feature vectors. The implementation used in this project follows the approach described in [3]. Kazemi et al. use localized histograms of gradient orientation in a cascade of regressors that minimize squared error, in which each regressor is learned through gradient boosting. This approach is robust to geometric and photometric transformations, and showed less than 5% error on the LFPW dataset. The facial landmarks (eyes, eyebrows, nose, mouth) are intuitively the most expressive features in a face, and could also serve as good features for emotion classification.

1.3 Multiclass Classification

Support vector machines are widely used in classification problems; training an SVM is an optimization problem that can be solved in its dual form,

    \max_{\alpha \in \mathbb{R}^n} \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y^{(i)} y^{(j)} K(x^{(i)}, x^{(j)})

    \text{s.t.} \quad 0 \le \alpha_i \le C, \qquad \sum_i \alpha_i y^{(i)} = 0

where C parameterizes the soft-margin cost function, and K(x^{(i)}, x^{(j)}) is our kernel function, which allows us to model nonlinear classifiers. For our multiclass classification problem, we use the nonlinear radial basis kernel function,

    K(x, z) = \exp(-\gamma \, \|x - z\|^2)

where γ is a free parameter. The performance of the kernelized SVM is highly dependent on the free parameters C and γ.

Neural networks are also widely used in multiclass classification problems, because hidden layers with nonlinear activation functions can model nonlinear classifiers well. Here, the number of nodes in each hidden layer, as well as the number of hidden layers in the network, largely determines the accuracy of predictions. In this project, we explored the performance of both methods.
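
As a minimal illustration of the decision function implied by the dual form above, the following sketch computes the RBF kernel and a kernelized prediction in NumPy. The variable names and numbers are toy values of our own choosing, not values used in the project.

```python
import numpy as np

def rbf_kernel(x, z, gamma):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def svm_decision(x, support_vectors, alphas, labels, b, gamma):
    """Kernelized SVM decision value: sum_i alpha_i * y_i * K(x_i, x) + b."""
    return sum(a * y * rbf_kernel(sv, x, gamma)
               for sv, a, y in zip(support_vectors, alphas, labels)) + b

# Toy values only; the project's actual C and gamma were chosen by grid search.
svs = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
print(svm_decision(np.array([0.5, 0.5]), svs,
                   alphas=[0.7, 0.4], labels=[+1, -1], b=0.1, gamma=0.005))
```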

1.4 Convolutional Neural Network

Convolutional neural networks (CNNs) are artificial neural networks that work particularly well for image data. The structure of CNNs exploits strong local correlation in the inputs. This is done by enforcing local connectivity between neurons of adjacent layers: the inputs of a hidden unit at layer n are a locally connected subset of the units of the previous layer n-1, arranged so that the inputs to layer n correspond to overlapping tiles of the units of layer n-1.

In addition, CNNs utilize shared parameterizations (weight vector and bias): by constraining units to share the same weights, essentially replicating units across the layer, features can be detected regardless of their position in the initial input, making the network much more robust to real-world image data. Sharing weights also reduces the number of free parameters, increasing learning efficiency.
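
To make the parameter-saving argument concrete, the short calculation below compares weight counts for the 40 x 40 inputs used later in this project; the 1,600 hidden units and 64 feature maps are illustrative assumptions, not figures from the paper.

```python
# Fully connected layer mapping a 40x40 input to 1,600 hidden units:
# every hidden unit has its own weight for every pixel.
fc_weights = (40 * 40) * 1600            # 2,560,000 free weights

# Convolutional layer with 5x5 tiles and 64 feature maps: each 5x5 filter
# is shared across every position in the image.
conv_weights = (5 * 5) * 64              # 1,600 free weights

print(fc_weights // conv_weights)        # weight sharing saves a factor of 1600
```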

2 Experimental Details

2.1 Datasets

For this project, we primarily used a dataset released by Kaggle as a part of an ICML-2013 workshop in representation learning. This dataset included 28,709 labeled training images and two labeled test sets of 3,589 images each. Faces were labeled with one of 7 labels: 0 (Angry), 1 (Disgust), 2 (Fear), 3 (Happy), 4 (Sad), 5 (Surprised), and 6 (Neutral).
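
For reference, a minimal loading sketch is shown below. It assumes the Kaggle release is the usual fer2013.csv file, in which each row holds an integer emotion label and a string of space-separated pixel values for a 48 x 48 grayscale image; the file name and column layout are assumptions about the distribution format, not details stated in the paper.

```python
import csv
import numpy as np

def load_fer_csv(path):
    """Parse rows of (emotion label, 48*48 space-separated pixels) into arrays."""
    images, labels = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            pixels = np.array(row["pixels"].split(), dtype=np.uint8)
            images.append(pixels.reshape(48, 48))
            labels.append(int(row["emotion"]))
    return np.array(images), np.array(labels)

# images, labels = load_fer_csv("fer2013.csv")   # hypothetical file name
```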

2.2 PCA

We used SciPy's implementation of PCA to extract eigenfaces from our data. In our experiments, we optimized the number of principal components k that we took as our eigenface basis. Our features were then the orthogonal projection of each image onto this basis. When performing PCA, we added whitening to ensure that the eigenfaces were uncorrelated.
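
A minimal sketch of an equivalent whitened-PCA pipeline with scikit-learn (our assumed stand-in for the SciPy tooling the paper names) projects each flattened 48 x 48 image onto the first k eigenfaces; k = 150 matches the value reported in Section 3.1.

```python
import numpy as np
from sklearn.decomposition import PCA

def eigenface_features(train_images, test_images, k=150):
    """Fit whitened PCA on flattened training faces and project both sets."""
    X_train = train_images.reshape(len(train_images), -1).astype(np.float64)
    X_test = test_images.reshape(len(test_images), -1).astype(np.float64)
    pca = PCA(n_components=k, whiten=True)   # whitening decorrelates the components
    return pca.fit_transform(X_train), pca.transform(X_test), pca
```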

2.3 Facial Landmarks

We used DLib, the open source implementation of [3], to detect facial landmarks in our images. We detected 68 notable points for each facial image, which we then used as features for our classification. The landmarks found for each image are shown in Figure 1.

Figure 1: The first 12 test images with labeled facial landmarks, as generated by DLib's shape predictor.
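
A minimal sketch of landmark extraction with DLib is shown below; it assumes the standard pre-trained 68-point model file (shape_predictor_68_face_landmarks.dat), which is our assumption about the exact model the authors used.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_features(gray_image):
    """Return a flat (136,) vector of 68 (x, y) landmark points, or None."""
    faces = detector(gray_image, 1)            # upsample once to catch small faces
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    return np.array(points, dtype=np.float64).ravel()
```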

2.4 Multiclass Classification

We used an SVM with a radial basis kernel to classify our manually extracted features. We used SciPy's implementation of SVM for these purposes. To optimize the performance of the SVM, we performed grid-search cross-validation on the training and validation data to optimize the free parameters C and γ in our model.

We also used neural networks with one to two hidden layers to classify our features. We used Google's TensorFlow neural network library for these purposes. For both one-layer and two-layer neural networks, we performed model selection on the number of hidden nodes and the learning rate to optimize the final performance.
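
The grid search itself can be written compactly with scikit-learn (again an assumed stand-in for the SVM implementation the paper names); the parameter grids mirror the ranges reported in Section 3, while the 3-fold cross-validation split is our assumption.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": [1, 5, 10, 50, 100, 500, 1000, 5000],
    "gamma": [5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, n_jobs=-1)
# search.fit(train_features, train_labels)    # features from PCA or landmarks
# print(search.best_params_, search.best_score_)
```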

2.5 Convolutional Neural Network

To implement our convolutional neural network, we used Google's TensorFlow neural network library. Our model follows the architecture described in [4], with a few differences in the top few layers. In particular, we use fully connected layers instead of locally connected layers. The model described in [4] was designed for ImageNet's object classification, but it translated well to our problem of classifying facial expressions.

2.5.1 Network Structure

We implemented a deep CNN with five layers, which we will refer to as conv1, conv2, local3, local4, and softmax linear. This structure is illustrated in Figure 2.

The two convolution layers, conv1 and conv2, divide the image into overlapping tiles and use those as inputs to the next layer. That is, the output can be written as

    y(i, j) = \sum_{di, dj} x(s_1 i + di, \; s_2 j + dj) \, f(di, dj)

where y(\cdot, \cdot) is the output, di and dj range over the tile size, s_1 and s_2 are the strides between convolution tiles along each dimension, x(\cdot, \cdot) is the input, and f(\cdot, \cdot) are the filters, i.e. the weights for the layer. For our model, we used 5 by 5 pixel tiles with 1 pixel strides. We did not regularize these convolution layers.

As illustrated in Figure 2, after each convolution layer we performed a pool operation and a norm operation. The max pooling operation takes the maxima among input tiles. That is,

    y(i, j) = \max_{di, dj} x(s_1 i + di, \; s_2 j + dj)

Note that this is not a layer in the usual sense because it has no weights. The norm operation is a local response normalization, where each input is normalized by dividing it by the squared sum of the inputs within a certain radius of it. This technique is described in more detail in [4], and serves as a form of "brightness normalization" on the input.

The local3 and local4 layers are fully connected layers with rectified linear activation f(x) = max(x, 0). We regularize both of these layers with the L2-loss of their weights because fully connected layers are very prone to overfitting.

Finally, the softmax linear layer is the linear transformation layer that produces the unnormalized logits used to predict the class of an input.
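
A compact TensorFlow 1.x-style sketch of this five-layer structure is given below. The filter counts, pooling windows, and hidden-layer widths are our own placeholder choices (the paper only specifies 5x5 tiles with stride 1), so treat it as an illustration of the conv1 -> pool/norm -> conv2 -> pool/norm -> local3 -> local4 -> softmax linear layout rather than the authors' exact graph.

```python
import tensorflow as tf  # TensorFlow 1.x-style graph construction

def weight(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

def bias(n):
    return tf.Variable(tf.constant(0.1, shape=[n]))

def inference(images):
    """images: [batch, 40, 40, 1] whitened crops -> unnormalized 7-class logits."""
    # conv1: 5x5 tiles, stride 1 (64 feature maps assumed), then pool and norm.
    conv1 = tf.nn.relu(tf.nn.conv2d(images, weight([5, 5, 1, 64]),
                                    strides=[1, 1, 1, 1], padding="SAME") + bias(64))
    pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1], padding="SAME")
    norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)

    # conv2: a second 5x5 convolution, again followed by pool and norm.
    conv2 = tf.nn.relu(tf.nn.conv2d(norm1, weight([5, 5, 64, 64]),
                                    strides=[1, 1, 1, 1], padding="SAME") + bias(64))
    pool2 = tf.nn.max_pool(conv2, ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1], padding="SAME")
    norm2 = tf.nn.lrn(pool2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)

    # local3, local4: fully connected ReLU layers (widths 384 and 192 assumed).
    flat = tf.reshape(norm2, [-1, 10 * 10 * 64])   # 40x40 input, two stride-2 pools
    local3 = tf.nn.relu(tf.matmul(flat, weight([10 * 10 * 64, 384])) + bias(384))
    local4 = tf.nn.relu(tf.matmul(local3, weight([384, 192])) + bias(192))

    # softmax linear: linear transformation producing the logits.
    return tf.matmul(local4, weight([192, 7])) + bias(7)
```
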
2.5.2 Training

The loss function we minimized was primarily the cross entropy between the multinoulli distribution formed by the normalized logits output by our CNN and the probability distribution that is 1 for the correct class and 0 for all other classes. The cross entropy between two probability distributions p, q is

    H(p, q) = \mathbb{E}_p[-\log q] = -\sum_x p(x) \log q(x)

which measures the divergence between the two discrete probability distributions. In addition, we added standard weight decay, i.e. L2-loss, on the weights of the local3 and local4 layers for weight regularization.

For training, we fed the model randomly distorted training examples (see subsection 2.5.3) in randomly shuffled batches of 128 samples each. For each of these examples, we determined our loss based on our current inference. After each batch, we performed back-propagation using gradient descent to learn the weights; we call each completed batch a step.

One technique which improved model accuracy by 3% when implemented was using moving averages of the learned variables. During training, we calculate the moving averages of all learned variables, and during model evaluation we substitute the moving averages for the learned parameters.

Figure 2: TensorFlow computation graph for the CNN we implemented, illustrating each of the layers and the computation done between each layer.

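
Under the same TensorFlow 1.x assumptions as above, the training objective and the moving-average trick can be sketched as follows; the weight-decay coefficient, the decay constant 0.999, and the use of sparse softmax cross entropy are our illustrative choices, not values reported in the paper.

```python
import tensorflow as tf  # TensorFlow 1.x style

def training_op(logits, labels, weights_to_decay, learning_rate, global_step):
    """Cross-entropy loss + L2 weight decay, gradient descent, moving averages."""
    # Cross entropy between the softmax of the logits and the one-hot target.
    cross_entropy = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    # Standard weight decay (L2 loss) on the fully connected layers only.
    weight_decay = tf.add_n([tf.nn.l2_loss(w) for w in weights_to_decay]) * 0.004
    total_loss = cross_entropy + weight_decay

    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train_step = optimizer.minimize(total_loss, global_step=global_step)

    # Track moving averages of all trainable variables; at evaluation time the
    # averaged values are used in place of the raw learned parameters.
    ema = tf.train.ExponentialMovingAverage(0.999, global_step)
    with tf.control_dependencies([train_step]):
        train_op = ema.apply(tf.trainable_variables())
    return total_loss, train_op
```
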
2.5.3 Minimizing Overfitting

Since the size of our training set was 28,709 labeled examples, and our neural net needed to go through many more than 28,000 examples to achieve convergence, we employed a few techniques to ensure that our convolutional neural net did not overfit to the training set. Overfitting is also a risk because our top layers were fully connected, which means that our model could easily overfit using them.

We first implemented learning rate decay on our convolutional neural net, so that the learning rate decreased as the net was trained through more examples. We used exponential decay, so that the learning rate decayed by a factor of 0.1 after training through 1,200,000 examples, and we had it decay in a step-change manner, as is visible in Figure 8. We found that the step-change learning rate decay worked better than a continuous exponential decay. This could be because maintaining a high learning rate initially ensured that the CNN was trained towards a good local optimum in fewer steps, whereas a steadily decreasing learning rate could limit the range of the neural network.

In addition, we implemented distortion of the images while training to artificially increase the size of our training set and to make our convolutional neural net more robust to slightly distorted inputs. We processed our images in a few different ways. We cropped our 48 by 48 pixel images to a 40 by 40 pixel box, centrally for model evaluation and randomly for training. Then we approximately whiten the photos, scaling the pixel values linearly so that they have a mean of 0 and a standard deviation of 1, essentially normalizing them. This ensures consistent inputs to our neural network.

For training in particular, we performed a few more distortions to artificially increase our training set: we randomly flipped each image from left to right, randomly distorted the image brightness, and randomly distorted the image contrast. These random distortions greatly improved the performance of the model, raising the correct classification rate from 45.4% to 53.8% when initially implemented.
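
These preprocessing steps map directly onto TensorFlow 1.x image ops. The sketch below shows one plausible per-image pipeline plus the staircase learning-rate schedule; the brightness/contrast ranges, the initial learning rate, and the conversion of 1,200,000 examples into steps are our assumptions.

```python
import tensorflow as tf  # TensorFlow 1.x style

def distort_for_training(image_48x48):
    """Random crop, flip, brightness and contrast jitter, then whitening."""
    image = tf.random_crop(image_48x48, [40, 40, 1])
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.4)
    image = tf.image.random_contrast(image, lower=0.6, upper=1.4)
    # Scale to zero mean, unit standard deviation ("approximate whitening").
    return tf.image.per_image_standardization(image)

def preprocess_for_eval(image_48x48):
    """Central crop and whitening only, with no random distortion."""
    image = tf.image.resize_image_with_crop_or_pad(image_48x48, 40, 40)
    return tf.image.per_image_standardization(image)

# Staircase exponential decay: drop the rate by 10x roughly every 1,200,000
# examples, i.e. every 1_200_000 / 128 = 9375 steps at a batch size of 128
# (initial rate of 0.1 is assumed).
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(0.1, global_step,
                                           decay_steps=9375, decay_rate=0.1,
                                           staircase=True)
```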

3 Results and Analysis

3.1 PCA

We used the first k = 150 principal components of the training data as our eigenface basis. We found that this many eigenfaces explained 95% of the variance seen in our training data. However, we found that the features extracted from PCA were not very powerful representations of the data. In Figure 3, we show the images obtained from PCA against the original images in the test set.

Figure 3: The first twelve images in the test set projected onto the 150-dimensional eigenface basis.

We then optimized the RBF SVM on the new features, and found that the optimal set of parameters, C = 5 and γ = 0.005, yielded a test accuracy of 41%. The model selection grid search can be seen in Figure 4. Here, we see that we have indeed converged upon at least a local optimum over the ranges C = {1, 5, 10, 50, 100, 500, 1000, 5000} and γ = {5e-5, 0.0001, 0.0005, 0.001, 0.005, 0.01}.

Figure 4: Cross validation on SVM with PCA features. Test accuracy for the best parameters was 41%.

We also fed the features into our neural network for classification. We found that although different combinations of parameters yielded different convergence behaviors, those we tried all converged to an accuracy of at most 40%. The convergence of a two-layer neural net with 80 hidden nodes is shown in Figure 5.

Figure 5: The convergence of a two-layer neural net with 80 hidden nodes for features extracted with PCA.

The mediocre classification accuracy on features extracted using PCA is unsurprising given the data. The features extracted were sensitive to the variance in the individual faces of the subjects, in addition to their facial expressions. Therefore our standard models of classification were unable to pick out the significant features to identify facial expressions. To address this problem we attempted using facial landmarks, which are more robust to variation among individual faces.

3.2 Facial Landmarks

The landmarks extracted from facial images are shown in Figure 1. For the RBF SVM, we performed the same model selection grid search as described above, which can be seen in Figure 6. We found that the optimal set of parameters yielded a test accuracy of 52%, an improvement over the features extracted using PCA. Again we see that we have converged upon an optimum over the ranges C = {1, 5, 10, 50, 100, 500} and γ = {1e-7, 5e-7, 1e-6, 5e-6, 1e-5, 5e-5}.

Figure 6: Cross validation on SVM with facial landmarks. Test accuracy for the best parameters was 52%.

We again fed our features into a neural network for classification, and found that the models we tried all converged to the same value of around 50%, albeit with slightly different convergence behaviors. The convergence of a two-layer neural net with 100 hidden nodes is shown in Figure 7.

Figure 7: The convergence of a two-layer neural net with 100 hidden nodes for facial landmark features.

Here we note that although there was an increase in performance, the performance of both methods of classification was not impressive. In an attempt to improve the descriptiveness of our features, we normalized the landmark points with respect to the bounding box of the face, to minimize variance among individual feature dimensions. However, we found that this did not aid test accuracy. We think this is because we did not extract enough landmark points to capture the fine facial changes between different expressions. Related to this is the quality of the landmark detector; however, such work is beyond the scope of this project. To improve performance, we experimented with convolutional nets for both feature extraction and classification.

3.3 Convolutional Neural Network

We trained our model on a Google Cloud Compute server with 4 CPUs and 15 GB of RAM for 20 hours, using a training set of 28,709 labeled examples, and tested against a test set of 7,178 labeled examples. The model ran for 24,000 steps within those 20 hours, being fed a total of 3,072,000 distorted training examples. Our model achieved a correct classification rate of 65.2% on the 7-way classification problem, which is comparable to the 69% correct classification rate of the winning model on Kaggle. Our model determines the correct label within the two highest logits with 82.3% accuracy, and within the three highest logits with 90.5% accuracy.

We can see how the total loss and the loss across the layers changed along with the learning rate in Figure 8. There are a few interesting patterns. First, the weight decay loss on the local3 and local4 layers fell very quickly (by the 5000th step) and did not decrease past that, while the cross entropy loss was still decreasing. The local4 loss even increased slightly.

In addition, the total loss, and in particular the cross entropy loss, dropped dramatically once the learning rate dropped according to the exponential learning rate decay schedule. This makes sense, since it allows the neural net to much more finely optimize the weights in the space it settled into with the larger learning rate. Finally, at the extremely small learning rate of 0.001, the loss converged at around 0.5, and any additional learning in the model at this point did not increase the classification success on the test set.

Figure 8: The learning rate, total loss, cross entropy loss, and regularized loss of layers local3 and local4 of the convolutional neural net over time. The x-axis is the step, and the y-axis is the value.

We can also see the change in the parameters of the model, the weights and biases, and how they changed over time in Figure 9.

Figure 9: Histograms of the weights and biases of the layers of the CNN over time. The x-axis is the step, and the y-axis is the value.


We can see that the regularized layers local3 and local4 had weights that quickly converged and stayed with approximately the same distribution. The weights on the non-regularized layers, i.e. conv1, conv2, and softmax linear, mostly diverged with each step, but with the smaller learning rate they stopped increasing and maintained approximately the same distribution. Finally, it is interesting that the biases for layers conv1 and local4 were learned to be extremely skewed distributions. This indicates that the performance of the network could potentially be improved by initializing those weights at a more skewed distribution, decreasing the initial learning time.

One feature which could be implemented and would likely improve the performance of the algorithm is dropout, which could further prevent overfitting. This would randomly drop individual nodes with a certain probability p at each step, and would increase the necessary training time by a factor of 1/p.
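
As a rough illustration of what dropout would add, the NumPy sketch below applies an inverted-dropout mask to a layer's activations during training; the keep probability of 0.5 is a typical choice, not a value from the paper.

```python
import numpy as np

def dropout(activations, keep_prob=0.5, training=True, rng=np.random):
    """Inverted dropout: zero units with prob 1 - keep_prob, rescale the rest."""
    if not training:
        return activations                      # no masking at evaluation time
    mask = rng.binomial(1, keep_prob, size=activations.shape)
    return activations * mask / keep_prob       # rescale so expectations match

h = np.array([0.2, 1.5, 0.0, 3.1])
print(dropout(h))                               # roughly half the units survive
```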

4 Conclusion

In this project we explored the classification of facial images into seven emotion classes: anger, disgust, fear, happiness, sadness, surprise, and neutral. We found that the representation of faces using eigenfaces from PCA was not robust or descriptive enough for successful classification; the variance between features was explained more by the differences in individual faces than by the differences in expressions. We obtained a test accuracy of 41% using this method on our dataset. We found that the representation using facial landmarks extracted from subject faces was more successful. However, the number of landmarks and the accuracy of the landmarks extracted were not particularly high, and we obtained a test accuracy of 52% using this method on our dataset.

When we applied a convolutional neural network with two convolutional layers and two regularized fully connected hidden layers, we found that test accuracy improved significantly. We also found that certain practices during training, such as randomly distorting, brightening, darkening, cropping, and flipping images, helped build a more robust predictor. Our convolutional network obtained a test accuracy of 65% when applied to our dataset. We also found that 82% of the time the true label was in the top 2 CNN predictions, and 91% of the time the true label was in the top 3 predictions. Further work on this problem could involve including dropout to further reduce overfitting during training and exploring different network structures.

References

[1] Lawrence, S.; Giles, C. L.; Tsoi, A. C.; Back, A. D. (1997) "Face Recognition: A Convolutional Neural-Network Approach" IEEE Transactions on Neural Networks 8 (1):98-113

[2] Matsugu, M.; Mori, K.; Mitari, Y.; Kaneda, Y. (2003) "Subject independent facial expression recognition with robust face detection using a convolutional neural network" Neural Networks 16 (5):555-559

[3] Kazemi, V.; Sullivan, J. (2014) "One Millisecond Face Alignment with an Ensemble of Regression Trees" The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

[4] Krizhevsky, A.; Sutskever, I.; Hinton, G. E. (2012) "ImageNet Classification with Deep Convolutional Neural Networks" Advances in Neural Information Processing Systems 25 (1):1097-1105

[5] Samal, A.; Iyengar, P. (1992) "Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey" Pattern Recognition 25 (1):65-77

[6] Bartlett, M. S.; Littlewort, G.; Frank, M.; Lainscsek, C.; Fasel, I.; Movellan, J. (2005) "Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior" The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
