...speeches of four US presidents and party political manifestos of two British political parties during the period between 1974 and 1997 are analysed. The main purpose of undertaking this comparative study of British and American political discourse is evident: these discourses embody intriguing and complex expressions of cultural values and political differences as reflected in their respective linguistic contexts. The key findings are that metaphors from the domains of conflict, journey and buildings are common to both corpora. However, the British corpus contains metaphors that draw on the source domain of plants, whereas the American corpus draws heavily on source domains such as fire and light and the physical environment, which do not appear in the British corpus. These variations therefore reveal significant differences in metaphor use between the two sets of political discourse and point to underlying cultural differences. Keywords: metaphor, corpus, discourse, manifesto, politics. Introduction: Political and socio-cultural dimensions have been applied extensively in all kinds of linguistic studies on the...
Words: 6092 - Pages: 25
...Nigerian English variety comparable to British or American Standard English exists. Codification is one such step, but prior to it must come the compilation of an extensive database of English language use in Nigeria and the application of empirical methods to examining and determining the character of English in the Nigerian context, so that the continuum of forms of the language can be properly ascertained, classified and documented. With such reliable evidence, based on valid findings arising from empirical investigations, we can then hope for realistic descriptions of English in Nigeria that qualify for codification for general use as a representational variety of English in Nigeria. Key words: Codification, Nigerian English, Corpus and...
Words: 6571 - Pages: 27
...Lexical cohesion and the organization of discourse. First year report. PhD student: Ildikó Berzlánovich. Supervisors: Prof. Dr. Gisela Redeker, Dr. Markus Egg. Center for Language and Cognition Groningen, University of Groningen, 2008.
Table of contents:
1 Introduction (p. 1)
2 Lexical cohesion (p. 2)
2.1 Lexical cohesion and discourse organization (p. 2)
2.1.1 Introduction (p. 2)
2.1.2 Lexical cohesion and genre (p. 2)
2.1.3 Lexical cohesion and coherence (p. 3)
2.2 The role of lexical cohesion in the segmentation and centrality of discourse units (p. 5)
2.2.1 Introduction (p. 5)
2.2.2 Discourse segmentation (p. 6)
2.2.3 Central discourse units (p. 8)
2.2.4 Conclusion ...
Words: 14120 - Pages: 57
...An Approach to Corpus-based Discourse Analysis: The Move Analysis as Example THOMAS A. UPTON AND MARY ANN COHEN Abstract This article presents a seven-step corpus-based approach to discourse analysis that starts with a detailed analysis of each individual text in a corpus, which can then be generalized across all texts of the corpus, providing a description of the typical patterns of discourse organization that hold for the entire corpus. This approach is applied specifically to a methodology that is used to analyze texts in terms of the functional/communicative structures that typically make up texts in a genre: move analysis. The resulting corpus-based approach to conducting a move analysis significantly enhances the value of this often-used (and misused) methodology, while at the same time providing badly needed guidelines for a methodology that lacks them. A corpus of ‘birthmother letters’ is used to illustrate the approach. Biber et al. (2007) explore how discourse structure and organization can be investigated using corpus analysis; they offer a structured, seven-step corpus-based approach to discourse analysis that results in generalizable descriptions of discourse structure. This article draws on the themes of that book, but focuses in particular on analyses that use theories of the communicative or functional purposes of text as the starting point for understanding why texts in a corpus are structured the way they are, before moving to a closer examination and description of...
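The abstract above describes the move-analysis methodology only in general terms. As a purely illustrative sketch of the final "generalize across all texts" stage, the following Python fragment tallies move types across a small invented corpus; the move labels and letters are placeholders, not Upton and Cohen's actual coding scheme for the 'birthmother letters'.

```python
# Sketch: generalizing hand-coded move annotations across a corpus.
# Move labels and texts are invented placeholders for illustration only.
from collections import Counter

# Each text is represented by the ordered list of moves identified in it.
corpus_moves = {
    "letter_01": ["introduce_self", "describe_circumstances", "request_contact"],
    "letter_02": ["introduce_self", "express_emotion", "request_contact"],
}

move_freq = Counter(m for moves in corpus_moves.values() for m in moves)
texts_with_move = Counter(m for moves in corpus_moves.values() for m in set(moves))

for move, freq in move_freq.most_common():
    share = texts_with_move[move] / len(corpus_moves)
    print(f"{move}: {freq} occurrence(s), in {share:.0%} of texts")
```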
Words: 8985 - Pages: 36
...Investigating the Complementary Polysemy of the Noun ‘Destruction' in an English to Arabic Parallel Corpus Hammouda Salhi University of Carthage, Tunisia hammouda_s@hotmail.com Abstract: This article investigates a topic at the interface between translation studies, lexical semantics and corpus linguistics. Its general aim is to show how translation studies could profit from the work done in both lexical semantics and corpus linguistics in an attempt to help ‘endear' linguists to translators (Malmkjær, 1998). The specific objective is to capture the semantic and pragmatic behavior of the noun ‘destruction' from its different translations into Arabic. The data are taken from an English-Arabic parallel corpus collected from UN texts and their translations (hereafter EAPCOUNT). While ‘destruction' may seem monosemous, it turns out, once its occurrences are explored, to be highly polysemous and to exhibit complementary polysemy, in which a number of alternations can be captured. These findings are broadly in line with the results reached in recent developments in lexical semantics, and more particularly the Generative Lexicon (GL) theory developed by James Pustejovsky. Some concrete suggestions are made at the end on how to enhance the relation between linguists and translators and their mutual cooperation. Key words: lexical semantics, corpus linguistics, translation studies, complementary polysemy, coercion, parallel corpora, lexical ambiguities ...
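As a rough illustration of how translation equivalents of a noun can be collected from a sentence-aligned parallel corpus, here is a minimal Python sketch; the aligned pairs and candidate Arabic renderings below are invented examples, not data drawn from EAPCOUNT.

```python
# Sketch: collecting Arabic renderings of an English noun from a
# sentence-aligned parallel corpus. The pairs and candidate equivalents
# below are invented examples, not EAPCOUNT data.
from collections import Counter

aligned_pairs = [
    ("the destruction of infrastructure", "تدمير البنية التحتية"),
    ("weapons of mass destruction", "أسلحة الدمار الشامل"),
    ("the destruction caused by the storm", "الدمار الذي سببته العاصفة"),
]

equivalents = ["تدمير", "دمار"]  # hypothetical candidate renderings

counts = Counter()
for en, ar in aligned_pairs:
    if "destruction" in en.lower():
        for eq in equivalents:
            if eq in ar:
                counts[eq] += 1
                break  # record at most one equivalent per sentence pair

print(counts.most_common())  # distribution of renderings across the sample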
Words: 8055 - Pages: 33
...The Call Triangle: student, teacher and institution Learning register variation. A web-based platform for developing diaphasic skills Adriano Allora, Elisa Corino and Cristina Onesti Dipartimento di Scienze Letterarie e Filologiche - Università di Torino Via Sant’Ottavio 20, Torino 10124, Italy Abstract The present paper shows the first results of a linguistic project devoted to the construction of web learning tools for reinforcing sensitivity to diaphasic varieties and for learning style variation in L2/LS learning. Keywords: L2/LS learning; style variation; web-based platform 1. Introduction This paper presents a project that analyses formal varieties of European online languages, working on a suite of Net Mediated Communication (NMC) corpora and studying lexical, discourse and macro-syntactic phenomena. The practical aim of these studies is to develop freely available resources devoted to L2/LS learning, together with a web-based delivery platform. 2. The VALERE project The project (‘Varietà Alte di Lingue Europee in REte’: Formal Varieties in Newsgroups of European Languages: Structural Features, Interlinguistic Comparison and Teaching Applications) aims at investigating the wide range of formal language in some major European languages, with particular attention to NMC. The research is based on the NUNC (i.e. Newsgroup UseNet Corpus), a collection of corpora created from newsgroup messages and implemented at the University...
Words: 1730 - Pages: 7
...Investigating the presentation of speech, writing and thought in spoken British English: A corpus-based approach Dan McIntyre a, Carol Bellard-Thomson b, John Heywood c, Tony McEnery c, Elena Semino c and Mick Short c a Liverpool Hope University College, UK, b University of Kent at Canterbury, UK, c Lancaster University, UK Abstract In this paper we describe the Lancaster Speech, Writing and Thought Presentation (SW&TP) Spoken Corpus. We have constructed this corpus to investigate the ways in which speakers present speech, thought and writing in contemporary spoken British English, with the associated aim of comparing our findings with the patterns revealed by the previous Lancaster corpus-based investigation of SW&TP in written texts. We describe the structure of the corpus and the archives from which its composite texts are taken. These are the spoken section of the British National Corpus, and archives currently housed in the Centre for North West Regional Studies (CNWRS) at Lancaster University. We discuss the decisions that we made concerning the selection of suitable extracts from the archives, the re-transcription that was necessary in order to use the original CNWRS archive texts in our corpus, and the problems associated with the original archived transcripts. Having described the sources of our corpus, we move on to consider issues surrounding the mark-up of our data with TEI-conformant SGML, and the problems associated with capturing in electronic form the CNWRS...
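To give a concrete, purely hypothetical flavour of what TEI-style mark-up of one utterance might look like, the short Python sketch below builds a minimal XML fragment; the element and attribute names are assumptions for illustration and do not reproduce the actual Lancaster SW&TP annotation scheme.

```python
# Sketch: a TEI-style XML fragment for one transcribed utterance.
# Element and attribute names are illustrative assumptions only; they are
# not the actual Lancaster SW&TP corpus mark-up.
import xml.etree.ElementTree as ET

utterance = ET.Element("u", attrib={"who": "SPK1"})
seg = ET.SubElement(utterance, "seg", attrib={"type": "direct_speech"})
seg.text = "and she said, I'm not doing that again"

print(ET.tostring(utterance, encoding="unicode"))
# -> <u who="SPK1"><seg type="direct_speech">and she said, I'm not doing that again</seg></u>
```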
Words: 10539 - Pages: 43
...The media Anne O’Keeffe Historical overview of media discourse ‘The media’ is a very broad term, encompassing print and broadcast genres (that is, anything from newspapers to chat shows) and, latterly, much more besides, as new media emerge in line with technological leaps. The study of ‘the media’ comes under the remit of media studies from perspectives such as their production and consumption, as well as their aesthetic form. The academic area of media studies cuts across a number of disciplines including communication, sociology, political science, cultural studies, philosophy and rhetoric, to name but a handful. Meanwhile, the object of study, ‘the media’, is an ever-changing and ever-growing entity. The study of ‘the media’ also falls within the scope of applied linguistics because at the core of these media are language, communication and the making of meaning, which are obviously of great interest to linguists. As Fairclough (1995a: 2) points out, the substantively linguistic and discoursal nature of the power of the media is a strong argument for analysing the mass media linguistically. Central to the connection between media studies and studies of the language used in the media (media discourse studies) is the importance placed on ideology. A major force behind the study of ideology in the media is Stuart Hall (see, for example, Hall 1973, 1977, 1980, 1982). Hall (1982), in his influential paper, notes that the study of media (or ‘mass communication’) has had...
Words: 7914 - Pages: 32
...nature and purpose of his publication as "a cross between a dictionary (lexicon) and an encyclopaedia" (Crystal, 2004: vii). For each term in the glossary there is information one would look up in a dictionary, and the sort of knowledge one would expect to find in an encyclopaedia, such as an etymology of the entry and a hint of its sociolinguistic use. For example: newbie A newcomer to a chatgroup or virtual-world environment, especially one who has not yet learned the way to behave when participating in the dialogue. >>chatgroup; netiquette; virtual world (Crystal, 2004: 79) The coinage of the neologism Slexipedia compounds the acronym SL with lexipedia to provide a term to describe the Second Life-specific lexis in my corpus. In addition to providing an SL glossary according to Crystal's method (Appendix X), this chapter investigates the creative and innovative word-formation processes behind the SL English and Arabic vocabulary coined by its residents. Since use of vocabulary reflects identity (Crystal, 2001; Benwell and Stokoe, 2006; Boellstorff, 2008), the final concern of this chapter is the manner in which these SL terms are used in conversational interaction inworld, reflecting the social purposes and circumstances in which these words are utilized. Forming a coherent slexipedia will provide further insight towards an account of SL identity, or Slidentity. I argue that communication in SL shares many attributes with internet chat, as they are both...
Words: 11436 - Pages: 46
...Lexical borrowing (Czech: slovní výpůjčky) = the adoption of a word from another language with the same meaning. English is tolerant of other languages, an insatiable borrower (about 70% of its vocabulary is of non-Anglo-Saxon origin); it welcomes foreign words and is not a homogeneous language like French (from which the majority of expressions were taken). Reasons: the language feels a need for a new word; to denote a special concept (Sputnik, which gradually disappeared from the language); a certain language holds a prestigious position (a matter of fashion, but also overuse of English words; a matter of political force); distinction of functional style (a matter of development) – three synonymous expressions of different origin (Anglo-Saxon origin: home; French words, with additional meanings: residence; Latin words: domicile; Greek origin, etc.). Layers of three origins: hunt/chase/pursue, rise/mount/ascend, ask/question (a certain degree of intensity)/interrogate. There is high tolerance in English; in French and German borrowing tends to be avoided; Czech had to defend its position against German, and linguists tried to set certain rules for using words = the re-establishment of the Czech language. English changes the pronunciation of borrowed words (English is simply a Germanic language, but more Romance in its vocabulary). The basic vocabulary = core vocabulary (be, have, do) is Anglo-Saxon; the surrounding periphery of the vocabulary may be borrowed (count a word each time it occurs). Waves of new adoptions: swift adoption – in some periods more words than usual are adopted, e.g. in the 13th century after the Norman Conquest; a natural, self-regulating mechanism – if there...
Words: 7575 - Pages: 31
...I am extremely grateful to him for providing me with the necessary links and material to start the project and to understand the concept of Twitter analysis using R. In this project, “Twitter Analysis using R”, I have applied sentiment analysis and text mining techniques to the hashtag “#Kejriwal”. The project was carried out in RStudio, which uses libraries of the R programming language. I am really grateful for the resourceful articles and websites of the R project, which helped me in understanding the tool as well as the topic. Also, I would like to extend my sincere regards to the support team of Edureka for their constant and timely support. Table of Contents: Introduction 4; Limitations 4; Tools and Packages Used 5; Twitter Analysis 6; Creating a Twitter Application 6; Working on RStudio – Building the Corpus 8; Saving Tweets 11; Sentiment Function 12; Scoring Tweets and Adding a Column 13; Import the CSV File 14; Visualizing the Tweets 15; Analysis & Conclusion 16; Text Analysis 17; Final Code for Twitter Analysis 19; Final Code for Text Mining 20; References 21. Introduction: Twitter is an amazing micro-blogging tool and an extraordinary communication medium. In addition, Twitter can also be an amazing open mine for text and social web analyses. Among the different software packages that can be used to analyze Twitter, R offers a wide variety of...
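The original project implements its sentiment function in R as a lexicon-based score (positive word matches minus negative word matches per tweet). As a minimal sketch of the same idea, here is a Python version in which the word lists and example tweets are invented placeholders.

```python
# Minimal sketch of lexicon-based tweet scoring (positive minus negative
# word matches). Word lists and tweets are invented placeholders; the
# original project does this in R.
import re

POSITIVE = {"good", "great", "win", "support"}
NEGATIVE = {"bad", "corrupt", "lose", "scam"}

def score_tweet(text: str) -> int:
    """Return (# positive words) - (# negative words) for one tweet."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "Great win for the city today",
    "Another scam, a bad day all round",
]
for t in tweets:
    print(score_tweet(t), t)  # prints 2 for the first tweet, -2 for the second
```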
Words: 2107 - Pages: 9
...Criterion SM Online Essay Evaluation: An Application for Automated Evaluation of Student Essays Jill Burstein Educational Testing Service Rosedale Road, 18E Princeton, NJ 08541 jburstein@ets.org Martin Chodorow Department of Psychology Hunter College 695 Park Avenue New York, NY 10021 martin.chodorow@hunter.cuny.edu Claudia Leacock Educational Testing Service Rosedale Road, 18E Princeton, NJ 08541 cleacock@ets.org Abstract This paper describes a deployed educational technology application: the CriterionSM Online Essay Evaluation Service, a web-based system that provides automated scoring and evaluation of student essays. Criterion has two complementary applications: E-rater®, an automated essay scoring system and Critique Writing Analysis Tools, a suite of programs that detect errors in grammar, usage, and mechanics, that identify discourse elements in the essay, and that recognize elements of undesirable style. These evaluation capabilities provide students with feedback that is specific to their writing in order to help them improve their writing skills. Both applications employ natural language processing and machine learning techniques. All of these capabilities outperform baseline algorithms, and some of the tools agree with human judges as often as two judges agree with each other. 2. Application Description Criterion contains two complementary applications that are based on natural language processing (NLP) methods. The scoring application, e-rater®, extracts...
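As an illustration of the general pattern of scoring essays with shallow features plus a learned model (and emphatically not a description of e-rater's actual features or algorithms), here is a toy Python sketch; the features, training essays, and scores are invented.

```python
# Toy sketch of feature-based automated essay scoring. This is NOT e-rater
# or Criterion: the features, training data, and model are invented
# placeholders that only illustrate the NLP + machine-learning pattern.
import re
from sklearn.linear_model import LinearRegression

def features(essay: str) -> list:
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    long_words = sum(len(w) > 6 for w in words)  # crude lexical sophistication proxy
    return [len(words), avg_sentence_len, long_words]

# Tiny invented training set: (essay text, human score on a 1-6 scale).
train = [
    ("Short essay. Few words here.", 2.0),
    ("A considerably longer essay with several sentences. It develops an "
     "argument, offers supporting evidence, and reaches a clear conclusion.", 5.0),
]
X = [features(text) for text, _ in train]
y = [score for _, score in train]
model = LinearRegression().fit(X, y)

new_essay = "This essay presents a claim and supports it with two reasons."
predicted = model.predict([features(new_essay)])[0]
print(round(float(predicted), 2))
```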
Words: 5634 - Pages: 23
...effective at reproducing human assessment, which requires weighing the complexity and thoroughness of a ... Machine Learning Algorithms for Problem Solving in ... (Kulkarni, Siddhivinayak, 2012, p. 136): It consists of essays written by English language students who are studying English in their third or fourth year at university. The corpus currently has over 3 million words from students from 16 different native languages. The target for each ... Psychology of Learning and Motivation: Advances in ... (Brian H. Ross, 2002, p. 58): The larger the number and variety of essay grades there were to mimic, the better the human graders agreed with each ... for machine-learning techniques to outperform humans, for example, because they can compare every essay to every ... Artificial Intelligence in Education: Building Technology ... (Rosemary Luckin, Kenneth R. Koedinger, Jim E. Greer, 2007)...
Words: 745 - Pages: 3
...in deciding whether to create a trust. True False
2. A trust might be used by one running for a political office. True False
3. Like a corporation, the fiduciary reports and pays its own Federal income tax liability. True False
4. An estate’s income beneficiary generally must wait until the entity is terminated by the executor to receive any distribution of income. True False
5. With respect to a trust, the terms creator, donor, and grantor are synonyms. True False
6. Corpus, principal, and assets of the trust are synonyms. True False
7. If provided for in the controlling agreement, a trust might terminate when the income beneficiary reaches age 35. True False
8. The decedent’s estate must terminate within four years of the date of death. True False
9. Trusts can select any fiscal Federal income tax year. True False
10. A complex trust pays tax on the income that it retains and adds to corpus. True False
11. A complex trust automatically is exempt from the Federal AMT. True False
12. The first step in computing an estate’s taxable income is the determination of its gross income for the year. True False
13. Generally, capital gains are allocated to fiduciary income, because they arise from current-year transactions as directed by the trustee. True False
14. A realized loss is recognized by a trust when it distributes a non-cash asset. True False
15. A decedent’s...
Words: 16214 - Pages: 65
...QUALITY ASSESSMENT Translation quality assessment has become one of the key issues in translation studies. This comprehensive and up-to-date treatment of translation evaluation makes explicit the grounds of judging the worth of a translation and emphasizes that translation is, at its core, a linguistic operation. Written by the author of the world’s best known model of translation quality assessment, Juliane House, this book provides an overview of relevant contemporary interdisciplinary research on translation, intercultural communication and globalization, and corpus and psycho- and neuro-linguistic studies. House acknowledges the importance of the socio-cultural and situational contexts in which texts are embedded, and which need to be analysed when they are transferred through space and time in acts of translation, at the same time highlighting the linguistic nature of translation. The text includes a newly revised and presented model of translation quality assessment which, like its predecessors, relies on detailed textual and culturally informed contextual analysis and comparison. The test cases also show that there are two steps in translation evaluation: firstly, analysis, description and explanation; secondly, judgements of value, socio-cultural relevance and appropriateness. The second is futile without the first: to judge is easy, to understand less so. Translation Quality Assessment is an invaluable resource for students and researchers of translation...
Words: 66245 - Pages: 265