IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.8, August 2008

A Hierarchical Scheduling and Replication Strategy

A. Horri, R. Sepahvand, Gh. Dastghaibyfard
Department of Computer Science & Engineering, College of Engineering, Shiraz University, Molla Sadra Ave, Shiraz, Iran 71348-51154

Manuscript received August 5, 2008; revised August 20, 2008.

Summary
In data grids, huge amounts of data are generated and processed by users around the world. The objective of dynamic replication strategies is to reduce file access time by reducing network traffic, which in turn reduces job runtime. In this paper, a replication algorithm for a 3-level hierarchical structure and a matching scheduling algorithm are proposed. Simulation results with OptorSim show better performance (over 13%) compared to current algorithms.

Key words: Data replication, Data grid, Job scheduling, Simulation

1. Introduction
The grid, first proposed by Foster and Kesselman [1], is a mechanism for resource sharing and problem solving in heterogeneous environments. A grid is a wide-area, heterogeneous distributed system spanning multiple administrative domains, intended for applications that need huge amounts of computational and storage resources. An important kind of grid is the data grid, used in data-intensive applications such as High Energy Physics (HEP), genetics, and Earth observation. In such applications the data reach terabytes or even petabytes in size, and managing and storing them at a single location is very difficult or even impossible [1]. A data grid stores the data across decentralized sites and retrieves it from those sites for each application. The major obstacles that degrade data grid performance are the latency of wide-area networks and the bandwidth of the internet; the network characteristics between storage sites and processing sites therefore play an important role in a data grid. To increase the performance of a data grid, one needs to consider the following:

1- Deciding where to allocate a job with respect to the location of replicas and the computational capabilities of the sites, i.e. scheduling optimization.
2- Deciding from where to fetch replicas, considering the available network bandwidth between sites, i.e. short-term optimization.
3- Deciding which replica files to keep or delete when a site runs short of storage, i.e. long-term optimization or dynamic replication strategy.

Replication can be static or dynamic. In static replication, replicas are created, deleted and managed manually, which becomes tedious as files and user jobs grow; static replication cannot adapt to changes in user behavior. In a real scenario, where the data amount to petabytes and the user community numbers in the thousands worldwide, static replication is not feasible. Dynamic replication strategies overcome this problem: replica creation, deletion and management are done automatically, and the strategies adapt to changes in user behavior. Dynamic strategies are explained in section 2. Consistency is an important issue in replication; as in other papers, we avoid it by assuming that the access pattern is read-only for all replicas in the data grid. The remainder of the paper is organized as follows. Related work on replication and scheduling is given in section 3. Section 4 proposes a 3-level hierarchical structure for replication in data grids, based on a classification of networks, along with a novel algorithm for this structure. Section 5 covers the simulation results with OptorSim, and section 6 concludes the paper.

2. Replication and scheduling strategies
In this section some of the replication and scheduling strategies are explained, along with file access patterns. Dynamic replication strategies are as follows:

No replication: never replicate any file.
Best client: replicate to the best client, i.e. the site with the maximum number of requests for a specific file.
Plain caching: the site that requests a file stores a copy of it locally.
Fast spread: replicas are created along the path to the best client.

Dynamic scheduling strategies are as follows:

Random: jobs are distributed to sites at random.
Job data present: schedule a job to the site that holds most of its required files, minimizing the demand for new replicas.
Job least loaded: schedule a job to the site with the minimum queue length.
Job locality: schedule a job to local sites, i.e. according to the grid structure.

Finally, the file access patterns within a job are as follows:

Random: the job requests files with a random distribution.
Locally: the job requests local files first.
Geographical: the job requests both local and non-local files.

Selecting the appropriate strategy and access pattern for a data grid depends entirely on the running applications.
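
As an illustration, the following minimal Python sketch (not from the paper; the data model with `files`, `queue_length` and `region` fields is an assumption made for exposition) expresses the four dynamic scheduling strategies as site-selection functions.

```python
# Illustrative sketch of the four dynamic scheduling strategies.
# `sites` is a hypothetical list of dicts; `job` holds the job's
# requested file set and the submitting user's region.
import random

def schedule_random(sites, job):
    # Random: jobs are distributed to sites uniformly at random.
    return random.choice(sites)

def schedule_job_data_present(sites, job):
    # Job data present: pick the site holding most of the job's files,
    # minimizing the demand for new replicas.
    return max(sites, key=lambda s: len(s["files"] & job["files"]))

def schedule_least_loaded(sites, job):
    # Job least loaded: pick the site with the shortest job queue.
    return min(sites, key=lambda s: s["queue_length"])

def schedule_locality(sites, job):
    # Job locality: prefer sites in the submitter's own part of the
    # grid structure (modeled here by a region tag).
    local = [s for s in sites if s["region"] == job["region"]]
    return random.choice(local) if local else random.choice(sites)
```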

3. Related work

In this section recent work is reviewed. In [2], an algorithm for a 2-level hierarchical structure based on the internet hierarchy (BHR) is introduced; it considers only dynamic replication and not scheduling. Nodes in the first level are connected to each other with high-speed networks, and in the second level via the internet. The algorithm replicates a file to the requesting site if there is enough space. Otherwise, it accesses the file remotely if it is available at a site in the same region; failing that, it tries to make space by deleting files with the LRU (Least Recently Used) method and then replicates the file. It assumes that the master site always keeps a safe copy of a file before any deletion. In [3], a structure of a few networks connected via the internet is presented, and an algorithm similar to [2] is proposed together with scheduling. To replicate a file, it first computes the total transfer time from each candidate node and then selects the node with the shortest transfer time. In [4], dynamic replication placement (RP) is introduced, which categorizes data based on their properties; the categories are used for both job scheduling and replication. A job is then allocated to a site that holds files in the required category, which reduces the cost of file transfers. In [10], the grid environment is a hybrid of a tree structure with a ring topology: the tree provides scalability, and the ring makes it possible to access data with low latency. Nodes that are physically near each other are connected with high bandwidth and grouped together. The proposed algorithm tries to optimize the access rate, traffic rate, read cost and write cost for each file. In [11], centralized and decentralized replication algorithms are considered. In the centralized method, a replica master keeps a table that ranks the accesses of each file in descending order; files whose access count is below the average are removed from the table, and the remaining files are popped from the top and replicated using a response-time-oriented replica placement algorithm. In the decentralized method, every site records file accesses in its own table and exchanges this table with its neighbors; each domain thus knows the average number of accesses for each file, deletes the files accessed less than the average, and replicates the other files in its local storage.

4. The proposed method

In this section we first present the network structure considered in this paper, and then propose two algorithms, one for replication and the other for scheduling.

4.1 Network Structure
Consider a real scenario that is very common in academic and research centers. An academic center has several schools (or faculties) that are usually dispersed from each other; we call them regions. Within each school (region) there are several departments, and each department has its own LAN. The computers within each LAN constitute the grid nodes. A schematic structure of such a network is depicted in figure 1. The network is hierarchical, with three levels.

Figure (1) Grid structure in an academic center

Regions comprise the first level. Because of the long distances between them, regions are connected via the internet, which has low bandwidth. Regions may also represent the cities of a country or, at a larger scale, countries within a continent with no direct link between them. The next level contains the LANs within a region, which are connected to each other with moderately higher bandwidth than the internet. The third level comprises the computers within each LAN, connected by high bandwidth. These nodes have a computing element (CE), a storage element (SE), or a combination of both.
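
As a rough illustration, the three levels might be modeled as nested containers. The following sketch is an assumption made for exposition (the class and field names are invented, not taken from the paper).

```python
# A minimal sketch of the three-level hierarchy of section 4.1:
# regions linked via the internet (level 1), LANs within a region
# (level 2), and nodes with CE and/or SE within each LAN (level 3).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    has_ce: bool = True          # computing element present
    se_capacity_gb: float = 30   # storage element size (0 = no SE)
    files: set = field(default_factory=set)

@dataclass
class LAN:
    name: str
    nodes: list = field(default_factory=list)   # high bandwidth between nodes

@dataclass
class Region:
    name: str
    lans: list = field(default_factory=list)    # moderate bandwidth between LANs

# Links between regions go over the low-bandwidth internet.
grid = [Region("R1", [LAN("L1,1", [Node("S1,1,1")])])]
```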

4.2 The proposed replication algorithm
The algorithm first checks replica feasibility: if the requested file size is greater than the SE size, replication is infeasible and the file is accessed remotely. Next, it prepares a list of candidate replicas of the file and chooses the one whose site has the highest bandwidth to the requesting node. If the available space in the SE is greater than or equal to the requested file size, it replicates the file. If not, and the requested file is already available in the local LAN, the file is accessed remotely. Otherwise the SE is full, and some files must be deleted according to the following steps:

Step 1: Among the files stored at the requesting site, prepare a list of deletion candidates that are also available elsewhere in the local LAN, i.e. a copy remains in the LAN in case it is needed later. Sort this list with the LRU method and delete files from it until there is enough room for the requested file.
Step 2: If deleting all the files in Step 1 does not free enough space, repeat Step 1 for each LAN in the current region, in random order, until there is enough room for the requested file.
Step 3: If Steps 1 and 2 together do not free enough space, sort the remaining files with the LRU method and delete files from this list until there is enough room.

The complete replica algorithm is given in figure 2.

Figure (2) Proposed replica algorithm
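
The following self-contained Python sketch illustrates the decision logic described above under an assumed data model (`se_files`, `lan_files` and `region_lan_files` are invented names); selection of the highest-bandwidth source replica is omitted here, and figure 2 remains the authoritative statement of the algorithm.

```python
# A sketch of the proposed replica algorithm (section 4.2) under an
# assumed data model; not the paper's code.
import random

def lru_evict(se_files, candidates, need, free):
    """Delete candidate files in LRU order until `need` bytes are free."""
    for name in sorted(candidates, key=lambda n: se_files[n]["last_access"]):
        if free >= need:
            break
        free += se_files[name]["size"]
        del se_files[name]
    return free

def replicate(file_name, file_size, se_files, se_capacity,
              lan_files, region_lan_files):
    # se_files: {name: {"size", "last_access"}} for the requester's SE.
    # lan_files: set of names replicated elsewhere in the local LAN.
    # region_lan_files: list of such sets, one per other LAN in the region.
    if file_size > se_capacity:
        return "access remotely"            # replication infeasible
    free = se_capacity - sum(f["size"] for f in se_files.values())
    if free >= file_size:
        return "replicate"                  # enough room already
    if file_name in lan_files:
        return "access remotely"            # cheap to re-fetch within the LAN
    # Step 1: evict (LRU) files that still have a copy in the local LAN.
    free = lru_evict(se_files, set(se_files) & lan_files, file_size, free)
    # Step 2: evict files with copies in other LANs of this region,
    # visiting those LANs in random order.
    for other in random.sample(region_lan_files, len(region_lan_files)):
        if free >= file_size:
            break
        free = lru_evict(se_files, set(se_files) & other, file_size, free)
    # Step 3: as a last resort, evict any remaining files, LRU first.
    free = lru_evict(se_files, set(se_files), file_size, free)
    return "replicate" if free >= file_size else "access remotely"
```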

4.3 The proposed hierarchical scheduling algorithm
For efficient scheduling of a job, the algorithm determines the most appropriate region, LAN and site, in that order. The appropriate region (LAN, site) is the one that holds most of the requested files, measured by total size. This significantly reduces the total transfer time and, consequently, the network traffic. Formally, let:

S_{r,l,s} be the total size of the requested files available at site s of LAN l in region r;

L_{r,l} = ∪_{j=1}^{k} S_{r,l,j} be the total size of the requested files available in LAN l of region r, where k is the number of sites in LAN l;

R_r = ∪_{j=1}^{p} L_{r,j} be the total size of the requested files available in region r, where p is the number of LANs in region r.

Now the algorithm can be summarized as follows:

1- Compute R_r for each region.
2- Select the appropriate region, R_max = max_{j=1..q} R_j, where q is the number of regions, i.e. the region with the largest total size of available requested files.
3- Within R_max:
3.1- Select the appropriate LAN, L_{max,max} = max_{j=1..t} L_{max,j}, where t is the number of LANs in region R_max, i.e. the LAN with the largest total size of available requested files in R_max.
3.2- Within L_{max,max}:
3.2.1- Select the appropriate site, S_{max,max,max} = max_{j=1..u} S_{max,max,j}, where u is the number of sites in LAN L_{max,max}, i.e. the site with the largest total size of available requested files in L_{max,max}.
3.2.2- Schedule the job for execution at site S_{max,max,max}, LAN L_{max,max}, region R_max.
To give an example of how the scheduling algorithm works, assume the following network, based on the structure of section 4.1:

R1 = {L1,1(S1,1,1(f1,f2,f3), S1,1,2(f2,f4,f5), S1,1,3(f1,f4,f6)), L1,2(S1,2,1(f2,f4,f7), S1,2,2(f6,f7,f8))}
R2 = {L2,1(S2,1,1(f1,f2,f4), S2,1,2(f3,f5,f9)), L2,2(S2,2,1(f1,f3,f6), S2,2,2(f7,f8,f10))}

where fi at a site means file i is available there. For simplicity, assume all files have the same size (C = 100 MB), the bandwidths between regions, LANs and sites are as in table 1, and a job requires 5 files, J = {f1,f2,f3,f9,f10}. The proposed scheduling algorithm selects R2 as the appropriate region, within R2 selects L2,1 as the appropriate LAN, and within L2,1 selects S2,1,1 as the appropriate site for the job. Files f3 and f9 must then be replicated from site S2,1,2 (in the same LAN) and file f10 from site S2,2,2 (in another LAN of the same region), so the total file transfer time is 2*C/1000 + C/100 = 1.2 seconds. A simple greedy scheduler that selects the site with the most available files would instead pick S1,1,1 and assign the job there; then f9 would be replicated from S2,1,2 and f10 from S2,2,2, both across regions, and the total file transfer time would be 2*C/10 = 20 seconds.
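
To make the selection concrete, the following self-contained sketch (an illustration with an assumed representation, not the paper's code) runs the three-level selection on the example network above. Since all files have the same size, "largest total size" reduces to counting matching files.

```python
# Hierarchical scheduling (section 4.3) on the example network.
grid = {
    "R1": {"L1,1": {"S1,1,1": {"f1", "f2", "f3"},
                    "S1,1,2": {"f2", "f4", "f5"},
                    "S1,1,3": {"f1", "f4", "f6"}},
           "L1,2": {"S1,2,1": {"f2", "f4", "f7"},
                    "S1,2,2": {"f6", "f7", "f8"}}},
    "R2": {"L2,1": {"S2,1,1": {"f1", "f2", "f4"},
                    "S2,1,2": {"f3", "f5", "f9"}},
           "L2,2": {"S2,2,1": {"f1", "f3", "f6"},
                    "S2,2,2": {"f7", "f8", "f10"}}},
}
job = {"f1", "f2", "f3", "f9", "f10"}

def available(file_sets, job):
    # Requested files held anywhere below this level (union of sites).
    return set().union(*file_sets) & job

def schedule(grid, job):
    # Steps 1-2: region holding the largest share of requested files (R_max).
    region = max(grid, key=lambda r: len(available(
        [files for lan in grid[r].values() for files in lan.values()], job)))
    # Step 3.1: within R_max, the LAN with the largest share (L_max,max).
    lan = max(grid[region],
              key=lambda l: len(available(grid[region][l].values(), job)))
    # Step 3.2.1: within L_max,max, the best single site (ties broken by
    # order, which here matches the paper's choice of S2,1,1).
    site = max(grid[region][lan], key=lambda s: len(grid[region][lan][s] & job))
    return region, lan, site

print(schedule(grid, job))       # -> ('R2', 'L2,1', 'S2,1,1')

# Transfer-time check with the table 1 bandwidths
# (intra-LAN 1000 MBS, inter-LAN 100 MBS, inter-region 10 MBS):
C = 100                          # uniform file size in MB
print(2 * C / 1000 + C / 100)    # hierarchical choice S2,1,1 -> 1.2 s
print(2 * C / 10)                # greedy choice S1,1,1 -> 20.0 s
```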

5. Simulation

OptorSim is used as the simulation tool to evaluate the performance of the proposed replication and scheduling algorithms. OptorSim was developed to represent the structure of a real European Data Grid [6]. The components of OptorSim are as follows, and are depicted in figure 3:

Resource scheduler: receives jobs from users and sends each to the best node according to the proposed algorithm.
Storage element (SE): a storage resource in the grid.
Computing element (CE): a computing resource in the grid.
Replica manager: controls data transfers at each node and provides a mechanism for accessing the data catalog.
Replica optimizer: implements the replica algorithm and controls file replication according to the proposed algorithm.

Figure (3) simulator architecture [5]

Based on the scheduling algorithm, the broker sends jobs to a node, and each job needs a list of files to run. Reducing file access time is the final objective of the optimization algorithms.

5.1 Simulation environment

Since the OptorSim structure is flat, its code was modified to implement the proposed hierarchical structure. There are 3 regions in our configuration, each with three nodes on average. Table 1 shows the bandwidth configuration (MBS stands for megabytes per second) and table 2 shows the job configuration.

Table 1: the bandwidth configuration

Intra-LAN link: 1000 MBS
Inter-LAN (intra-region) link: 100 MBS
Inter-region link: 10 MBS


Our configuration has 10 computing elements and 11 storage elements. The average storage element size is 30 GB, and a node with 150 GB of storage is used as the main node holding all initial files. Six types of jobs are defined based on their file access patterns; each job needs 15 files on average, each of 1 GB.


Table 2 shows the details. In this simulation, as in [2, 3, 4], it is assumed that data are read-only and valid all the time.

Table 2: job configuration

Number of job types: 6
Number of file accesses per job: 15
Size of a single file: 1 GB
Total size of files: 100 GB

6. Conclusion and future work
In this paper a 3-level hierarchical structure for dynamic file replication in data grids was proposed. When there is no space for a new replica, only those files are deleted, to make room, that have a low cost of re-transfer given the bandwidth between source and destination; that is, files still available in the local LAN are deleted first. Taking bandwidth into account when deleting leads to better performance than the plain LRU method. In contrast to the BHR algorithm, which considers 2 levels, the proposed 3-level structure performs better and is more realistic. From the job scheduling point of view, the proposed algorithm first selects the appropriate region (the one with the maximum requested files available), then the appropriate LAN in that region, and finally the appropriate site in that LAN; job execution time therefore decreases, since data transfer time is minimized. Overall, the simulation results with OptorSim show better performance (over 13%) compared to current algorithms.

5.2 Simulation Results
The algorithm has been evaluated by comparing it with LRU and BHR. In LRU, the files that have not been accessed recently are the candidates for deletion. Since these files may be needed in the future and are then probably only available in other regions, and since inter-region bandwidth is low, file transfer time increases and, as a result, so does job runtime. The BHR algorithm improves on this shortcoming by decreasing the probability of deleting files that have only one copy in the current region, so job runtime decreases in comparison with LRU. However, in BHR, files that are the only copy inside a LAN may still be deleted; if such files are needed later, they must be fetched from other LANs, even inside the same region, and since inter-LAN bandwidth is lower than intra-LAN bandwidth, file transfer time and hence job runtime increase. The 3-level hierarchical structure overcomes this shortcoming by considering the difference between intra-LAN and inter-LAN bandwidth, together with the scheduling method. Figure 4 shows the job runtimes of the 3 algorithms as the number of jobs varies, and figure 5 shows the runtimes of 1550 jobs for the 3 algorithms. When the available storage for replication is not enough, the proposed algorithm only replicates files that are not already available in the local LAN and, when making room, does not delete files with a high cost of re-transfer (i.e. overall low bandwidth between source and destination); therefore it performs better than LRU. When the available storage is large enough, or the bandwidth between source and destination is high, the algorithms have execution times close to each other, similar to LRU. Figures 6 and 7 show the total job time of our algorithm and LRU for varying inter-region bandwidth and SE size, respectively; as the inter-region bandwidth and the SE size increase, both algorithms converge. Overall, the simulation results with OptorSim show better performance (over 13%) compared to current algorithms.

Figure 4: mean job time for varying numbers of jobs (50-500), comparing 3LHA, BHR and LRU

Figure 5: runtimes of the 1550 jobs (3LHA: 337189, BHR: 387137, LRU: 398331)


Figure 6: total job time with varying inter-region bandwidth (10-100 MBS), comparing 3-Level and LRU

Figure 7: total job time (sec) with varying SE size, comparing 3-Level and LRU

References
[1] Ian Foster, Carl Kesselman: The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 2004.
[2] Sang-Min Park, Jai-Hoon Kim, Young-Bae Ko: Dynamic Grid Replication Strategy Based on Internet Hierarchy, Lecture Notes in Computer Science, Vol. 3033 (Grid and Cooperative Computing), Springer, 2004, pp. 838-846.
[3] Ruay-Shiung Chang, Jih-Sheng Chang, Shin-Yi Lin: Job Scheduling and Data Replication on Data Grids, Future Generation Computer Systems, Vol. 23, No. 7, August 2007, pp. 846-860.
[4] Nhan Nguyen Dang, Sang Boem Lim: Combination of Replication and Scheduling in Data Grids, IJCSNS International Journal of Computer Science and Network Security, Vol. 7, No. 3, March 2007.
[5] OptorSim - A Replica Optimiser Simulation. http://grid-data-management.web.cern.ch/grid-data-management/optimisation/optor/
[6] The DataGrid Project. http://www.eu-datagrid.org
[7] William H. Bell, David G. Cameron, Luigi Capozza, A. Paul Millar, Kurt Stockinger, Floriano Zini: Simulation of Dynamic Grid Replication Strategies in OptorSim, Lecture Notes in Computer Science, Vol. 2536 (Grid Computing - GRID 2002), Springer, 2002, pp. 46-57.
[9] Andrea Domenici, Flavia Donno, Gianni Pucciani, Heinz Stockinger, Kurt Stockinger: Replica Consistency in a Data Grid, Nuclear Instruments and Methods in Physics Research Section A, Vol. 534, No. 1-2, November 2004, pp. 24-28.
[10] Houda Lamehamedi, Zujun Shentu, Boleslaw Szymanski: Simulation of Dynamic Data Replication Strategies in Data Grids, Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS 2003), April 2003.
[11] Ming Tang, Bu-Sung Lee, Xueyan Tang, Chai-Kiat Yeo: The Impact of Data Replication on Job Scheduling Performance in the Data Grid, Future Generation Computer Systems, Vol. 22, No. 3, February 2006, pp. 254-268.
