Distributed Memory Management:
Design Issues and Future Trends
Ehab S. Al-Shaer
College of Computer Science
Northeastern University
Boston, MA 02115
March 19, 1993
ABSTRACT
In recent times the shared memory paradigm has received considerable attention in the realm of distributed systems. Distributed Shared Memory (DSM) is the abstraction for supporting the notion of shared memory in a physically non-shared (distributed) architecture. Issues to be addressed in the design and implementation of DSM include maintaining the consistency of the shared data across the network without incurring high overhead and integrating the DSM mechanisms with the local memory management.
The consistency model provided by a given DSM implementation attempts to balance performance and ease of programming: while DSM provides the abstraction of shared memory, it is not true shared memory - both from the point of view of the semantics and the cost of shared memory access.
The focus of this paper is to identify the issues involved in the design of DSM systems, briefly highlight the mechanisms in use by some current DSM implementations and propose some new DSM models for future distributed systems.
I INTRODUCTION
As computers become cheaper, there is increasing interest in using multiple CPUs to speed up individual applications. There are basically two design approaches to achieve this goal of high performance at low cost: multiprocessors and multicomputers.
Multiprocessors contain physical shared memory; processors in a multiprocessor can easily communicate by reading and writing words in this memory. Multicomputers, on the other hand, do not contain physical shared memory; processors communicate by exchanging messages.
To combine the advantages of multiprocessors (easy to program) and multicomputers (easy to build), communication paradigms that simulate shared data on a multicomputer have become popular. These mechanisms are implemented with message passing but they provide the illusion of shared data, namely processors in a multicomputer communicating through distributed shared memory.
A Distributed Shared Memory (DSM) is a memory space that is logically shared by processes running on computers connected by a communication network. While such an organization exists in shared memory multiprocessors, in the domain of distributed systems it is unusual. Most existing distributed systems are structured as a number of processes with independent address spaces. These processes communicate via some form of Inter-Process Communication (IPC) system, typically message passing or remote procedure call. In a DSM system data sharing is supported directly: processes communicate with each other by reading and modifying shared directly-addressable data.
A DSM can be a flat and paged virtual address space, a segmented single level store, or even a physical address space.
II WHY DISTRIBUTED SHARED MEMORY?
A distributed system can be viewed as a group of computers cooperating with each other to achieve some goal. These computers are autonomous, in that each computer has an independent flow of control, and different computers have distinct address spaces.
They communicate by sending and receiving messages. An important characteristic of cooperation is state sharing. Unfortunately, message passing primitives do not support data sharing directly. Data sharing can be simulated by implementing the shared data in a dedicated process and operating on the data by sending predefined operations to this process. Other methods may involve moving data around explicitly using message passing primitives.
Special care must be taken to maintain consistency if a piece of data is replicated.
As more experience is gained with message passing programming, it has been found that having to move data back and forth explicitly within programs puts a significant burden on application programmers. Remote Procedure Calls (RPC) were introduced to provide a procedure-call-like interface. Because the "procedure call" is performed in a separate address space, it is difficult for the caller to pass context-related data or complicated structures, i.e., parameters must be passed by value. RPC can be viewed as a "poor man's" version of shared memory, because the semantics are basically those of shared memory, with limitations imposed by implementation constraints.
A shared memory space provides direct support for data sharing: the mapping of shared data to a shared memory space is natural; thus, the question of extension to distributed settings arose. Ideally, processes on each node should be able to access the
same address space with fetch and store operations. However, because the latency involved in communication through the network is high, simple implementation of the fetch and store as remote operations to a shared memory server is not attractive.
"Latency" represents a speed ratio between remote access and local access: if the value of this ratio is large, the mismatch must be remedied for adequate performance. Such a mismatch, albeit a generally smaller one, exists in shared memory multiprocessors. Thus we look to shared memory multiprocessor architectures for inspiration.
Shared Virtual Memory
Shared virtual memory is a model of distributed shared memory. It is a single address space shared by a number of processors. Any processor can access any memory location in the address space directly. This address space is divided into pages, which are distributed among the processes (see Figure). The system provides a coherent address space: a read operation always returns the value of the most recent write to the same address.
Mutual-exclusion synchronization can be implemented by locking pages.
III DISTRIBUTED SHARED MEMORY DESIGN ISSUES
Memory Coherence in Shared Virtual Memory Systems
Shared Virtual Memory
The shared virtual memory described in this paper provides a virtual address space that is shared among all processors in loosely coupled distributed-memory multiprocessor systems. Application programs can use the shared virtual memory just as they would a traditional virtual memory, except, of course, that processes can run on different processors in parallel. Memory mapping managers implement the mapping between the local memories and the shared virtual memory address space. The main difficulty in building shared virtual memory is solving the memory coherence problem.
Solving this problem is a chief responsibility of memory mapping managers.
Memory Coherence Problem
A memory is coherent if the value returned by a read operation is always the last value written to the same address. The memory coherence problem was first encountered when caches appeared in uniprocessors and became more complicated with the introduction of "multi-caches" for shared memories in multiprocessors. However, the memory coherence problem in shared virtual memory is different from that in multi-cache systems, because shared virtual memory on loosely coupled multiprocessors has no physically shared memory as multi-cache systems do, and the communication cost between processors is nontrivial. The design of shared virtual memory is greatly influenced by the strategies for maintaining coherence. In the following sections, we will discuss these strategies in detail.
Memory Coherence Strategies
There are many different techniques for memory coherence. These techniques are classified by the way in which one deals with Page Synchronization and Page
Ownership.
Page Synchronization: There are two basic approaches to page synchronization: invalidation and write-broadcast. In the invalidation approach, there is a single owner of each page and only the owner can read and write to it (a sketch of the corresponding fault-handling logic follows the lists below). If a write-fault to a page occurs:
• all copies of the page are invalidated.
• the access of the page is changed to write.
• a copy of the page is moved to the requesting processor if it does not have one already. This processor is now the owner of the page and can proceed in reading and writing.
• return to the faulting instruction.
If a read-fault to a page occurs :
• the access of the page is changed to read on the processor that has write access to it.
• a copy of the page is moved to the requesting processor and access is set to read.
• return to the faulting instruction.
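The fault-handling steps above can be made concrete with a small sketch. This is a single-process Python simulation written for this survey, not code from any cited system; the Page structure, the access modes and the way ownership state is kept in one place are all simplifying assumptions.

    # Minimal simulation of the invalidation approach: one owner per page, read
    # copies are invalidated on a write fault, the writer is downgraded on a read fault.
    NONE, READ, WRITE = 0, 1, 2

    class Page:
        def __init__(self):
            self.owner = 0              # processor currently owning the page
            self.copy_set = {0}         # processors holding a copy
            self.access = {0: WRITE}    # access mode per processor

    def write_fault(page, proc):
        for p in list(page.copy_set):   # invalidate all other copies
            if p != proc:
                page.access[p] = NONE
        page.copy_set = {proc}
        page.owner = proc               # requester becomes the owner ...
        page.access[proc] = WRITE       # ... and gets write access
        # the faulting instruction would now be retried

    def read_fault(page, proc):
        page.access[page.owner] = READ  # downgrade the current writer to read access
        page.copy_set.add(proc)         # ship a read copy to the requester
        page.access[proc] = READ

    if __name__ == "__main__":
        pg = Page()
        read_fault(pg, 1)    # processors 0 and 1 now both hold read copies
        write_fault(pg, 2)   # copies on 0 and 1 are invalidated; 2 owns the page
        print(pg.owner, sorted(pg.copy_set))   # -> 2 [2]

In a real system the copy-set and ownership records live on different machines; how that bookkeeping is distributed is exactly the management question taken up later.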
In the write-broadcast approach, the only difference is that a write fault leads to an update of all copies of the page. However, this requires special hardware support to provide this functionality efficiently. Because the algorithms using write-broadcast do not seem practical for loosely coupled multiprocessors, they are not considered further in this paper.
Page Ownership: It can be of two types. First is fixed ownership, in which a page is owned by one processor and others cannot write to it without incurring a page fault. The other is dynamic ownership, a scheme in which ownership of a page can be dynamically assigned to a processor by a distinct "page manager", which can be either centralized or distributed. Fixed ownership constrains the desired modes of parallel computation. Thus we only consider dynamic page ownership. This paper focuses its discussion on algorithms which use invalidation and are based on dynamic ownership. Discussion of the different management models is deferred to the section on Management of Distributed Shared Memory.
Coherency Classes and Memory Consistency
Many different classifications of coherency have been proposed and described in the literature. Choosing an appropriate memory consistency model (MCM) is a tradeoff between minimizing memory access order constraints and the complexity of the programming model - as well as the complexity of the memory model itself [Mos93].
Before proceeding we must distinguish the parallel concepts of coherency and consistency because they tend to be used interchangeably in the literature. Coherence is the concept of correctness: coherence models deal with assurance that correct data is available for access. Consistency applies to the ordering of events: consistency models address agreement among two or more separate entities regarding the order of events which are significant to those entities. This distinction is important because, in a sense, the abstract notion of coherence is beyond control in a truly distributed system.
Consistency, on the other hand, can be controlled. The parallel nature of the concepts
allows us to provide a certain level of coherence by implementing a certain level of consistency (e.g. absolute coherence can be provided by implementing atomic consistency which guarantees global ordering of events - because everyone agrees on the point at which a certain event occurred, everyone can be certain that causal relationships involving an event are reflected in subsequent events). Because we can now speak of coherence in terms of consistency, we will confine our discussion to consistency models, relating them to their analogous coherence models only when necessary.
Research in memory consistency models is important and interesting because it leads to better performance by reducing the latency associated with memory accesses and minimizing the network message traffic [Kel92]. The goal of memory consistency research is to present a model as close as possible to that exhibited by sequential machines, the so-called Sequential Consistency model (see Figure). The pure Sequential
Consistency model is simple to program for but severely restricts the set of possible optimizations to reduce the high latency of memory access imposed by multiprocessor systems. It would, for example, be beneficial to pipeline write accesses and to use write buffering. Simulations have shown that weaker consistency models allowing such optimization could improve performance on the order of 10 to 40 percent over a strictly sequential model [Mos93]. However, the programming model becomes more restricted and complicated as the consistency model becomes weaker.
In short, weaker memory consistency models can have a positive effect on the performance of parallel shared memory machines. The benefit of weaker models increases as memory latency increases. Therefore, we expect that in the near future most parallel machines will be based on consistency models significantly weaker than the sequential consistency model. In the rest of this section we will discuss most of the proposed memory consistency models. Note that all these models deal with memory accesses that are shared, competing and synchronized.
Memory Consistency Models
For the sake of simplicity, we will give the definition and some explanation of each consistency model without dealing with implementation details.
Strong Consistency: Also known as Atomic Consistency [Mos93], this model requires every read instruction to return the last value written to the same address. Formally, this implies a global ordering of all read/write events, as is the case in shared memory and shared bus systems [Bor91]. The operation intervals are divided into non-overlapping consecutive slots [Mos93].
Sequential Consistency: A formal definition of Sequential Consistency given by
Lamport [Bor91] is as follows:
"A system is sequentially consistent if the result of any execution is the same as if the operations of all processors were executed in the same sequential order, and the operations of each individual processor appear in the order specified by its program."
Thus every process has to observe the same sequence of events. In contrast to strong consistency, a read access may not return the last value written. For multiprocessor systems without physical shared memory, the sequential ordering is an overly restrictive memory model [Moh91][Bor91]. The weak consistency model described below has been proposed as an alternative to sequential consistency [Moh91].
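A standard two-processor example (ours, not the paper's) makes the distinction concrete. Let x and y be shared variables, both initially zero, and let each processor write one variable and then read the other:

    P_1:\ x \leftarrow 1;\ r_1 \leftarrow y \qquad\qquad P_2:\ y \leftarrow 1;\ r_2 \leftarrow x

Under sequential consistency some interleaving of the four operations must explain the outcome, so at least one of r1, r2 must be 1; the result r1 = r2 = 0 is impossible. A machine that buffers writes and lets each read bypass its own pending write can produce r1 = r2 = 0, which is why the write-buffering optimizations mentioned above are ruled out by a strictly sequential model.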
Causal Consistency: A memory is causally consistent if all processes agree on the order of causally related events. Causally unrelated events (concurrent events) can be observed in different orders. Causal consistency, proposed by Ahamad and Hutto [Moh91][Ana92], is mostly of theoretical interest because it is strict and hard to implement [Mos93].
Processor Consistency: The motivation for this model is to allow pipelining of write accesses. Pipelining results in potentially delaying the effect of a write. It therefore relaxes some of the ordering constraints on writes. Writes by a single process are still performed in FIFO order. An interesting effect allowed by processor consistency is that read accesses can "overtake" write accesses if they are performed to different locations [Mos93].
Slow Memory: Slow Memory requires that all processors agree on the order of observed writes to each location by a single processor. Slow memory consistency does not appear to have any practical significance [Mos93].
Weak Consistency: A memory system is weakly consistent if it enforces the following restrictions:
• accesses to synchronization variables are sequentially consistent, and
• no access to a synchronization variable is issued in a processor before all previous data accesses have been performed, and
• no access is issued by a processor before a previous access to a synchronization variable has been performed.
At the time a synchronization access is performed, all previous accesses to that location are guaranteed to have been performed and future accesses are guaranteed not to have been performed [Mos93]. A good example of weak consistency is presented in [Bor91].
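The rules can be pictured with a toy model in which ordinary writes are buffered and a synchronization access first forces every pending access to be performed. The class and queue below are our own illustrative construction, not part of any cited implementation.

    # Toy model of weak consistency: ordinary writes may be delayed, but a
    # synchronization access performs all previously issued data accesses first.
    class WeaklyConsistentNode:
        def __init__(self, memory):
            self.memory = memory    # shared backing store (a dict) in this toy model
            self.pending = []       # ordinary writes not yet performed

        def write(self, addr, value):
            self.pending.append((addr, value))   # ordinary access: may be delayed

        def sync(self):
            # no synchronization access issues before all previous data accesses perform
            for addr, value in self.pending:
                self.memory[addr] = value
            self.pending.clear()
            # the synchronization accesses themselves must be sequentially consistent;
            # a real system would arbitrate the synchronization variable globally here

    if __name__ == "__main__":
        shared = {}
        node = WeaklyConsistentNode(shared)
        node.write("x", 1)
        node.write("y", 2)
        assert "x" not in shared   # data accesses may still be buffered
        node.sync()
        assert shared == {"x": 1, "y": 2}
        print("pending accesses performed at the synchronization point")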
Release Consistency: The Release Consistency model requires shared memory accesses to be labeled as either ordinary or special. Within the special category, accesses are divided into those labeled as synch and non-synch accesses. Synch accesses are further subdivided into acquires and releases. The definition is as follows [Kel92]:
• Before an ordinary access is allowed to perform with respect to any other processor, all previous acquires must be performed.
• Before a release is allowed to perform with respect to any other processor, all previous ordinary reads and writes must be performed.
• Special accesses are sequentially consistent with respect to one another.
This can be implemented by the Lazy Release Consistency algorithm (LRC)
[Kel92][Bor91] in which all updates and invalidations are postponed until visibility of the newly written values is required [Bor91]. LRC does not make modifications globally visible at the time of release. Instead, it guarantees only that a processor that acquires a lock will see all modifications that precede the lock acquisition [Kel92]. This not only reduces the update and invalidation rate, but also allows multiple concurrent reads from a page that is being written. It also permits multiple writes to the same page and thus provides a solution to the "false sharing" and thrashing problems [Bor91]. More discussion of these problems is presented in the next section.
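The lazy propagation idea can be sketched as follows; this is a deliberately simplified toy of our own in which write notices simply travel with the lock, not a reproduction of the algorithm in [Kel92].

    # Toy model of lazy release consistency: modifications made while holding a lock
    # become visible to another node only when that node acquires the same lock.
    class Lock:
        def __init__(self):
            self.write_notices = []   # (addr, value) pairs pending for the next acquirer

    class LrcNode:
        def __init__(self):
            self.view = {}            # this node's current view of the shared data

        def acquire(self, lock):
            # apply the modifications that precede this acquisition (the LRC guarantee)
            for addr, value in lock.write_notices:
                self.view[addr] = value
            lock.write_notices.clear()

        def write(self, addr, value, lock):
            self.view[addr] = value
            lock.write_notices.append((addr, value))   # recorded, not broadcast

        def release(self, lock):
            pass   # nothing is sent eagerly; the notices wait for the next acquire

    if __name__ == "__main__":
        lock = Lock()
        a, b = LrcNode(), LrcNode()
        a.acquire(lock); a.write("x", 42, lock); a.release(lock)
        assert "x" not in b.view    # no eager update or invalidation reaches b
        b.acquire(lock)
        assert b.view["x"] == 42    # visibility arrives with the lock
        print("write became visible at acquire time")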
Concurrent Writer Protocols
One advantage of weak memory coherence is that it allows concurrent read and write access to a page rather than the serialized concurrent read access or single write access imposed by the strong memory coherence models [Bor91].
Multiple Reader and Single Writer Protocol: With this protocol we permit exactly one write copy but multiple read copies per page. This protocol works by having one master copy of the page held by a single process which is considered the owner of the page. The master copy is the only copy that is allowed to be modified (write access). Other copies are marked as read-only pages. Modification by the owner is invisible to others until they request to read the new data. When the writer (owner) updates the page (master copy) it invalidates all other copies of the same page. When a reader makes a read access on an invalidated page, a page fault occurs and the page manager (centralized or distributed) asks the owner of the page to send a copy to the requesting reader. If write access is requested, the page manager asks the owner to send the page and transfer ownership of the page to the requesting processor. The previous owner marks its copy of the page as read-only.
Multiple Reader and Multiple Writer Protocol: This protocol attacks the problem of concurrent writes to one page. The page owner maintains a primary copy which contains the most recent version of a given page. To keep the master copy up-to-date, modifications must be propagated to the page owner via the network. A write by the page owner is directly executed on the master copy. If a remote update to a page occurs, the data is written to the local copy by the requesting processor and an update message is sent to the master copy (write-through). Other nodes which may be reading a local copy of the page are not notified of changes until the master copy is updated. This model allows pipelining multiple concurrent writes at the master copy and thus the potential for a single invalidation and update to propagate many changes. This model does not require ownership to migrate, because the ownership is no longer associated with write access
[Bor91].
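A sketch of this protocol follows, again under simplifying assumptions of our own: each node holds a local copy, the owner holds the master copy, writes are applied locally and forwarded to the master (write-through), and readers see the change only when they re-fetch.

    # Toy model of the multiple-reader/multiple-writer protocol with a master copy.
    class MasterCopy:
        def __init__(self):
            self.data = {}

        def apply(self, addr, value):
            self.data[addr] = value   # updates from all writers are serialized at the owner

    class Node:
        def __init__(self, master):
            self.master = master
            self.local = dict(master.data)   # this node's copy of the page

        def write(self, addr, value):
            self.local[addr] = value         # immediate local effect
            self.master.apply(addr, value)   # write-through update to the master copy

        def refresh(self):
            self.local = dict(self.master.data)   # readers see changes when they re-fetch

    if __name__ == "__main__":
        master = MasterCopy()
        writer, reader = Node(master), Node(master)
        writer.write("x", 7)
        assert "x" not in reader.local   # the reader's copy is stale until refreshed
        reader.refresh()
        assert reader.local["x"] == 7
        print("update reached the reader via the master copy")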
Cache Coherence Problem
Caching is a potential solution to the serious delay on fetch and store operations in multiprocessor systems. Unfortunately, multiple processor caching may cause several copies of an object to coexist in different caches. When the data is changed, we risk reading an old copy. A cache coherence protocol is needed to ensure that we will always read a valid copy. Because a cache has a finite size, it is also important to have a cache replacement policy. There are three basic approaches to cache coherence:
Snooping Cache: Snooping cache depends on the existence of some communication medium with broadcast capability. Each cache is required to monitor the shared bus for memory transactions initiated by other processors to maintain the coherence of its own data [Tam90]. The page owner can access the page with no fault signal. But for non-owner accesses, if the page is invalid, a fresh copy is fetched and an invalidation signal is sent to the other caches. The dirty blocks are written back to the main memory. A good example of this scheme is the Berkeley Protocol [Tam90].
Directory Based: The directory based approach depends on having a directory of memory blocks in the main memory. Whenever a cache miss occurs, the request is directed to this directory, which has information about ownership, the copy-set and a valid bit.
On a read miss, the requesting processor can take the block directly from the memory if the bit is valid; otherwise it must wait until the block is validated by writing it back to the memory from the modifying cache. In case of a write miss, the copy-set information is used to invalidate other copies. The copy-set records which caches have a copy of the block. It is updated after each block access [Tam90].
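The directory entry and the two miss cases can be sketched as below; the field names, and the use of callbacks in place of real invalidation and write-back messages, are illustrative assumptions.

    # Toy directory-based coherence: each block has a valid bit, an owner and a
    # copy-set; the copy-set tells the directory exactly which caches to invalidate.
    class DirectoryEntry:
        def __init__(self):
            self.valid = True       # main memory holds the latest value
            self.owner = None       # cache holding a dirty copy, if any
            self.copy_set = set()   # caches holding copies of the block

    def read_miss(entry, cache_id, write_back):
        if not entry.valid:
            write_back(entry.owner)   # wait for the modifying cache to write back
            entry.valid = True
            entry.owner = None
        entry.copy_set.add(cache_id)

    def write_miss(entry, cache_id, invalidate):
        for c in list(entry.copy_set):
            if c != cache_id:
                invalidate(c)         # invalidate every other recorded copy
        entry.copy_set = {cache_id}
        entry.owner = cache_id
        entry.valid = False           # memory is stale until the owner writes back

    if __name__ == "__main__":
        entry, messages = DirectoryEntry(), []
        read_miss(entry, 1, messages.append)
        write_miss(entry, 2, messages.append)   # invalidates cache 1
        read_miss(entry, 3, messages.append)    # forces cache 2 to write back first
        print(messages)                         # -> [1, 2]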
Lock Based: In this approach a request to read or write an item is preceded by a lock-request for that item and succeeded by a lock-release [Moh91]. A comparison of these three schemes is given in [Moh91]. Moreover, a good discussion of implementation issues can be found in [Fle89].
Management Of Distributed Shared Memory
The management strategy of the DSM cache is critical to achieving good performance in a DSM system and affects the scalability of the system
[Tam90][Li86][Zho92][Bor91][Moh91][Lev92][Ana92]. DSM is most efficient when implemented in conjunction with, and as part of, the conventional virtual memory mechanism of the machine [Tam90][Ana92]. Because DSM fault-handling may require communication with another machine through a possibly low-bandwidth network connection, it can add significantly to the cost of a VM page fault. Therefore it is very important that a given DSM page be located quickly [Lev92][Ana92]. The DSM page manager is responsible for keeping track of the current owner of each shared page, for locating and fetching the page in response to a VM page fault, and for communicating that page to other machines as necessary [Lev92][Li86]. The implementation of the
DSM manager has broad implications both for the usability and for the scalability of the system. The management of DSM may be either centralized or distributed. In a centralized management scheme [Tam90][Lev92][Li86], there is a central page manager which is known to all processes using shared memory. The central manager tracks the location of each shared page and grants access to a faulting process according to the coherency scheme in use. When a read fault occurs on a shared page, the faulting processor sends a read-request to the page manager. If the page manager is currently holding the latest version of the page, it simply sends a copy to the requester. If some other processor is currently in possession of the latest version, the page manager forwards the request to that processor. At this point the page manager may also record
the fact that a new processor now has a read-copy of the page (to facilitate invalidation or update on a point-to-point network, for example).
Write faults on a shared page are handled in a similar manner. A write-request is sent to the page manager, which arranges for the faulting host to receive the page. The page manager records the requesting host as the current holder of the latest version of the page. At this point, depending on the consistency scheme employed, all read-copies of the page may have to be invalidated (the central page manager may do this as part of processing the write-request or it may be left to the local VM manager on the page's new host).
Distributed management [Tam90][Moh91][Lev92][Ana92][Li86][Zho92][Bor91] may be either fixed or dynamic. The fixed distributed scheme simply spreads the function of the central page manager across all machines using the DSM. Every processor is allocated a predetermined subset of pages to manage in a centralized manner.
When a page fault occurs, a mapping function provides the identity of the page manager and the request proceeds as in the centralized case.
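The mapping function itself can be as simple as a modulus over the processor count; the sketch below assumes that convention purely for illustration.

    # Fixed distributed management: each processor manages a statically determined
    # subset of the shared pages, located by a mapping function known to every node.
    NUM_PROCESSORS = 8

    def manager_of(page_number: int) -> int:
        return page_number % NUM_PROCESSORS   # any fixed mapping known to all nodes will do

    def handle_fault(page_number: int, faulting_processor: int) -> str:
        manager = manager_of(page_number)
        # from here the request proceeds exactly as in the centralized case,
        # except that the manager differs from page to page
        return (f"processor {faulting_processor} sends request for page "
                f"{page_number} to manager {manager}")

    if __name__ == "__main__":
        print(handle_fault(41, 3))   # page 41 is managed by processor 1 (41 % 8)
        print(handle_fault(16, 5))   # page 16 is managed by processor 0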
Dynamic distributed management [Tam90][Li86] does away with the manager concept altogether. Two variants of dynamic distributed management are possible: broadcast and hinting. In the broadcast variant, each processor keeps track of only those pages it currently possesses. When a read fault occurs, a broadcast-read-request is sent to all other processors. The current holder of the page replies with a copy and records that a new processor now holds a read-copy of the page. When a write fault occurs, a similar broadcast-write-request is sent to all processors. The current holder releases control of the page to the requester, sending both a copy of the page and the list of copy-holders.
Upon receiving the list, the new owner of the page invalidates all the copies.
Distributed management using hints [Tam90][Li86] is the most flexible of all these schemes. The VM page table of each processor is extended with a field indicating the probable current owner of each shared page. The entry in this field is a "hint": it may not always be correct, but it does provide the first link in a chain of owners the processor can follow to locate the page. If a processor receives a request for some page it does not own, it forwards the request according to the hint in its page table. These hints are updated under the following conditions:
• the processor receives an invalidation request
• the processor relinquishes ownership to another processor
• the processor receives a page
• the processor forwards a page request
When the processor receives an invalidation message, it knows that ownership of a page has been transferred and that the new owner sent the message. When a processor receives a page after faulting, it either becomes the owner of the page or knows that the current owner sent the page. Finally, when a processor forwards a request, it knows the requester will either become the new owner or will know exactly who the owner is upon receiving the page.
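The hint-chasing and hint-updating behaviour can be sketched as follows. This is a simplification of the dynamic distributed manager of [Li86] written for this paper: real requests and replies are network messages, whereas here the chain is walked directly.

    # Toy model of ownership hints: a request follows each node's "probable owner"
    # entry until the true owner is found; hints are updated along the way.
    class Processor:
        def __init__(self, pid, prob_owner):
            self.pid = pid
            self.prob_owner = prob_owner   # page -> probable owner (a hint)
            self.owned = set()             # pages this processor actually owns

    def locate_and_take_ownership(processors, page, requester):
        hops = 0
        current = processors[requester].prob_owner[page]
        while page not in processors[current].owned:
            next_hop = processors[current].prob_owner[page]
            processors[current].prob_owner[page] = requester   # hint update on forward
            current = next_hop
            hops += 1
        # transfer ownership and point the old owner's hint at the new owner
        processors[current].owned.discard(page)
        processors[current].prob_owner[page] = requester
        processors[requester].owned.add(page)
        processors[requester].prob_owner[page] = requester
        return current, hops

    if __name__ == "__main__":
        # stale hints: 0 thinks 1 owns page 7, 1 thinks 2 does, and 2 really does
        procs = {0: Processor(0, {7: 1}),
                 1: Processor(1, {7: 2}),
                 2: Processor(2, {7: 2})}
        procs[2].owned.add(7)
        old_owner, hops = locate_and_take_ownership(procs, page=7, requester=0)
        print(old_owner, hops)          # -> 2 1: found after one forwarding hop
        print(procs[1].prob_owner[7])   # -> 0: the stale hint now points at the new owner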
Centralized management schemes, in general, do not scale well. Centralized DSM requires up to two messages to locate a page and one to copy it to the requester. In a point-to-point environment, a transfer of ownership also requires one invalidation message for each read-copy that exists. The host processor running the page manager may become a bottleneck in a large system.
Fixed distributed management requires the same message traffic as does the centralized scheme. It scales better because the single point of congestion is eliminated, but suffers from the problem of finding a good static allocation of pages to each manager when used in an inter-networked environment.
Broadcast dynamic management [Li86] is more of a curiosity than a reasonable idea. It is absolutely dependent on some type of broadcast or multicast facility in the connecting network because it requires N messages to process every page fault among N processors. It does offer atomic coherence (implicitly, because only one processor can access a page at any given time) and a simple programming model, but suffers from terrible performance when used with more than a few processors.
Dynamic management using hints offers the best scalability. It has been shown [Tam90][Li86] that the performance of this scheme does not degrade as processors are added; rather, it degrades only logarithmically with the number of processors in contention for a given page. In the worst case, locating the owner of a page K times among N processors, where P processors are sharing the page, requires O(N + K log P) messages. On average log(P) messages are required to locate a given page. Periodic broadcasts of current location information can be used to improve the average case.
IV IMPLEMENTATION ISSUES
To be efficient, a Distributed Shared Memory system must be integrated with the processor's local virtual memory subsystem. DSM then becomes simply another level of the storage hierarchy that must be managed. The distributed nature of DSM, however, complicates the normal handling of virtual memory by placing new constraints on fault-handling, on page replacement policies and on the choice of VM page size.
Fault-Handling
Fault-handling in a conventional virtual memory system is fairly simple: the handling code discovers the reason for the fault, e.g. the page is not in RAM or a write was attempted to a read-only page, and takes appropriate action to remedy the situation, e.g. scheduling the page to be transferred from backing storage or signaling the faulting task to terminate. Shared memory in such a system is easy to implement and costs little in terms of fault-handling overhead. DSM, however, adds a whole new layer of complexity to the fault-handler. The fault-handling code must now be able to decide whether a missing page is in local backing storage or held remotely by another machine.
Messages may need to be sent to locate and retrieve the page; or to update or invalidate other copies of the page. Information on the current owner of the page or about which machines are holding copies may need updating. The fault-handler needs to interact with the network protocol stacks as well as with the local paging daemon (the local page daemon can be modified to provide for remote paging as well as local, however, outgoing coherency messages are most efficiently sent directly from the fault-handler.
Additionally, incoming coherency messages must be delivered to the VM system so that page information can be updated). Although these issues arise only for shared pages, the code for dealing with them is non-trivial and potentially time-consuming. If the goal of the system is to provide seamless shared memory support without regard to local/remote sharing, the DSM code will be executed for every fault on a shared page and must therefore be very fast.
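A skeleton of such a handler's dispatch logic is shown below; the shape of the page-table entry and the action strings are assumptions made for illustration, and a real handler would of course issue messages and block rather than return strings.

    # Skeleton of a DSM-aware page-fault handler: the extra work, compared with a
    # conventional VM system, is deciding whether the page is local or remote and
    # which coherency traffic the fault implies.
    from dataclasses import dataclass

    @dataclass
    class PageTableEntry:
        present: bool
        writable: bool
        shared: bool          # page belongs to the DSM segment
        on_local_disk: bool   # page is in local backing storage

    def handle_fault(pte: PageTableEntry, is_write: bool) -> str:
        if pte.present and is_write and not pte.writable:
            # for a shared page this is a coherency event, not a protection error
            return ("send write-request and invalidations" if pte.shared
                    else "signal protection fault to the task")
        if not pte.present:
            if pte.shared and not pte.on_local_disk:
                return "locate the current owner and request the page over the network"
            return "schedule a page-in from local backing storage"
        return "spurious fault: retry the instruction"

    if __name__ == "__main__":
        remote = PageTableEntry(present=False, writable=False, shared=True, on_local_disk=False)
        local = PageTableEntry(present=False, writable=False, shared=False, on_local_disk=True)
        print(handle_fault(remote, is_write=False))
        print(handle_fault(local, is_write=False))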
Page Replacement
In a uni- or multi-processor system, dirty (i.e. modified) shared pages are candidates for replacement along with all others. They are copied out to local disk storage and will be paged back in when needed. Shared pages in a distributed system present a dilemma because the next access to the page may be remote. If there is a designated page manager (i.e. a centralized or fixed-distributed scheme) a dirty page may be relinquished and sent back to the page manager for storage. This puts the burden of backing storage on the machine executing the page manager but may result in inefficiency when a temporarily overloaded machine is actively using the page. Dynamic management systems, which do not use page managers as such, have no choice but to copy the page to local disk storage. Broadcasting the modified page to the copy-holders before overwriting it is not an option because the modified page may be the only copy in existence at that particular time (remember that read-only copies held by other processors may vanish at any time in accordance with their own local VM management).
Page Size
The page size of a DSM implementation is of considerable importance because it affects both the size of messages (needed for copying) and the potential for false sharing of different data structures co-located on the same page. It is not always obvious what the right size for a page is. Large pages are more efficient because they minimize fault-handling overhead; however, false sharing is more likely with large pages because compilers, etc. generally are not aware that the resulting program may execute in a DSM environment. Small page sizes minimize false sharing but invite many page faults and may clog the network with large numbers of coherence messages. Network latency must be considered in choosing a good page size - the utility of the DSM system is in direct proportion to its speed: slow or overloaded networks are a major problem. In addition, we must consider different types of machines which may be connected to the network - once we agree that DSM is useful, we do not want to have implementations restricted only to networks of homogeneous machines.
Page Size for Heterogeneous Implementations
The last point - integrating heterogeneous machines - is a topic of considerable research. It is not currently known whether better performance results from having a single canonical DSM frame size (which may require multiple native VM pages) or from having special purpose mechanisms to allow machines with identical frame sizes to exchange native pages. This has implications for memory coherence as well as performance because, if native exchanges are allowed, multiple pages must be sent to fill a request from a machine with a larger frame size. The problem is that the requester is expecting a coherent logical page and may in fact receive several inconsistent smaller pages. For the case in which the necessary small pages are all resident at a single host but may not be contiguous in memory, this problem could be solved by using network interfaces with scatter/gather capabilities or by assembling the pages in proper order in a buffer before transmission. However, if small pages are allowed to migrate among homogeneous machines, the expense of reassembling them into large pages may destroy the utility of sharing. Research into this question is still ongoing.
Task Synchronization in the DSM Environment
An important consideration in a DSM implementation is whether to provide separate synchronization primitives along with the shared memory. This seemingly minor detail can dramatically affect performance according to the coherence and programming model the DSM is to provide. In general, programming for a distributed shared memory is subject to the same cautions as programming for a uni-processor with shared memory; the same shared-memory programs that waste cycles on a uni-processor
(e.g. spin-locked data structures) cause DSM pages to ping-pong through the connecting network. But DSM has yet another pitfall: strong coherence models provide uniprocessor shared memory semantics, and with them, the temptation to use shared memory for task synchronization through mutexes and semaphores, etc. This natural use of shared memory, in the presence of DSM, results in page ping-pong (see Figure) as concurrent tasks on different machines try to access the same memory locations.
Program performance will be miserable and, worst of all, innocent processes will suffer longer network delays. Weaker coherence strategies require some form of explicit synchronization apart from the shared memory, but they present a more complex programming model to the user. The ideal distributed programming model is transparent
- a distributed system should execute the same binaries as a uni-processor. Failing that, the programmer should be able to write code for a uni-processor and be able to move programs to multi or distributed processor systems with, at most, a simple recompilation.
Above all, a multiple processor system should not degrade the performance of a program compared to execution on a uni-processor. The answer to this may lie in smarter compilers which automatically generate the synchronization messages (writing correct programs is tough enough already without having to worry about where your shared memory is coming from).
V HARDWARE SUPPORT FOR DSM
Special hardware support may be necessary to efficiently implement certain coherence strategies. The write-broadcast approach requires that copies be updated on every write access to a shared page. This requires generating a page fault on every write access, sending update messages and returning to the write instruction. The multiple-writer update propagation scheme requires the same gyrations be performed on every write access. Efficient implementation of either scheme requires 1) hardware that can skip a faulting write cycle, or 2) local DSM access to the physical address so the
DSM system can perform the actual write to local memory during the processing of the fault [Li86][Bor91].
The performance of DSM is extremely dependent on the speed of the interconnection network. Fetching a page from a remote machine should not take appreciably longer than paging from the local disk. Smart network hardware with features such as direct memory access (DMA) considerably eases the burden of providing efficient service. Another desirable feature for network hardware is the so-called Scatter/Gather ability, which allows outgoing network messages to be assembled on-the-fly from non-contiguous sources and incoming messages to be potentially distributed to both kernel and user space in a single operation. Use of Scatter/Gather would allow delivery of an incoming page directly to the page-frame while any accompanying information is delivered to the kernel, without requiring that the message first be received in a kernel buffer and then disassembled by the processor. Such capabilities would allow a processor to schedule a network page transfer, then do something else while the transfer takes place concurrently (as is already done for disk paging), thus increasing throughput for the paging system.
VI CASE STUDIES
Clouds
Clouds is a single level store oriented system. The computation model of Clouds consists of passive objects with threads. A thread is an active entity that provides the notion of a computation. It executes in the context of an object. During the course of execution, a thread may invoke entry points in other objects. Thus a thread is not associated with a single address space. Further, since these objects may not all be at the same node, a thread may span machine boundaries during the course of execution. The collection of objects in Clouds represents a distributed shared global virtual space. A thread traverses the address space of the objects that it invokes during its execution. It uses a lock-based protocol for coherence maintenance that unifies synchronization and transport of data. [Tam90][Moh91]
IVY
IVY investigated the feasibility of providing a virtual shared memory environment on loosely coupled multiprocessors. In IVY a process address space is divided into private and shared portions. The private portion is not addressable by other processes and the shared portion is implemented as a virtual shared memory. A virtual shared memory is a flat address space shared by all the processes running on different nodes. The notion of coherence used in IVY is multiple-reader/single-writer semantics.
[Tam90]
MEMNET
MEMNET is a shared local area token-ring network. It provides close coupling to the processors of a distributed multiprocessor system. It employs dedicated hardware to service remote memory access. The hardware address space seen by each host has two parts: private and shared. References to the shared part are passed to the associated
MEMNET device, which coordinates with other devices to resolve the references.
MEMNET exploits the features of a special-purpose token ring network to implement a write-invalidate style of cache protocol. [Tam90][Moh91]
METHER
METHER provides a set of mechanisms for sharing memory across a network on top of SunOS. METHER differs from most other distributed shared memory systems in that it does not provide strict memory coherence. A process can continue to write on a page without the changes being reflected in other copies of the page. METHER provides for data driven page-faults. The page-fault is serviced when another process actively sends out an update for the page that caused the fault. [Moh91]
MACH
Mach's shared memory semantics are geared towards managing shared memory in a tightly-coupled multiprocessor. A task in MACH is an execution environment that includes a virtual address space and an access list to system resources. There are two ways of sharing memory between tasks in MACH: copy-on-write and read-write.
MACH provides strict memory coherence semantics using a write-invalidate approach for sharing of pages across the network. [Moh91]
VII LOOKING TOWARDS THE FUTURE
Consideration of the specifications for the computing environment of the future leads us to believe that the system of the future will be an environment in which:
• VM page faults will be tremendously more expensive to process than they are today. Processor speeds will have increased 100 times while the access speeds of local disk storage will have increased only 5-7 times.
• The discrepancy between network RPC times and the latency of local disk storage will have doubled. Ignoring handling overhead, a memory to memory transfer of a typical VM page through a 10Mb/s network is already much faster than local disk paging. Widening the gap will likely bias software designs in favor of zero disk access whenever the network is a viable alternative.
• I/O in general will be much more expensive in terms of processor time, and smart device hardware will be available to provide as much concurrent processing as possible.
With these points in mind we tried to imagine whether Distributed Shared Memory is appropriate in such an environment and what such a system would be like. We believe that DSM will continue to be a valuable service to provide; however, its use will likely be restricted to small geographic areas - e.g. a campus or building complex, at most perhaps between locations inside the same city. While nothing will prevent users from trying long-distance sharing, the high latency of message traffic will discourage long-distance use. Portable computers using slow, wireless communication are not likely to participate in DSM at all.
The high expense of page faults argues in favor of a large VM page size - but large page sizes in a shared memory environment invite contention from false sharing due to co-located data with differing access patterns. Typical page sizes today range from 2KB to 8KB with the most common size being 4KB.
Deciding on a good page size for our future machine is difficult because it can be approached in a couple of ways. Keeping the same ratio of page-table size to address space that exists today (0.12%), we arrive at a VM page size of 8KB for our future machine. An alternative approach is to consider the number of page frames available in a typical workstation today: if we consider a 16MB memory and a 4KB page to be typical of workstations today, we may then conclude that a page size of 128KB provides comparable utility for a machine with a 512MB memory (4096 frames). We might extend this by noting that the processor speed increased by 2 orders of magnitude and make a similar decrease in our page size (from 128KB to 32KB) to accommodate the expected extra load.
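The frame-count argument can be made explicit (our arithmetic, following the figures above):

    \frac{16\ \text{MB}}{4\ \text{KB}} = 4096\ \text{page frames}, \qquad \frac{512\ \text{MB}}{4096\ \text{frames}} = 128\ \text{KB per page}

so keeping today's frame count on the 512MB machine yields the 128KB figure, which the load argument then scales down to 32KB.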
The large 32-128KB page size will inevitably lead to many false sharing situations. Therefore a weak or release consistency model employing lazy propagation of invalidations and updates will be required to realize increases in application performance consistent with expectations of new hardware capabilities. The access management policy should support concurrent reads and writes to DSM pages: the multiple-reader/single-writer model can be provided without special purpose hardware - supporting multiple concurrent writers will require either a fault mechanism which can provide the physical address of a write to the DSM system or a processor which can skip a faulted write cycle.
Interestingly, the projected increase in network speeds provides a way to keep the small page sizes we would like to have. At 500Mb/s, sending an 8KB message requires about 160us (microseconds). Network paging with an 8KB page size allows us to maintain (actually to improve upon) the ratio of paging time to RAM access time that exists today, at about the same cost in memory overhead for VM management (0.12%).
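As a rough check on that figure (our arithmetic, with an assumed allowance for framing and protocol overhead): the raw serialization time for an 8KB page at 500Mb/s is

    \frac{8192 \times 8\ \text{bits}}{500 \times 10^{6}\ \text{bits/s}} \approx 131\ \mu\text{s}

and budgeting roughly ten bits on the wire per byte of payload gives about 164 microseconds, in line with the 160us quoted above.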
Network contention and RPC overhead might force us to use smaller page sizes which would help alleviate the false sharing problem and allow a stronger consistency model to be used.
This could all be accomplished using a dedicated "DSM Server" that exists to provide shared memory via network paging to other machines and to provide backing storage for the shared memory, which can be handled locally in any convenient transfer size. Multiple DSM servers could be used to provide a static universe of shared system binaries and other objects (such as memory-mapped files) to which all machines would have access (with appropriate permission, of course). The effect of the DSM server concept would be to turn a collection of autonomous workstations into a large shared memory multiprocessor.
Assuming that RAM access times keep pace with processor speeds, our future computers will have main memories with 0.5-1ns access times. Paging virtual memory from 2ms local disks will be far more expensive on this future machine than it is even today. DSM servers can preserve and improve upon the ratio of paging time to RAM access time that exists today at about the same cost in VM management overhead. The
DSM server concept is quite cost efficient as well: workstations don't need local disks at all and would in fact be slowed by accessing them.
VIII CONCLUDING REMARKS
Distributed shared memory is a popular abstraction because it is an appropriate vehicle for adapting the shared memory paradigm to a distributed system.
Implementation of this abstraction requires careful evaluation of several design choices, such as interaction with virtual memory, data granularity, and the choice of coherence and synchronization mechanisms. Because each node in a distributed system is autonomous in managing its local resources, it is important for an implementation to reconcile the requirements of
DSM with the mechanisms local to each node.
In this paper we have discussed the issues important to a successful implementation of DSM, briefly surveyed current implementations, and presented our thoughts regarding future implementations. We are convinced that distributed shared memory is a useful feature for any distributed operating system to provide.
REFERENCES
[Ana92] R. Ananthanarayanan et al.: "Experiences in Integrating Distributed
Shared Memory with Virtual Memory Management," Operating System
Review, Vol.26, #3, July 1992.
[Bor91] Lothar Borrmann, Petro Istavrinos.: "Store Coherency in a Parallel
Distributed Memory Machine", Distributed Memory Computing,
Proceedings, Vol. 487, 1991, pp. 32-41.
[Fle89] Brett D. Fleisch, Gerald J. Popek.: "Mirage: A Coherent Distributed
Shared Memory Design" in Proc. 12th ACM Symposium on Operating
System Principles, Dec. 1989, pp. 211-223.
[Hag92] Erik Hagersten, Anders Landin, Seif Haridi.: "DDM - A Cache-Only
Memory Architecture," IEEE Computer, Vol.25, #9, Sept. 1992, pp. 44-54.
[Kel92] Pete Keleher, Alan L. Cox, Willy Zwaenepoel.: "Lazy Release
Consistency for Software Distributed Shared Memory", ACM SIGARCH
- Computer Architecture News, Vol.20, #2, May 1992.
[Lev92] Willem G. Levelt et al.: "A Comparison of Two Paradigms for Distributed
Shared Memory", Software - Practice and Experience, Vol.22, #11, Nov.
1992, pp. 985-1010.
[Li86] Kai Li, Paul Hudak.: "Memory Coherence in Shared Virtual Memory
Systems", Proceedings 5th ACM SIGACTSIGOPS Symposium of
Principles of Distributed Computing, Canada, Angust, 1986
[Moh91] Ajay Mohindra and Umakishore Ramachandran.: "A Survey of
Distributed Shared Memory in Loosely-Coupled Systems", Technical
Report GIT-CC-91/01, Georgia Institute of Technology, Atlanta, GA, Jan.
1991
[Mos93] David Mosberger.: "Memory Consistency Models," Operating System
Review, Vol.27, #1, Jan. 1993.
[Tam90] Ming-Chit Tam, Jonathan M. Smith, David J. Farber.: "A Taxonomy-
Based Comparison of Several Distributed Shared Memory Systems,"
Operating System Review, Vol.24, #3, Jul. 1990.
[Zho92] Songnian Zhou, Michael Stumm, Kai Li.: "Heterogeneous Distributed
Shared Memory", IEEE Transactions on Parallel and Distributed Systems,
Vol. 3, #5, Sept. 1992, pp. 540-554.

Words: 22222 - Pages: 89

Premium Essay

He Objective of the Subject Is to Make Students Conversan

...conversant with a set of management guidelines which specify the firm’s product-market position, the directions in which the firm seeks to grow and change the competitive tools it will employ, the strengths it will seek to exploit and the weaknesses it will seek to avoid. Strategy is a concept of the firm’s business which provides a unifying theme for all its activities. Course Syllabus Group I: Defining Strategic Management, Characteristics of Strategic Management Types and Hierarchy, Formulation of Strategy: Various Stages and Components of Strategic Management, Determination of various objectives like corporate, divisions and departmental objectives: Vision, Mission and Purpose, Environmental Scanning: Internal & External environment, Types of Strategies, Guidelines for crafting strategies, Tailoring strategies to fit specific Industry. Group II: Strategic Analysis and Choice: Environmental Threat and Opportunity Profile (ETOP), Organizational Capability Profile – Strategic Advantage Profile, Corporate Portfolio Analysis – SWOT Analysis, Synergy and Dysergy – GAP Analysis, Porter’s Five Forces Model of Competition, Mc Kinsey’s 7s Framework, GE 9 Cell Model, Distinctive competitiveness – Selection of matrix while considering all models discussed above, Implementation of strategy: Analysis and development of organizational policies-marketing, production, financial, personnel and management information system, Strategy implementation: Issues in implementation...

Words: 11813 - Pages: 48

Premium Essay

My Course

...conversant with a set of management guidelines which specify the firm’s product-market position, the directions in which the firm seeks to grow and change the competitive tools it will employ, the strengths it will seek to exploit and the weaknesses it will seek to avoid. Strategy is a concept of the firm’s business which provides a unifying theme for all its activities. Course Syllabus Group I: Defining Strategic Management, Characteristics of Strategic Management Types and Hierarchy, Formulation of Strategy: Various Stages and Components of Strategic Management, Determination of various objectives like corporate, divisions and departmental objectives: Vision, Mission and Purpose, Environmental Scanning: Internal & External environment, Types of Strategies, Guidelines for crafting strategies, Tailoring strategies to fit specific Industry. Group II: Strategic Analysis and Choice: Environmental Threat and Opportunity Profile (ETOP), Organizational Capability Profile – Strategic Advantage Profile, Corporate Portfolio Analysis – SWOT Analysis, Synergy and Dysergy – GAP Analysis, Porter’s Five Forces Model of Competition, Mc Kinsey’s 7s Framework, GE 9 Cell Model, Distinctive competitiveness – Selection of matrix while considering all models discussed above, Implementation of strategy: Analysis and development of organizational policies-marketing, production, financial, personnel and management information system, Strategy implementation: Issues in implementation...

Words: 11813 - Pages: 48

Free Essay

Test

...Abstract……………………………………………………………………………...3 2. Introduction………………………………………………………………………….4 3. Cognition…………………………………………………………………………....9 4. User Interaction Design……………………………………………………….....12 5. Interaction Styles………………………………………………………………….15 6. Interaction Devices…………………………………………………………….....18 7. Future of Human Computer Interaction………………………………..……….19 8. Conclusion………………………………………………………………………....19 9. Reference……………………………………………………………………….....20 -2- Human Computer Interaction Abstract Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer’s actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating Interfaces. HCI in the large is an interdisciplinary area. It is emerging as a specialty concern within several disciplines, each with different emphases: computer science (application design and engineering of human interfaces), psychology (the application of theories of cognitive processes and the empirical...

Words: 4044 - Pages: 17

Free Essay

Innovation and Erp Systems

...frontiers of human knowledge to enrich the citizen, the nation, and the world. To excel in research and innovation that discovers new knowledge and enables new technologies and systems. To develop technocrats, entrepreneurs, and business leaders of future who will strive to improve the quality of human life. To create world class computing infrastructure for the enhancement of technical knowledge in field of Computer Science and Engineering. PROGRAMME: B.E. CSE (UG PROGRAMME) PROGRAMME EDUCATIONAL OBJECTIVES: I. Graduates will work as software professional in industry of repute. II. Graduates will pursue higher studies and research in engineering and management disciplines. III. Graduates will work as entrepreneurs by establishing startups to take up projects for societal and environmental cause. PROGRAMME OUTCOMES: A. Ability to effectively apply knowledge of computing, applied sciences and mathematics to computer science & engineering problems. B. Identify, formulate, research literature, and analyze complex computer science & engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences. C. Design solutions for computer science & engineering problems and design system components or processes that meet the specified needs with appropriate consideration for the public health and safety, and the cultural, societal, and environmental considerations. D. Conduct investigations of complex problems...

Words: 23989 - Pages: 96

Free Essay

Computer

...Migration in Distributed Shared Memory Systems by Wilson Cheng-Yi Hsieh S.B., Massachusetts Institute of Technology (1988) S.M., Massachusetts Institute of Technology (1988) Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY September 1995 c Massachusetts Institute of Technology 1995. All rights reserved. Author : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : Department of Electrical Engineering and Computer Science September 5, 1995 Certified by : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : M. Frans Kaashoek Assistant Professor of Computer Science and Engineering Thesis Supervisor Certified by : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : William E. Weihl Associate Professor of Computer Science and Engineering Thesis Supervisor Accepted by : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : Frederic R. Morgenthaler Chairman, Departmental Committee on Graduate Students 1 2 Dynamic Computation Migration in Distributed Shared Memory...

Words: 40765 - Pages: 164

Premium Essay

Internal and External Environment Scan

...Internal and External Environments Nokia and Facebook Student’s Name Professor’s Name Date Internal and External Environments of Nokia and Facebook Nokia Environmental Analysis Internal Environment Nokia being the most renowned name in the world has a very big network which is distributed across the world, and has large selling when it is compare to other phone company in the world. It is of very high quality and has user-friendly features. The company has strong financial base which enables it to make innovations with a lot of ease. Nokia has a high product range which makes it very attractive many customers. Nokia’s financial health is strong, which makes it very profitable. Essentially, the price of the product is actually the main issue, as some of the Nokia’s products are not friendly to the users, which fail to sail through in the market. The service centres in some countries are quite few quite often there or no quality after sales services. Most of these product models are quite heavy to carry and not easy to handle. External Environment The digital market is developing so fast. Hence, Nokia has the opportunity to improve its sales as well as its share n the market. Due to An increase in the income level of the people, the purchasing power also increases; therefore, Nokia has to strategically go for the right customer so as to be able to achieve a big gain out of this important situation (Nokia Company, Investor relations, 2015). Also, they would have the good chance...

Words: 1252 - Pages: 6