
Challenges for Multicore Programming


In 1965, Gordon E. Moore, co-founder and chairman emeritus of Intel Corporation, observed that "the number of transistors placed inexpensively on an integrated circuit will double approximately every two years". This simple observation became Moore's Law, which has held true until recent times. Around 2004, single-CPU clock speed scaling hit what is today known as the "power wall": further increases in clock speed without unacceptable power consumption became impossible. The problem at the 90 nm silicon process was that transistor gates became too thin to prevent current from leaking into the substrate, and none of the technologies introduced after 2004 has re-enabled anything like the scaling of the 1990s. From 2007 to 2011, maximum CPU clock speed (with Turbo Mode enabled) rose from 2.93 GHz to 3.9 GHz, an increase of about 33%; from 1994 to 1998, clock speeds rose by 300%. CPU manufacturers were therefore forced to shift towards increasing the processor count per die. Most processors today have at least 4 cores, while higher-end machines commonly have 16 cores or more. This brought in the era of multicore computing. However, increasing the number of cores does not automatically mean faster processing: software must be written to exploit the additional processing power resident in the extra cores, and sequential modes of computation must be replaced by parallel processing. Yet most software in the world, outside specialized cases such as high-performance computing and Ada, has always been sequential in nature. The sudden arrival of multicores therefore demanded a new programming paradigm, and while considerable work has been done in this regard, programming multicores still proves to be a difficult problem.

In order to understand the challenges of creating software for multicore platforms, one must first understand the platform itself. Multicore machines are shared-memory systems, usually with uniform memory access time, meaning that each execution unit has access to the whole system memory. Input/output operations and disk accesses are usually serialized, making them very expensive in terms of time and synchronization. As the name implies, multicores have more than one processor, today usually 4 or more, on the same die. Therefore any software that can be expressed as tasks that run in parallel on separate cores with minimal synchronization or disk access will gain the maximum performance benefit from multicore machines. Whether a program is expressible in such a form, however, depends on its algorithm and structure, and while some algorithms are inherently suited to parallel execution, many are not. Many algorithms in the domain of image processing require little or no interaction between the tasks that process subparts of the image; such algorithms, usually referred to as embarrassingly parallel, can easily be expressed as parallel tasks. On the other hand, iterative algorithms in which each iteration depends on the output of the previous one are remarkably difficult, and sometimes impossible, to parallelize. One of the main tasks of programming for multicores is therefore expressing the program's algorithm as a set of parallel tasks. Most algorithms also need some synchronization between tasks, and may need a final reduction step to produce the final output. If multiple approaches to parallelizing an algorithm are found, Amdahl's law can be used to select the best one. A commonly used law in the world of multicore programming, Amdahl's law quantifies the speedup produced by parallelizing software: if a fraction f of a computation is enhanced by a speedup S, the overall speedup is Speedup(f, S) = 1 / ((1 - f) + f/S).
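For illustration, the formula can be evaluated directly to compare candidate parallelization strategies. The short C++ sketch below is not part of the essay's sources; the function name and the (f, S) values are illustrative assumptions.

```cpp
#include <cstdio>

// Amdahl's law: overall speedup when a fraction f of the work is
// accelerated by a factor S; the remaining (1 - f) stays serial.
double amdahl_speedup(double f, double S) {
    return 1.0 / ((1.0 - f) + f / S);
}

int main() {
    // Two hypothetical ways to parallelize the same program:
    // approach A parallelizes 60% of the work across 8 cores,
    // approach B parallelizes 90% of the work across 4 cores.
    std::printf("A: %.2fx\n", amdahl_speedup(0.60, 8.0)); // ~2.11x
    std::printf("B: %.2fx\n", amdahl_speedup(0.90, 4.0)); // ~3.08x
    return 0;
}
```

Even though approach B uses fewer cores, its larger parallel fraction wins, which is exactly the kind of comparison Amdahl's law is used for.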
While Amdahl's law can estimate the speedup gained by parallelizing software, it does not guide the programmer in creating a parallel algorithm or program for a given problem. When a parallel, multicore-enabled version of existing software is required, the programmer often has to understand, analyze and redesign the software in order to achieve maximum benefit. In the early days of multicore computing, several attempts were made to alleviate the complexity of parallelization through auto-parallelizing compilers. Compiler-based auto-parallelization is a much-studied area, yet it has still not found widespread application, largely because such compilers exploit only a small part of the parallelism inherent in program logic (often referred to as application parallelism) and consequently deliver performance far below what a skilled expert programmer could achieve. Most of today's approaches to parallel programming for multicores therefore depend on the programmer's ability to create and express an algorithm as a parallel program using the various languages and libraries available for multicore programming.
Multicore programming can be achieved using various programming models and libraries. Thread-based models use multiple threads to express parallelism. However, if the threads are explicitly created and managed by the programmer, the program's scalability on future hardware with larger numbers of cores becomes an issue; for this reason, explicit creation and management of threads is discouraged in parallel programming. Today's parallel programming models such as OpenMP, Intel's Threading Building Blocks and Cilk use what is known as user-guided parallelism. In this model the programmer is responsible for identifying the parallel tasks within the code, but does not explicitly write threads to handle them. Instead, these parallel units are indicated to the compiler through specific classes or language extensions, and the compiler and runtime, guided by any further hints supplied, manage the number of threads. This model is intended to scale well on future multicores with larger numbers of cores. Compilers today also exploit instruction-level parallelism to boost performance further.
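A minimal sketch of user-guided parallelism with OpenMP follows, assuming a compiler with OpenMP support (e.g. built with -fopenmp); the loop and variable names are illustrative, not taken from the essay. The programmer only marks the loop as parallel work; thread management is left to the compiler and runtime.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1'000'000;
    std::vector<double> a(n, 1.0), b(n, 2.0);
    double sum = 0.0;

    // The pragma marks the loop as a parallel region; the compiler and the
    // OpenMP runtime decide how many threads to create, how to split the
    // iterations, and how to combine the per-thread partial sums
    // (the reduction step mentioned above).
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i) {
        sum += a[i] * b[i];
    }

    std::printf("dot product = %f\n", sum); // expected: 2000000.000000
    return 0;
}
```

Because the thread count is never hard-coded, the same source can use 4, 16 or more cores without modification, which is the scalability argument made above.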
While application-level parallelism yields the best performance gains, it is sometimes very difficult to achieve for large legacy software systems. Substantial gains can also be obtained by using parallel libraries and data structures. For many commonly used mathematical operations, parallel versions are readily available on the market: MathWorks sells a Parallel Computing Toolbox that enables users to run MATLAB code in parallel, and the Insight Segmentation and Registration Toolkit (ITK), an image processing library, also supports multithreaded execution. By using such libraries it is possible to obtain a substantial performance gain on a multicore machine without major changes at the code or algorithm level, and enterprises often use them for quick, targeted performance enhancements to legacy code. All major languages today also provide concurrent data structures that are thread-safe and perform well under parallel access, along with language extensions that exploit parallelism; .NET, for example, contains concurrent collections, and PLINQ (Parallel LINQ) is a language extension that executes queries in parallel.
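The essay's examples are drawn from MATLAB and .NET; as a sketch of the same idea in C++ rather than those platforms, the C++17 standard library offers parallel execution policies, and switching an existing algorithm call to the parallel policy requires no algorithmic change (on GCC/libstdc++ this typically also requires linking against Intel TBB). The data and sizes below are illustrative.

```cpp
#include <algorithm>
#include <execution>
#include <random>
#include <vector>

int main() {
    // Build a large unsorted input.
    std::vector<double> data(10'000'000);
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (auto& x : data) x = dist(rng);

    // Sequential form:  std::sort(data.begin(), data.end());
    // Parallel form: the only change is the execution policy argument;
    // the library distributes the sort across the available cores.
    std::sort(std::execution::par, data.begin(), data.end());
    return 0;
}
```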

Multicores are here to stay, and industry trends point towards largely parallel hardware architectures in the future, as products such as Intel's Knights Corner and AMD's Fusion demonstrate. The future may even move towards hybrid computing on heterogeneous platforms combining the cloud, NVIDIA GPUs and other many-core devices. Creating the parallel computing technologies for tomorrow's performance needs is an urgent challenge today. Informatics plays a key role in this evolution, and must rise to the challenge of making multicore computing simple, moving it out of the domain of specialists and into mainstream software development.

Sources:
1. The death of CPU scaling: From one core to many — and why we're still stuck: http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck
2. Parallel Programming with Microsoft .NET: http://msdn.microsoft.com/en-us/library/ff963542.aspx
3. Introduction to Parallel Computing: https://computing.llnl.gov/tutorials/parallel_comp/#WhyUse
4. Automatic parallelization: http://en.wikipedia.org/wiki/Automatic_parallelization
5. Parallel computing: http://en.wikipedia.org/wiki/Parallel_computing
6. Rules for Parallel Programming for Multicore: http://www.drdobbs.com/parallel/rules-for-parallel-programming-for-multi/201804248
7. Parallel Computing Toolbox: http://www.mathworks.in/products/parallel-computing/
8. OpenMP: http://www.openmp.org/
9. Insight Segmentation and Registration Toolkit (ITK): http://www.itk.org/
