...UNIX Protection Scheme Cedric Lee POS/355 Scott Stewart March 25, 2013 UNIX Protection Scheme There is an operating system that supports 5,000 users, and the company wants to allow only 4,990 of those users permission to access one file. In order to implement such a protection scheme in UNIX, a number of operations need to be performed first. The UNIX file-management hierarchy is essential to know in order to understand and devise a plan that will allow this protection scheme to protect the files. Without knowledge of the hierarchy of the file-management system within the UNIX operating system, there is no way to ensure that the 4,990 users have access to only the one file. A file access control scheme will be the basis of the design. Therefore, user IDs and passwords are needed in order to gain access to the system. All users of the UNIX operating system will each be given a user ID and a password, and these will be kept by the assigned users only. The protection of these IDs and passwords depends on how well each user protects them. Encryption and decryption can also be used when users attempt to log in. The administrator can put each user into different groups that allow or deny access to certain files within the operating system. By doing this, there is control over who can access which file, according to the permissions given by the administrator. The administrator is referred to as the superuser because...
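A minimal sketch of the group-permission idea described above, using only Python's standard library on a POSIX system. The file here is a throwaway temporary file created for illustration; actual group membership for the 4,990 allowed users would be managed by the administrator with tools such as `groupadd` and `usermod`, which this sketch does not attempt.

```python
import os
import stat
import tempfile

# Create a stand-in for the protected file.
fd, path = tempfile.mkstemp()
os.close(fd)

# rw for the owner, r for the allowed group, nothing for others (mode 640).
# Users outside the group are denied by the "other" bits being clear.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o640
os.remove(path)
```

With the 4,990 permitted users placed in the file's group, the group-read bit grants them access while the cleared "other" bits exclude the remaining ten.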
Words: 385 - Pages: 2
...POS/355 Professor Sumayao June 9, 2014 [Week 4 Individual Assignment-Failures] Types of Failure in Distributed System December 5, 2012 To design a reliable distributed system that can run on unreliable communication networks, it is of utmost importance to recognize the various types of failures that a system has to deal with during a failure state. Broadly speaking, failures of a distributed system fall into two obvious categories: hardware and software failures. A distributed system may suffer any of these types of failures, yet each failure has its own particular nature, reasons, and corresponding remedial actions to restore smooth operation (Ray, 2009). Following are a few types of failure that may occur in a distributed system. Transaction failure: Transaction failure is a centralized...
Words: 731 - Pages: 3
...Linux and Windows are both operating systems for the common home PC. Each of them offers positive benefits and negative detriments. Some people try to claim that one is better than the other, but as an active user of Linux for over half a decade, I can honestly say that both are superior in their respective strengths. I would never attempt any type of visual work, like video editing or photo editing, that requires anything in-depth on a Linux machine; the native programs are just not as good as anything that Adobe offers in the Windows world. However, if I am surfing the net checking out random sites that might be questionable in nature, I would not dare do so unless I was on my Linux partition. In reference to memory-management differences between Windows and Linux, we have to first start with the base. Memory can be viewed both as RAM and as storage that serves as a base for the operating system. Windows has been locked into a dated filesystem as the basis of its operating system, called NTFS. NTFS's positive is that it is old and stable: its strengths are known, and its negatives are so well known that they come as no surprise to end users. Sadly, this outdated filesystem technology requires the end user to periodically defragment the drive to combat NTFS's gross lack of ability to organize files. Over time NTFS moves files into so many random places on the hard drive that it starts to slow down due to...
Words: 270 - Pages: 2
...Network The way to begin developing the system architecture is to decide how the new system will spread information across different locations. Riordan Manufacturing's operational network allows each of their locations to transmit information for communication. Their corporate headquarters' new human resource information system (HRIS) will be installed on Iomega NAS network storage. Riordan's other locations will be connected to San Jose by a wide area network (WAN) connection, which communicates at T1 speeds. The network server relays its information to the client computers each location uses. Each location's human resources department can then access important information via their local client computers. Process All of the files that the human resources department needs for their procedures will be saved to the HR system on the main server at corporate headquarters. The HRIS will store this information on its employees via Riordan's intranet: • Employee files – includes resumes, performance reviews, and other important information. • Job descriptions – explains a job's functions and the education it requires. • Electronic job postings – lists job openings in multiple locations. • Employee handbook – employees can access the handbook electronically. • Policies and procedures – a place to find the company's policies and operating standards. • Employee file updates – lets...
Words: 435 - Pages: 2
...Team D Project Plan
I. Team member responsibilities
a. Memory management (Linux, Mac, Windows) – Ben McCormick
b. Process management (Linux, Mac, Windows) – Andy Richards
c. File management (Linux, Mac, Windows) – Richard Smith
d. Security management (Linux, Mac, Windows) – Mark Heselden
e. PowerPoint presentation (Linux, Mac, Windows) – Andy Richards & Joseph Jundt
II. Team approach
f. As illustrated above, the parts will be divided among the members of the team.
g. Each week the members' work will be submitted to the team forum by Friday of the week in which it is due. This will allow the other team members time to look over the submissions and add any suggestions or changes they may have.
h. Any suggestions and changes will be submitted by 6 PM on the Sunday of the week it is due, giving the person who submitted the rough draft time to assimilate the changes into their final draft, due by 9 PM on Monday.
i. Once the final draft of the week has been submitted to the team forum, the person responsible for submitting the collective draft will combine the drafts into one paper and turn it in for the team.
III. Key headings
j. Memory management
k. Process management
l. File management
m. Security management
IV. Schedule and milestones (all times are in EST)
n. Week 2
i. 5/2/2013 – project plan approval
ii. 5/6/2013...
Words: 398 - Pages: 2
...Failures POS/355 August 26, 2013 UOPX Failures Distributed systems emerged recently in the world of computers. A distributed system is a collection of independent computers that appears to its users to work as a coherent system. The advantages of distributed systems include the ability to continually open interactions with other components and to accommodate a growing number of computers and users; a stand-alone system is thus not as powerful as a distributed system that has the combined capabilities of its components. This type of system does have its complications, and it is difficult to maintain the continual, complex interactions between running components. Problems do arise because distributed systems are not without their failures. Four types of failures will be characterized, and the solutions to two of them will address how to fix such problems. Before constructing a reliable distributed system, one must consider fault tolerance, availability, reliability, scalability, performance, and security. Fault tolerance means that the system continues to operate in the event of internal or external system failure, to prevent data loss or other issues. Availability is needed to restore operations and resume procedures when components have failed. For the system to run over a long period without any errors is known as reliability. To remain scalable means to operate correctly on a large scale. Performance and security remain needed...
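One common way to realize the fault tolerance described above is to retry an operation that fails transiently, a bounded number of times, before reporting failure. This sketch is my own illustration (the retry count and delay are arbitrary assumptions, not values from the paper):

```python
import time

def call_with_retries(op, attempts=3, delay=0.01):
    """Retry a flaky operation a bounded number of times before giving up."""
    last_exc = None
    for _ in range(attempts):
        try:
            return op()
        except OSError as exc:   # treat OSError as a transient fault
            last_exc = exc
            time.sleep(delay)
    raise last_exc               # all attempts exhausted

# A stand-in component that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # ok
```

The caller never sees the two transient faults; the system continues to operate, which is the essence of fault tolerance for this class of failure.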
Words: 953 - Pages: 4
...File Management POS/355 05/13/2013 John Buono File Management The file manager's function is to regulate all of the files on a system that are stored on the storage mediums. There are several tasks that the file manager must perform in order to manage these files. The file manager must be able to identify the unique naming conventions of the files in order to complete its tasks. It must also be able to determine the location of each file, the sectors that make up the file on the storage medium, and the order of those sectors. It is important that the file manager work with the device manager and use effective algorithms for reading and writing files. The file manager also grants or denies access to files by users or programs, and it cooperates with the process manager to allocate or de-allocate files to the processor. The last task is that the file manager provides easy commands that assist users and/or programs in file handling (Gallert, 2000). Unix/Linux File Management UNIX/Linux uses inodes to refer to files or segments of files on the system, and uses pointers to indicate where the files are on the storage media. There are some slight differences between each version of Unix/Linux, but we will not go into those differences in this paper and will only cover the basics of file management. No matter what version of Unix/Linux is being used, the file structure and permissions do seem to be...
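The inode concept mentioned above can be observed directly from Python on any UNIX/Linux system: directory entries are just names pointing at inodes, so a hard link creates a second name for the same inode. This is a small self-contained demonstration using a throwaway temporary file:

```python
import os
import tempfile

# Create a file and a hard link to it.
fd, path = tempfile.mkstemp()
os.close(fd)
link = path + "_link"
os.link(path, link)           # second directory entry, same inode

a, b = os.stat(path), os.stat(link)
print(a.st_ino == b.st_ino)   # True: both names refer to one inode
print(a.st_nlink)             # 2: the inode now has two links
os.remove(link)
os.remove(path)
```

The same `st_ino` field is what the file manager uses internally to locate a file's metadata, regardless of how many names point at it.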
Words: 2096 - Pages: 9
...POS/355 March 11, 2013 Bhupinder Singh Failures Paper Distributed systems are unique in that they execute application protocols to coordinate multiple processes on a network; each process has its own local memory, and the processes communicate with each other using a message-passing mechanism. They also have their own users, who can use the systems for personal purposes. What are shared across a distributed system are the data, the processors, and the memory that can achieve those tasks when processing information. The distributed system has features to help in solving problems and issues with software and programs, but making a distributed system useful is not very easy; its capabilities come from its combined components, making it more than just stand-alone systems, which are sometimes not as reliable. Because of the complexities of interactions between running components, a distributed system must have special characteristics, like fault tolerance: the system can recover from component failures without performing incorrect actions. Recoverability is where failed components can restart and then rejoin the system after the cause of the failure has been repaired. A failure on a distributed system can result in anything from easily repairable errors to a catastrophic meltdown. Fault tolerance deals with making the system function in the presence of faults. Faults can occur in any one of the components. In this paper we will look at the different...
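The message-passing mechanism described above can be modeled on a single machine with a thread-safe queue: the two sides share no state and coordinate only through messages on the channel. This is a simplified, hedged illustration (real distributed processes would use sockets or an RPC library, not an in-process queue):

```python
import queue
import threading

channel = queue.Queue()

def worker():
    # Block until a request message arrives, then send back a reply.
    msg = channel.get()
    channel.put(("reply", msg[1] * 2))

t = threading.Thread(target=worker)
t.start()
channel.put(("request", 21))   # send a message; no shared variables
t.join()                       # worker has consumed the request and replied
reply = channel.get()
print(reply)                   # ('reply', 42)
```

Note that neither side reads the other's memory; all coordination happens through the messages themselves, which is what makes the pattern tolerant of the two sides living on different machines.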
Words: 811 - Pages: 4
...Failures Adam Cain POS/355 2/6/2014 Randy Shirley Failure is not an option! This is what I was told growing up and while I served in the Marine Corps, but as I found out in this assignment, failure is an option. This holds true when talking about a distributed system, which is a computer network like a wide area network (WAN) or a local area network (LAN). A distributed system is defined as a software system in which components located on networked computers communicate and coordinate their actions by passing messages (Coulouris, Dollimore, Kindberg, & Blair, 2012). This allows computers, and even devices like smartphones and tablets, to share resources like printers, hard drives, and even internet access. A centralized system, by contrast, is a computer that stands by itself, one that is not connected to a network. Think of a centralized computer as one of the spy computers in movies, like Mission Impossible. These systems can and will fail, and while they share some failures, a distributed system has more components that could fail, leading to more problems. There are many things that could fail on a distributed system; this paper will cover four of them, starting with hardware failure. Video cards, network interface cards, hard disk drives, solid-state drives, memory, and power supply units (PSUs) are all pieces of hardware that are in most of the computers sold today, and they can all die at a moment's notice. Some of these items, if they failed, would not...
Words: 1129 - Pages: 5
...Memory Management Paper POS/355 February 16, 2013 Bhupinder Singh Memory Management Paper Memory management is a key function of the operating system. Without proper memory management, the operating system can run slowly, and the number of tasks the system is able to do at the same time is limited. Memory management schemes divide into two kinds: uniprogramming and multiprogramming. In a uniprogramming system, things are processed one at a time; some users only do things one at a time, which is more typical of personal computers. In a multiprogramming system, several programs can run at the same time. The operating system has the capability of causing an interruption after a specified time interval, so multiprogramming is a rudimentary form of parallel processing in which the operating system allots each program a given length of time. Memory management, then, is the act of handling the computer operating system's memory space. In order for the operating system to run efficiently, the memory-management component has to share and store memory properly; it is a critical component of an efficient operating system. There are requirements for memory management: mechanisms and policies are in place that are required for the use of the operating system. These requirements are protection, sharing, relocation, physical organization, and logical organization. Protection is one; it must be provided by...
Words: 511 - Pages: 3
...File Management POS/355 February 25, 2013 Bhupinder Singh File Management Imagine a system that supports 5,000 users, and only allows 4,000 of those users to access one file. This can be accomplished in many different ways. One option is for the 4,000 users to be placed in a specific group, with group access then set on the file. The second option would be the way to go, and that is to have an access control list made up of the names of the 4,000 permitted users. This paper will look into a protection scheme that will be used in an efficient way to provide that protection to the system. There are techniques to protect the system's directories, files, and folders. Most IT departments will set appropriate permissions on the files, set up certain tools to check account security, and make sure that every account and user sets up passwords when in the systems. Security properties can be the source of protecting this. This paper will talk about the security descriptor referred to as the access control list (ACL). There are two different types of ACLs: the access ACL, which applies to directories and files, and the default ACL, which can only be associated with a directory. For example, when a file in the directory does not have its own access ACL, it uses the default rule of the directory. So with setting up those 5,000 users, the system can access the list and find if those users are allowed to have...
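The access-control-list idea above reduces to a lookup: the ACL maps a file to the set of users allowed to open it, and anyone not on the list is denied. This is a hedged, in-memory sketch (the user names and file path are made up for illustration; a real system would consult the filesystem's ACL entries):

```python
# Hypothetical ACL: one protected file, 4,000 permitted users out of 5,000.
allowed = {f"user{i}" for i in range(4000)}
acl = {"/data/payroll.txt": allowed}

def can_access(user, path):
    """A user may open a file only if the file's ACL lists that user."""
    return user in acl.get(path, set())

print(can_access("user7", "/data/payroll.txt"))     # True
print(can_access("user4500", "/data/payroll.txt"))  # False
```

Using a set makes each check constant-time, which matters when the list holds thousands of names.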
Words: 584 - Pages: 3
...Mac OS Vulnerabilities: Buffer Overflow Michael Andrews POS/355 Introduction to Operational Systems May 1, 2014 Professor Christopher Warner Mac OS Vulnerabilities: Buffer Overflow The risk of a buffer overflow to an operating system is high, especially for operating systems written in the C language. Based on the UNIX architecture, Mac OS uses C, Objective-C, and C++ code as its foundation, primarily for speed and efficiency. However, this leaves it vulnerable to attacks. Despite its reputation for security, this fundamental security flaw leaves Mac OS open to external attacks, which can result in theft of user information and corruption of internal systems. However, measures exist to prevent buffer-overflow attacks, both through code and through systems built into the Mac OS architecture. A buffer overflow occurs when more data is put into a buffer than it can hold: not enough room is allocated in the buffer, and vital program information is overwritten by the new data. Attackers are able to exploit this by taking advantage of a program that is waiting on a user's input. In order to do this, an attacker must know the weaknesses of a program and understand how information will be stored in memory in order to alter the program's execution and gain access to a user's system. Malware can also be specifically written in order to compromise the integrity of the system. Buffer overflows are the most common way for an attacker to gain...
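The overwrite described above can be modeled abstractly. Python itself bounds-checks, so a real overflow requires C; this sketch instead simulates a flat memory region where a fixed-size buffer sits directly next to a "return address" slot, and an unchecked copy (like C's `strcpy`) clobbers the neighboring slot:

```python
# Simulated memory layout: an 8-byte buffer followed by a return-address slot.
BUF_SIZE = 8
memory = [0] * BUF_SIZE + ["RET"]

def unsafe_copy(data):
    """Copy input into the buffer with no bounds check, like C's strcpy."""
    for i, byte in enumerate(data):
        memory[i] = byte

unsafe_copy(list(range(9)))   # 9 bytes written into an 8-byte buffer
print(memory[BUF_SIZE])       # 8 -- the return slot was clobbered
```

A bounds-checked copy would stop at `BUF_SIZE` and leave the return slot intact, which is exactly what safe string functions and compiler stack protections enforce in real code.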
Words: 957 - Pages: 4
...File Management Comparison for Operating Systems All computers and their operating systems use a means of digitally storing data within a file onto an allotted section of some type of storage media. The allotted section of storage can theoretically be read from and written to as required. The data in the file is stored as bytes of binary code, and can be identified as belonging to a particular file by the file's start, or "address". Though the storage section is linear in nature, it can be visualized as a cross grid of cells, with each cell containing one byte of data. The combined cells of data populate the allotted section of storage within a file. Beyond this point, operating systems diverge from this commonality, particularly in the way they manage files through their respective "file management" programs. The following is a cursory look at three such operating systems' file-management schemes. Mac OS file system Mac OS uses what is called the Hierarchical File System Plus (HFS+). It comes from the original version of the Hierarchical File System (HFS), which in turn comes from the Macintosh File System (MFS) used with older Mac systems. The HFS concept begins with a sole directory on a storage medium (in this case a hard drive or hard disk). From this directory, sub-directories are created, and so on, down to the user and user-access files. This is the most simplistic of file-management concepts, in theory. Mac OS is also proprietary. Linux...
Words: 829 - Pages: 4
...Memory Management Requirements The requirements that memory management is intended to satisfy are relocation, protection, sharing, logical organization, and physical organization. Main memory is a vital component of a computer system, as both the operating system and user applications have to be loaded into main memory before they can be executed. I will describe each requirement in a bit more detail. The first requirement in memory management is relocation. Relocation is essentially moving a process to a different area of memory. It is often impossible for a programmer to know in advance which other programs will be resident in main memory at the time their program executes; therefore, in an effort to maximize processor utilization, we like to be able to swap active processes in and out of main memory. The next requirement in memory management is protection. The purpose of protection is to protect each process against unwanted interference by other processes, whether unintentional or deliberate. Since the location of a program in main memory is unpredictable, it is impossible to check absolute addresses at compile time to assure protection. Furthermore, most programming languages allow the dynamic calculation of addresses at run time. Therefore, all memory references generated by a process must be checked at run time to ensure that they refer only to the memory space allocated to that process. Any protection mechanism must have the flexibility...
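One classic mechanism that provides both the relocation and the run-time protection checks described above is a base/limit register pair: every logical address is checked against the limit and then offset by the base, so a process can be moved in memory by changing one register. This sketch is my own illustration of the idea, not taken from the paper:

```python
class MMU:
    """Toy base/limit address translation with a protection check."""

    def __init__(self, base, limit):
        self.base = base      # where the process starts in physical memory
        self.limit = limit    # size of the process's address space

    def translate(self, logical_addr):
        # Protection: every reference is checked at run time.
        if not 0 <= logical_addr < self.limit:
            raise MemoryError("protection fault: address out of bounds")
        # Relocation: logical addresses are offset by the base register.
        return self.base + logical_addr

mmu = MMU(base=0x4000, limit=0x1000)
print(hex(mmu.translate(0x10)))   # 0x4010
mmu.base = 0x8000                 # process relocated; its code is unchanged
print(hex(mmu.translate(0x10)))   # 0x8010
```

The process itself only ever sees logical addresses, which is why the operating system can swap it in at a different physical location without recompiling or rewriting it.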
Words: 566 - Pages: 3
...Router/Switch Operating System Cisco IOS, or Internetwork Operating System, is an operating system for Cisco Systems' routers and network switches. Cisco Systems is a multinational corporation based in San Jose, California that designs, manufactures, and sells networking equipment. The company was founded in 1984 by two people working on the computer support staff at Stanford University: Leonard Bosack, who was in charge of the computer science department's computers, and his then-girlfriend Sandy Lerner, who was in charge of the graduate school of business's computers. They named the company after San Francisco, which is why in the company's early years they insisted that the first "c" in cisco not be capitalized. Cisco IOS is the operating system used for their products, and I will go over the history and technical specifications of this operating system. Cisco IOS was first based on Stanford University's multiple-protocol router software, which was written by William Yeager, a Stanford research engineer, while at Stanford Medical School. Cisco IOS is a package of routing, switching, internetworking, and telecommunications functions integrated into a multitasking operating system. Cisco IOS is versioned using three numbers and a few letters in the general form a.b(c.d)e, with a being the major version number, b the minor version number, c the release number, and d the interim build number, omitted from general...
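The a.b(c.d)e version form described above is regular enough to parse mechanically. The regex below is my own illustration of that stated format, not Cisco's parser; it treats the interim build number and the trailing train letters as optional, which matches common strings like "12.4(13)T":

```python
import re

# a.b(c.d)e  ->  major.minor(release.interim)train
VERSION_RE = re.compile(r"^(\d+)\.(\d+)\((\d+)(?:\.(\d+))?\)([A-Za-z0-9]*)$")

def parse_ios_version(s):
    """Split an IOS version string into its numbered components."""
    m = VERSION_RE.match(s)
    if not m:
        raise ValueError(f"not an IOS version string: {s!r}")
    major, minor, release, interim, train = m.groups()
    return int(major), int(minor), int(release), interim, train

print(parse_ios_version("12.4(13)T"))  # (12, 4, 13, None, 'T')
```

Splitting the string this way lets the numeric fields be compared as integers, so "12.4(13)" correctly sorts after "12.4(9)" even though "13" < "9" as text.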
Words: 1061 - Pages: 5