University of Sunderland

School of Computing and Technology

File Management System in Linux CUI Interface

A Project Dissertation submitted in partial fulfillment of the
Regulations governing the award of the degree of BA in
Computer Studies, University of Sunderland 2006

I. Abstract

This dissertation details a project to design and produce a prototype file-manipulation application for the Linux character (command-line) environment. The application offers a friendly, menu-driven interface for the tasks that non-programmers often find cumbersome when working at a Unix/Linux prompt, tasks whose mishandling results in serious mistakes and much loss of productive time. The Linux File Management System is intended as a basic tool for every user at a Unix/Linux terminal; one advantage is that the support team is no longer burdened with solving employees' simple file-based queries. The areas of designing GUI interfaces in Linux and of Windows versus Linux security were researched, and a prototype has been designed, developed and tested. An evaluation of the overall success of the project has been conducted and recommendations for future work are also given.


II. Table of Contents

1) Introduction
   1.1 Overview
   1.2 Sponsor Background
   1.3 Research Topics
2) Research Topic 1 – Security benefits of Linux OS compared to Windows OS
   2.1 Introduction
   2.2 Myths
   2.3 Windows vs Linux Design
   2.4 Windows Design
   2.5 Linux Design
   2.6 Realistic Security and Severity Metrics
   2.7 Overall Security Risk

3) Research Topic 2 – Designing GUI interfaces on Linux
   3.0 Introduction
   3.1 A brief look at several popular toolkits available for Linux
   3.2 GTK, the Gimp Toolkit
   3.3 QT from Troll Tech
   3.4 wxWindows
   3.5 GraphApp, Platform-Independent GUI Programming in C
   3.6 Motif, the Standard
   3.7 Good Interface Designing Tips and Techniques
   3.8 UI Design Principles
   3.9 Concluding Remarks

4) Project Management Approach and Planning
   4.1 Project Management Approach
   4.2 Terms of Reference and Project Plan
5) Analysis
   5.1 Current System
   5.2 Requirements of Proposed System
   5.3 Constraints and Limitations
   5.4 Software Tools
   5.5 Interactivity
6) Design
   6.1 Project Documentation
   6.2 Design Concept
7) Development
8) Testing
   8.1 Unit Testing
   8.3 Analysis of Test Results
9) Evaluation and Conclusions
   9.1 System Evaluation
   9.2 Process Evaluation
   9.3 Evaluation of Research
   9.4 Conclusions
10) Recommendations for Future Work
11) References
   11.1 References for Research Topic 1
   11.2 References for Research Topic 2

1.0 Introduction

1.1 Overview

This project was undertaken in partial fulfillment of the requirements for the degree of BA in Computer Studies, and aimed to produce a Unix/Linux file management system for the project sponsor, Elmsworth Projects.
The purpose of the application was to reduce to a minimum the employees' dependence on the IT support team for trivial operations, by enabling them to execute the commands they have trouble with in a convenient yet effective manner.
The project was chosen because it afforded the author the opportunity to acquire a better understanding of the Linux operating system.

The report is structured as follows: chapter two examines Windows versus Linux security and usability; chapter three examines the development of a graphical user interface in Linux. The project management approach and planning are then discussed in chapter four, followed by the analysis phase in chapter five; the design phase is covered in chapter six and the development phase in chapter seven. The various forms of testing conducted are detailed in chapter eight, and an evaluation of the project's success follows in chapter nine. Chapter ten suggests recommendations for possible future work, the references used for the two research topics are listed in chapter eleven, and the appendices conclude the report.

1.2 Sponsor Background

Organisation Of Elmsworth Projects Pty Ltd

Elmsworth Projects is a contracting organisation located in Francistown, in the northern part of Botswana. It offers services ranging from civil and building construction to quantity surveying and construction plant hire. Its main clients are private developers, with a few government projects usually undertaken as a subcontractor to larger contractors. It undertakes projects all over Botswana.

Elmsworth Projects is a small organisation with an annual income equivalent to ZAR 1,500,000.00. Its goal is to work with private developers such as banks, estate agents and individuals in facilitating the efficient erection and maintenance of building and civil structures. Like any business, Elmsworth Projects has a structure with functions and processes for realising its goals.

The establishment of Elmsworth Projects consists of directors, a secretary, a project engineer, a quantity surveyor, a plant manager, a foreman, a site clerk and operatives (see the organisation structure). Most of the administrative duties are carried out at the head office and the remaining functions are performed on site.

1.3 Research Topics

Linux/Unix versus Microsoft Windows security was chosen as the first research topic and designing GUI interfaces as the second. Organisations have increasingly come to realise the need for an operating system that addresses their data security concerns, both internal and external; the Linux/Unix and Windows operating systems are discussed in this light. The second topic explores the design and development potential of GUI interfaces, especially in Linux.

2.0 Research Topic 1 – Security benefits of Linux OS compared to Windows OS

2.1 Introduction

If you ask people in a prominent organisation nowadays which platform they use for most of their work, or better yet, which platform the employees prefer, it is no wonder that most will say Windows. The exceptions are perhaps a few technical staff who understand that the polish of Windows is not to be mistaken for assured security, and that, likewise, the comparative rigidity and user-unfriendliness of Linux distributions does not mean that more attention has been given to security than to aesthetics.

Microsoft's Windows operating system runs on 90% of the world's personal computers, yet Microsoft still gets a bad rap for security, while many believe that Linux is relatively secure. Is this a fair assessment? Not really: after collecting a year's worth of vulnerability data, the much-cited Forrester analysis shows that both Windows and four key Linux distributions can be deployed securely. Key metrics include responsiveness to vulnerabilities, severity of vulnerabilities, and thoroughness in fixing flaws.
Yet the world's most popular web server today is the open source Apache web server. Considering that its source code is open, one might expect it to be the least popular, and its standing in comparison to Windows IIS is something to think about. Why is Apache beating Microsoft's Internet Information Services, which is closed and, by public perception, more secure?
Open source in this instance proves its worth precisely because its source is exposed, not only to would-be code breakers but to experienced and well-versed ones who keep trying to prove theories about Linux security wrong. The battle to have the best code is thus continually reinforced by hard-core professionals, resulting in the consistent evolution of Linux distributions, so many of them that customers still do not understand the need for such a variety. Just recently there was an article in the papers in which Ubuntu Linux boldly challenged the security claims of Microsoft Windows.

2.2 Myths

2.2.1 Open Source is inherently dangerous

The impressive uptime record for Apache also casts doubt on another popular myth: That open source code (where the blueprints for the applications are made public) is more dangerous than proprietary source code (where the blueprints are secret) because hackers can use the source code to find and exploit flaws.
“Real vulnerability to attack begins with disclosure: virtually every complex piece of code probably has some vulnerability in it, but users are unlikely to see attacks against their platforms until someone uncovers and discloses a vulnerability in a public forum like the BugTraq security mailing list” (Laura Koetzle, 2004).

The evidence begs to differ. The number of effective Windows-specific viruses, Trojans, spyware, worms and malicious programs is enormous, and the number of machines repeatedly infected by some combination of the above is so large it is difficult to quantify in realistic terms. Malicious software is so rampant that the “average time it takes for an unpatched Windows XP to be compromised after connecting it directly to the Internet is 16 minutes -- less time than it takes to download and install the patches that would help protect that PC” (Gregg Keizer, TechWeb).
As another example, the Apache web server is open source. Microsoft IIS is proprietary. In this case, the evidence refutes both the "most popular" myth and the "open source danger" myth. The Apache web server is by far the most popular web server. If these two myths were both true, one would expect Apache and the operating systems on which it runs to suffer far more intrusions and problems than Microsoft Windows and IIS. Yet precisely the opposite is true. Apache has a near monopoly on the best uptime statistics. Neither Microsoft Windows nor Microsoft IIS appear anywhere in the top 50 servers with the best uptime. Obviously, the fact that malicious hackers have access to the source code for Apache does not give them an advantage for creating more successful attacks against Apache than IIS.

2.2.2 Conclusions Based on Single Metrics

Four of the best metrics for quantifying platform security are:

• “All days of risk” quantifies the platform’s actual vulnerability to attack. It measures the number of days between a security vulnerability’s first public disclosure and the platform maintainer’s first fix for the problem. Forrester calculated “all days of risk” values for the platforms maintained by Microsoft and by the Linux distributors Debian, MandrakeSoft, Red Hat, and SUSE Linux.

• “Distribution days of risk” compares the Linux distributors’ responsiveness. In the Linux world, distributors bundle together code from many sources, meaning there may be a lag between a patch being issued for a specific component and that patch being included in a new distribution. “Distribution days of risk” quantifies the elapsed time between the first fix for the security hole by the maintainer of the flawed component and the first fix for the flawed component issued by the platform maintainer. Forrester calculated separate “distribution days of risk” values for Debian, MandrakeSoft, Red Hat, and SUSE, and used the “all days of risk” value for Microsoft.

• “Flaws fixed” measures the platform maintainers’ thoroughness. It calculates the percentage of applicable public security issues that the platform maintainer addressed, and was calculated for all five platform maintainers.

• Percentage of high-severity vulnerabilities. To measure relative severity, Forrester used the criteria applied by the US government’s National Institute of Standards and Technology (NIST) ICAT project. ICAT defines a vulnerability as high severity if an exploit: 1) allows a remote attacker to violate the security of a system (i.e., gain an account); 2) allows a local attacker to gain complete control of a system; or 3) the Computer Emergency Response Team Coordination Center (CERT/CC) issues an advisory. The percentage of total applicable vulnerabilities that ICAT classified as high severity was calculated for all five platform maintainers: Debian, MandrakeSoft, Microsoft, Red Hat, and SUSE.
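To make these definitions concrete, the short Python sketch below shows how the first three metrics could be computed from a list of vulnerability records. The dates, field names and records are invented purely for illustration; they are not Forrester's data.

    from datetime import date

    # Hypothetical vulnerability records for one platform (illustrative only).
    # Each record holds: the public disclosure date, the date the flawed
    # component's maintainer first fixed it, and the date the platform
    # maintainer (distributor) shipped a fix (None if not yet fixed).
    vulns = [
        {"disclosed": date(2003, 6, 1),  "component_fix": date(2003, 6, 10), "platform_fix": date(2003, 6, 25)},
        {"disclosed": date(2003, 7, 3),  "component_fix": date(2003, 7, 5),  "platform_fix": date(2003, 7, 30)},
        {"disclosed": date(2003, 8, 20), "component_fix": date(2003, 8, 22), "platform_fix": None},
    ]

    fixed = [v for v in vulns if v["platform_fix"] is not None]

    # "All days of risk": first public disclosure -> platform maintainer's fix.
    all_days_of_risk = sum((v["platform_fix"] - v["disclosed"]).days for v in fixed)

    # "Distribution days of risk": component maintainer's fix -> distributor's fix.
    distribution_days_of_risk = sum((v["platform_fix"] - v["component_fix"]).days for v in fixed)

    # "Flaws fixed": share of applicable public issues the maintainer addressed.
    flaws_fixed_pct = 100.0 * len(fixed) / len(vulns)

    print(all_days_of_risk, distribution_days_of_risk, round(flaws_fixed_pct, 1))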

One popular claim is that "there are more security alerts for Linux than for Windows, and therefore Linux is less secure than Windows". Another is that "the average time that elapses between discovery of a flaw and when a patch for that flaw is released is greater for Linux than it is for Windows, and therefore Linux is less secure than Windows."
"Software firms rush to issue patches for the vulnerable component, but neither commercial independent software vendors (ISVs) nor open source component maintainers have the resources to address all security vulnerabilities instantly. They struggle to verify and prioritize all the flaws that surface and to build, test, and release stable fixes as quickly as they can" (Laura Koetzle, 2004).
The latter claim is the most mysterious of all. It is hard to see how anyone could conclude that Microsoft's average response time between discovery of a flaw and release of a fix is superior to that of any competing operating system, let alone to Linux. Microsoft took seven months to fix one of its most serious security vulnerabilities (Microsoft Security Bulletin MS04-007, the ASN.1 vulnerability; eEye Digital Security documents the delay in advisory AD20040210), and there are flaws Microsoft has openly stated it will never repair. Microsoft Security Bulletin MS03-010, concerning a Denial of Service vulnerability in Windows NT, states that this will never be repaired. More recently, Microsoft stated that it would not repair Internet Explorer vulnerabilities for any operating system older than Windows XP.

2.2.3 There's Safety In Small Numbers

Perhaps the most oft-repeated myth regarding Windows vs. Linux security is the claim that Windows suffers more incidents of viruses, worms, Trojans and other problems simply because malicious hackers tend to confine their activities to breaking into the software with the largest installed base. This reasoning is used to defend Windows and Windows applications: Windows dominates the desktop, therefore Windows and Windows applications are the focus of the most attacks, which is why you don't see viruses, worms and Trojans for Linux. While this may be true, at least in part, the intended implication does not necessarily follow: namely, that Linux and Linux applications are no more secure than Windows and Windows applications, and that Linux is simply too trifling a target to bother attacking.
This reasoning backfires when one considers that Apache is by far the most popular web server software on the Internet. According to the September 2004 Netcraft web site survey, 68% of web sites run the Apache web server and only 21% run Microsoft IIS. If security problems boil down to the simple fact that malicious hackers target the largest installed base, it follows that we should see more worms, viruses, and other malware targeting Apache and the underlying operating systems for Apache than for Windows and IIS. Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities.
Yet this is precisely the opposite of what we find, historically. IIS has long been the primary target for worms and other attacks, and these attacks have been largely successful. The Code Red worm that exploited a buffer overrun in an IIS service to gain control of the web servers infected some 300,000 servers, and the number of infections only stopped because the worm was deliberately written to stop spreading. Code Red.A had an even faster rate of infection, although it too self-terminated after three weeks. Another worm, IISWorm, had a limited impact only because the worm was badly written, not because IIS successfully protected itself.
Yes, worms for Apache have been known to exist, such as the Slapper worm (Slapper actually exploited a known vulnerability in OpenSSL, not Apache). But Apache worms rarely make headlines because they have such a limited range of effect and are easily eradicated. Target sites were already plugging the known OpenSSL hole, and it was trivially easy to clean and restore infected sites with a few commands, without so much as a reboot, thanks to the modular nature of Linux and UNIX.
“Perhaps this is why, according to Netcraft, 47 of the top 50 web sites with the longest running uptime (times between reboots) run Apache” (Security Report: Windows vs Linux, The Register). None of the top 50 web sites runs Windows or Microsoft IIS. So if it is true that malicious hackers attack the most numerous software platforms, why are hackers so successful at breaking into the most popular desktop software and operating system, and at infecting 300,000 IIS servers, yet unable to do similar damage to the most popular web server and its operating systems?
Astute observers who examine the Netcraft web site URL will note that all 50 servers in the Netcraft uptime list are running a form of BSD, mostly BSD/OS. None of them are running Windows, and none of them are running Linux. The longest uptime in the top 50 is 1,768 consecutive days, or almost 5 years.
This appears to make BSD look superior to all operating systems in terms of reliability, but the Netcraft information is unintentionally misleading. Netcraft monitors the uptime of operating systems based on how those operating systems keep track of uptime. Linux, Solaris, HP-UX, and some versions of FreeBSD only record up to 497 days of uptime, after which their uptime counters are reset to zero and start again. So all web sites based on machines running Linux, Solaris, HP-UX and in some cases FreeBSD "appear" to reboot every 497 days even if they run for years. The Netcraft survey can never record a longer uptime than 497 days for any of these operating systems, even if they have been running for years without a reboot, which is why they never appear in the top 50.
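The 497-day ceiling is commonly attributed to a 32-bit tick counter that increments 100 times per second and wraps back to zero after 2^32 ticks. The quick arithmetic check below (an illustration, not taken from the Netcraft survey itself) shows where the figure comes from.

    # A 32-bit counter of 10 ms "ticks" (100 Hz) overflows after 2**32 ticks.
    ticks = 2 ** 32          # maximum count before the counter wraps to zero
    hz = 100                 # ticks per second on the affected systems
    seconds = ticks / hz
    days = seconds / 86400   # 86,400 seconds in a day
    print(round(days, 1))    # ~497.1 days, matching the observed rollover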
That may explain why it is impossible for Linux, Solaris and HP-UX to show up with as impressive numbers of consecutive days of uptime as BSD -- even if these operating systems actually run for years without a reboot. But it does not explain why Windows is nowhere to be found in the top 50 list. Windows does not reset its uptime counter. Obviously, no Windows-based web site has been able to run long enough without rebooting to rank among the top 50 for uptime.
Given the 497-rollover quirk, it is difficult to compare Linux uptimes vs. Windows uptimes from publicly available Netcraft data. Two data points are statistically insignificant, but they are somewhat telling, given that one of them concerns the Microsoft website. As of September 2004, the average uptime of the Windows web servers that run Microsoft's own web site (www.microsoft.com) is roughly 59 days. The maximum uptime for Windows Server 2003 at the same site is 111 days, and the minimum is 5 days. Compare this to www.linux.com (a sample site that runs on Linux), which has had both an average and maximum uptime of 348 days. Since the average uptime is exactly equal to the maximum uptime, either these servers reached 497 days of uptime and reset to zero 348 days ago, or these servers were first put on-line or rebooted 348 days ago.
The bottom line is that quality, not quantity, is the determining factor when evaluating the number of successful attacks against software.

2.3 Windows vs. Linux Design

It is possible that email and browser-based viruses, Trojans and worms are the source of the myth that Windows is attacked more often than Linux. Clearly there are more desktop installations of Windows than Linux. It is certainly possible, if not probable, that Windows desktop software is attacked more often because Windows dominates the desktop. But this leaves an important question unanswered. Do the attacks so often succeed on Windows because the attacks are so numerous, or because there are inherent design flaws and poor design decisions in Windows?
Many, if not most, of the viruses, Trojans, worms and other malware that infect Windows machines do so through vulnerabilities in Microsoft Outlook and Internet Explorer. To put the question another way: given the same type of desktop software on Linux (the most often used web browsers, email clients, word processors, etc.), are there as many security vulnerabilities on Linux as on Windows?

2.3.1 Windows Design

Viruses, Trojans and other malware make it onto Windows desktops for a number of reasons familiar to Windows and foreign to Linux:

1. Windows has only recently evolved from a single-user design to a multi-user model
2. Windows is monolithic, not modular, by design
3. Windows depends too heavily on an RPC model
4. Windows focuses on its familiar graphical desktop interface

2.3.1.1 Windows has only recently evolved from a single-user design to a multi-user model

Windows XP was the first version of Windows to reflect a serious effort to isolate users from the system, so that users each have their own private files and limited system privileges. This caused many legacy Windows applications to fail, because they were used to being able to access and modify programs and files that only an administrator should be able to access. That's why Windows XP includes a compatibility mode - a mode that allows programs to operate as if they were running in the original insecure single-user design. This is also why each new version of Windows threatens to break applications that ran on previous versions. As Microsoft is forced to hack Windows into behaving more like a multi-user system, the new restrictions break applications that are used to working without those restraints.
Windows XP represented progress, but even Windows XP could not justifiably be called a true multi-user system. For example, Windows XP supports what Microsoft calls "Fast User Switching", which means that two or more people can log into a Windows XP system on a single PC at the same time. Here's the catch: this is possible only if the PC is not set up to be part of a Windows network domain. That is because Microsoft networking was designed under the assumption that people who log into a network will do so from their own PC. Microsoft was either unable or unwilling to make the necessary changes to the operating system and network design to accommodate this scenario for Windows XP.
Windows Server 2003 makes some more progress toward true multi-user capabilities, but even Windows Server 2003 hasn't escaped all of the leftover single-user security holes. That's why Windows Server 2003 has to turn off many browser capabilities (such as ActiveX, scripting, etc.) by default. If Microsoft had redesigned these features to work in a safe, isolated manner within a true multi-user environment, these features would not present the severe risks that continue to plague Windows.

2.3.1.2 Windows is Monolithic by Design, not Modular

A monolithic system is one where most features are integrated into a single unit. The antithesis of a monolithic system is one where features are separated out into distinct layers, each layer having limited access to the other layers.
While some of the shortcomings of Windows are due to its ties to its original single-user design, other shortcomings are the direct result of deliberate design decisions, such as its monolithic design (integrating too many features into the core of the operating system). Microsoft made the Netscape browser irrelevant by integrating Internet Explorer so tightly into its operating system that it is almost impossible not to use IE. Like it or not, you invoke Internet Explorer when you use the Windows help system, Outlook, and many other Microsoft and third-party applications. Granted, it is in the best business interest of Microsoft to make it difficult to use anything but Internet Explorer. Microsoft successfully makes competing products irrelevant by integrating more and more of the services they provide into its operating system. But this approach creates a monster of inextricably interdependent services (which is, by definition, a monolithic system).
Interdependencies like these have two unfortunate cascading side effects. First, in a monolithic system, every flaw in a piece of that system is exposed through all of the services and applications that depend on that piece of the system. When Microsoft integrated Internet Explorer into the operating system, Microsoft created a system where any flaw in Internet Explorer could expose your Windows desktop to risks that go far beyond what you do with your browser. A single flaw in Internet Explorer is therefore exposed in countless other applications, many of which may use Internet Explorer in a way that is not obvious to the user, giving the user a false sense of security.
This architectural model has far deeper implications that most people may find difficult to grasp, one being that a monolithic system tends to make security vulnerabilities more critical than they need to be.
Perhaps an admittedly oversimplified visual analogy may help. Think of an ideally designed operating system as comprising three spheres: one in the center, a larger sphere that envelops the first, and a third sphere that envelops the inner two. The end user only sees the outermost sphere. This is the layer where you run applications, such as word processors. The word processors make use of commonly needed features provided by the second sphere, such as the ability to render graphical images or format text. This second sphere (usually referred to as "userland" by technical geeks) cannot access vulnerable parts of the system directly; it must request permission from the innermost sphere in order to do its work. The innermost sphere has the most important job, and therefore has the most direct access to all the vulnerable parts of your system. It controls your computer's disks, memory, and everything else. This sphere is called the "kernel", and is the heart of the operating system.
In the above architecture, a flaw in the graphics rendering routines cannot do global damage to your computer because the rendering functions do not have direct access to the most vulnerable system areas. So even if you can convince a user to load an image with an embedded virus into the word processor, the virus cannot damage anything except the user's own files, because the graphical rendering feature lies outside the innermost sphere, and does not have permission to access any of the critical system areas.
The problem with Windows is that it does not follow sensible design practices in separating out its features into the appropriate layers represented by the spheres described above. Windows puts far too many features into the core, central sphere, where the most damage can be done. For example, if one integrates the graphics rendering features into the innermost sphere (the kernel), it gives the graphical rendering feature the ability to damage the entire system. Thus, when someone finds a flaw in a graphics-rendering scheme, the overly integrated architecture of Windows makes it easy to exploit that flaw to take complete control of the system, or destroy the entire system.
Finally, a monolithic system is unstable by nature. When you design a system that has too many interdependencies, you introduce numerous risks when you change one piece of the system. One change may (and usually does) have a cascading effect on all of the services and applications that depend on that piece of the system. This is why Windows users cringe at the thought of applying patches and updates. Updates that fix one part of Windows often break other existing services and applications. Case in point: Windows XP Service Pack 2 already has a growing history of causing existing third-party applications to fail. This is the natural consequence of a monolithic system - any change to one part of the machine affects the whole machine, and all of the applications that depend on it.

2.3.1.3 Windows Depends Too Heavily on the RPC Model

RPC stands for Remote Procedure Call. Simply put, an RPC is what happens when one program sends a message over a network to tell another program to do something. For example, one program can use an RPC to tell another program to calculate the average cost of tea in China and return the answer. The reason it's called a remote procedure call is because it doesn't matter if the other program is running on the same machine, another machine in the next cube, or somewhere on the Internet.
RPCs are potential security risks because they are designed to let other computers somewhere on a network tell your computer what to do. Whenever someone discovers a flaw in an RPC-enabled program, there is the potential for someone with a network-connected computer to exploit the flaw in order to tell your computer what to do. Unfortunately, Windows users cannot disable RPC because Windows depends upon it, even if the computer is not connected to a network; many Windows services are simply designed that way. In some cases you can block an RPC port at your firewall, but Windows often depends so heavily on RPC mechanisms for basic functions that this is not always possible. Ironically, some of the most serious vulnerabilities in Windows Server 2003 are due to flaws in the Windows RPC functions themselves, rather than the applications that use them, although the most common way to exploit an RPC-related vulnerability is to attack the service that uses RPC, not RPC itself.
It is important to note that RPCs are not always necessary, which makes it all the more mysterious as to why Microsoft indiscriminately relies on them.
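As a concrete illustration of the idea (a deliberately simplified sketch using Python's standard xmlrpc modules, not a depiction of Microsoft's RPC implementation), one process below exposes a procedure and another invokes it over the network as though it were a local call; the port number is an arbitrary choice.

    from xmlrpc.server import SimpleXMLRPCServer
    import threading
    import xmlrpc.client

    def average_price(prices):
        # The procedure made available to remote callers.
        return sum(prices) / len(prices)

    # Server side: listen on the loopback interface and register the procedure.
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(average_price)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client side: call the remote procedure as if it were local.
    client = xmlrpc.client.ServerProxy("http://127.0.0.1:8000")
    print(client.average_price([3.0, 4.0, 5.0]))   # prints 4.0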

2.3.1.4 Windows focuses on its familiar graphical desktop interface

Microsoft considers its familiar Windows interface as the number one benefit for using Windows Server 2003. To quote from the Microsoft web site, "With its familiar Windows interface, Windows Server 2003 is easy to use. New streamlined wizards simplify the setup of specific server roles and routine server management tasks..."
By advocating this type of usage, Microsoft invites administrators to work with Windows Server 2003 at the server itself, logged in with Administrator privileges. This leaves the Windows administrator most vulnerable to security flaws, because using vulnerable programs such as Internet Explorer at the console exposes the server to security risks.

2.3.2 Linux Design

According to the Summer 2004 Evans Data Linux Developers Survey, 93% of Linux developers have experienced two or fewer incidents where a Linux machine was compromised. Eighty-seven percent had experienced only one such incident, and 78% have never had a cracker break into a Linux machine. In the few cases where intruders succeeded, the primary cause was inadequately configured security settings.
More relevant to this discussion, however, is the fact that 92% of those surveyed have never experienced a virus, Trojan, or other malware infection on Linux.
Viruses, Trojans and other malware rarely, if ever, manage to infect Linux systems, in part because:

1. Linux is based on a long history of well fleshed-out multi-user design
2. Linux is mostly modular by design
3. Linux does not depend upon RPC to function, and services are usually configured not to use RPC by default
4. Linux servers are ideal for headless non-local administration
There are variations in the default configurations of the different distributions of Linux, so what may be true of Red Hat Linux may not be true of Debian and there may be even more differences in SuSE. For the most part, all the major Linux distributions tend to follow sane guidelines in the default configurations.

2.3.2.1 Linux is based on a long history of well fleshed-out multi-user design

Linux does not have a history of being a single-user system. Therefore it has been designed from the ground-up to isolate users from applications, files and directories that affect the entire operating system. Each user is given a user directory where all of the user's data files and configuration files are stored. When a user runs an application, such as a word processor, that word processor runs with the restricted privileges of the user. It can only write to the user's own home directory. It cannot write to a system file or even to another user's directory unless the administrator explicitly gives the user permission to do so.
Given the default restrictions in the modular nature of Linux, it is nearly impossible to send an email to a Linux user that will infect the entire machine with a virus. It doesn't matter how poorly the email client is designed or how badly it may behave - it only has the privileges to infect or damage the user's own files. Linux browsers do not support inherently insecure objects such as ActiveX controls, but even if they did, a malicious ActiveX control would only run with the privileges of the user who is running the browser. Once again, the most damage it could do is infect or delete the user's own files.
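The sketch below shows this isolation in practice. It assumes it is run as an ordinary, unprivileged user on a typical Linux system; the file name in the home directory is a hypothetical example.

    import os

    # As an ordinary user, opening a root-owned system file for writing is
    # refused by the kernel's permission checks.
    try:
        with open("/etc/shadow", "a") as f:
            f.write("this line is never written\n")
    except PermissionError as err:
        print("system file write refused:", err)

    # Writing inside the user's own home directory succeeds.
    path = os.path.expanduser("~/fms_demo.txt")   # hypothetical file name
    with open(path, "w") as f:
        f.write("user-level write allowed\n")
    print("wrote", path)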
In sharp contrast, Windows was originally designed to allow all users and applications to have administrator access to every file on the system. Windows has only gradually been re-worked to isolate users and what they do from the rest of the system. Windows Server 2003 is close to achieving this goal, but the methodology Microsoft has employed to create this barrier between user and system is still largely composed of constantly changing hacks to the existing design, rather than a fundamental redesign with multi-user capability and security as the foundational concept behind the system.

2.3.2.2 Linux is Modular by Design, not Monolithic

Linux is for the most part a modularly designed operating system, from the kernel (the core "brains" of Linux) to the applications. Almost nothing in Linux is inextricably intertwined with anything else. There is no single browser engine used by help systems or email programs. Indeed, it is easy to configure most email programs to use a built-in browser engine to render HTML messages, or launch any browser you wish to view HTML documents or jump to links included in an email message. Therefore a flaw in one browser engine does not necessarily present a danger to any other application on the system, because few if any other applications besides the browser itself must depend on a single browser engine.
Not everything in Linux is modular. The two most popular graphical desktops, KDE and GNOME, are somewhat monolithic by design; at least enough so that an update to one part of GNOME or KDE can potentially break other parts of GNOME or KDE. Neither GNOME nor KDE are so monolithic, however, as to require you to use GNOME or KDE-specific applications. You can run GNOME applications or any other applications under KDE, and you can run KDE or any other applications under GNOME.
The Linux kernel supports modular drivers, but it is essentially a monolithic kernel where services in the kernel are interdependent. Any adverse impact of this monolithic approach is minimized by the fact that the Linux kernel is designed to be as minimal a part of the system as possible. Linux follows the following philosophy almost to a point of fanaticism: "Whenever a task can be done outside the kernel, it must be done outside the kernel." This means that almost every useful feature in Linux ("useful" as perceived by an end user) is a feature that does not have access to the vulnerable parts of a Linux system.
In contrast, bugs in graphics card drivers are a common cause of the Windows blue-screen-of-death. That's because Windows integrates graphics into the kernel, where a bug can cause a system failure. With only a few proprietary exceptions (such as the third-party NVidia graphics driver), Linux forces all graphics drivers to run outside the kernel. A bug in a graphics driver may cause the graphical desktop to fail, but not cause the entire system to fail. If this happens, one simply restarts the graphical desktop. One does not need to reboot the computer.

2.3.2.3 Linux is Not Constrained by an RPC Model

As stated above in the section on Windows, RPC stands for Remote Procedure Call: an RPC allows one program to tell another program to do something, even if that other program resides on the same machine, on another machine in the next cube, or somewhere on the Internet.
Most Linux distributions install programs with network access turned off by default. For example, the MySQL SQL database server is usually installed such that it does not listen to the network for instructions. If you build a web site using Apache and MySQL on the same server machine, then Apache will interact with MySQL without MySQL having to listen to the network. Contrast this to SQL Server, which listens to the network whether or not it is necessary to do so. If you want MySQL to listen to the network, you must turn on that feature manually, and then explicitly define the users and machines allowed to access MySQL.
Even when Linux applications use the network by default, they are most often configured to respond only to the local machine and ignore any requests from other machines on the network.
Unlike Windows Server 2003, you can disable virtually all network-related RPC services on a Linux machine and still have a perfectly functional desktop.
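The same local-only behaviour can be seen at the socket level. The sketch below contrasts a listener bound to the loopback address (reachable only from processes on the same machine, which is how most Linux distributions configure services such as MySQL by default) with one bound to all interfaces (reachable from the network); the port number is an arbitrary example.

    import socket

    LOCAL_ONLY = True   # flip to False to expose the listener to the network

    # "127.0.0.1" accepts connections only from this machine;
    # "0.0.0.0" accepts connections from any host that can reach it.
    address = "127.0.0.1" if LOCAL_ONLY else "0.0.0.0"

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((address, 12345))   # 12345 is an arbitrary unprivileged port
    listener.listen(5)
    print("listening on %s:12345 (%s)" % (
        address,
        "local processes only" if LOCAL_ONLY else "network-reachable"))
    listener.close()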

2.3.2.4 Linux servers are ideal for headless non-local administration

A Linux server can be installed, and often should be installed as a "headless" system (no monitor is connected) and administered remotely. This is often the ideal type of installation for servers because a remotely administered server is not exposed to the same risks as a locally administered server.
For example, you can log into your desktop computer as a normal user with restricted privileges and administer the Linux server through a browser-based administration interface. Even the most critical browser-based security vulnerability affects only your local user-level account on the desktop, leaving the server untouched by the security hole.
This may be one of the most important differentiating factors between Linux and Windows, because it virtually negates most of the critical security vulnerabilities that are common to both Linux and Windows systems, such as the vulnerabilities of the Mozilla browser vs. the Internet Explorer browser.

2.4 Realistic Security and Severity Metrics

One needs to examine many metrics in order to evaluate properly the risks involved in adopting one operating system over another for any given task. Metrics are sometimes cumulative; at other times they offset each other.
There are three very important metrics, represented as risk factors, which have a profound effect on one another. The combination of the three can have a dramatic impact on the overall severity of a security flaw. These three risk factors are damage potential, exploitation potential, and exposure potential.

2.4.1 Elements of an Overall Severity Metric

Damage potential of any given discovered security vulnerability is a measurement of the potential harm done. A vulnerability that exposes all your administrator passwords has a high damage potential. A flaw that makes your screen flicker would have a much lower damage potential, raised only if that particular damage is difficult to repair.
Exploitation potential describes how easy or difficult it is to exploit the vulnerability. Does it require expert programming skills to exploit this flaw, or can almost anyone with rudimentary computer experience use it for mischief?
Exposure potential describes the amount of access necessary to exploit a given vulnerability. If any hotshot hacker (commonly referred to as a "script kiddie") on the Internet can exploit a flaw on a server you have protected by a firewall, that flaw has a very high exposure potential. If the flaw can only be exploited by an employee within the company with a valid login ID, using a computer inside the company building, the exposure potential of that flaw is significantly less severe.
Overall Severity Metric and Interaction Between the Three Key Metrics
One or more of these risk factors can have a profound effect on the overall severity of a security hole. Assume for a moment that you are the CIO of a business based on a web eCommerce site. Your security analyst informs you that someone has found a flaw in the operating system your servers are running, and that a malicious hacker could exploit this flaw to erase every disk on every server on which the company depends.
The damage potential of this flaw is catastrophic.
Worse, he adds that it is trivially easy from a technical perspective to exploit this flaw. The exploitation potential is critical.
Time to press the panic button, right? Now suppose he then adds this vital bit of information: someone can only exploit this flaw with a key to the server room, because this particular security vulnerability requires physical access to the machines. This one key metric, if you'll pardon the pun, makes a dramatic difference in the overall severity of the risk associated with this particular flaw. The extremely low exposure potential shifts the needle on the severity meter from "panic" to "eminently manageable".
Conversely, another security vulnerability might be exposed to every script kiddie on the Internet, but still be considered of negligible severity because the damage potential for this flaw is inconsequential.
Perhaps you can begin to appreciate why it is misleading, if not outright irresponsible, to measure security based on a single metric like the number of security alerts. At the very least, one must also consider these three risk factors. Would you rather rely on an operating system with a history of hundreds of flaws of negligible severity, or one with a history of dozens of flaws of catastrophic severity? Unless you factor the overall severity of the flaws into the evaluation, the number of flaws is irrelevant at best, misleading at worst.
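The interaction is described above qualitatively. As one hypothetical way of making it concrete (the 1-to-5 scale and the multiplicative rule are assumptions of this sketch, not part of any published metric), the three risk factors can be scored and combined so that a very low value on any one of them pulls the overall severity down sharply.

    # Hypothetical scoring: each factor is rated 1 (negligible) to 5 (critical).
    def overall_severity(damage, exploitation, exposure):
        # Combine the three risk factors multiplicatively (maximum 125).
        return damage * exploitation * exposure

    # The CIO example above: a catastrophic, trivially exploited flaw that
    # nevertheless requires physical access to the server room.
    print(overall_severity(damage=5, exploitation=5, exposure=1))   # 25

    # The same flaw exposed to any anonymous attacker on the Internet.
    print(overall_severity(damage=5, exploitation=5, exposure=5))   # 125

    # A flaw every script kiddie can reach, but whose damage is inconsequential.
    print(overall_severity(damage=1, exploitation=5, exposure=5))   # 25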
Applying The Overall Severity Metric
Once you can evaluate the overall severity of any given flaw, you can begin to add meaning to metrics such as "how many security alerts does Windows have vs. Linux", or "how long does one have to wait for a fix after a flaw is discovered when using Windows vs. Linux".
Suppose one operating system has far more security alerts than another. The only reason that metric may have meaning is if it also has more security alerts that point to flaws with a high overall severity level. It is one thing to be plagued on a regular basis by a myriad of minor low-risk annoyances, quite another to be plagued on a regular basis by only a few flaws that put your entire company at risk.
Suppose one operating system has a better record for time to delivery of a fix once a flaw is discovered. Once again, the only reason this metric may have meaning is if the delays are related to flaws with a high overall severity level. It is one thing to wait months for a fix to an exploit that would cause little or no damage on a few computers. It is quite another to wait months for a fix for a flaw that puts your entire company at risk.

2.4.2 Means Of Evaluating Metrics

2.4.3 Exposure Potential

This metric takes into account the measures one must take to access a machine in order to exploit security vulnerabilities. This typically falls into one of the following categories. The actual order of some of these categories can vary in practice, but this should prove to be a useful guideline. It should also be noted that there are several unusual complexities not listed here. For example, a patched flaw in Windows Server 2003 was not itself a serious exposure, but it allowed a malicious hacker to open the system to serious exposure. In short, it was a single step in a chain of exposure vulnerabilities. Given that these are roughly defined categories, they are listed in terms of severity, ordered from least to greatest.
You need physical access to the machine, but not a valid user login account.
You need physical access to the machine and must have a valid user login account.
You need a valid user login account, but do not need physical access to the target machine. Local network access (from inside the company network) is sufficient.
You need a valid user login account, but do not need physical access to the target machine. The target machine is accessible via the Internet from a remote location.
You can exploit a flaw remotely from the Internet without a valid login account for the target machine, but you cannot reach the flaw directly. Another barrier is in place, such as a router or firewall. This category is difficult to place in the correct order of severity, since a well-configured firewall may provide 100% protection, but not always. A poorly configured firewall may not present a barrier at all.
You can exploit a flaw remotely from the Internet without a valid login account for the target machine, but you cannot reach the flaw directly. Another less intrusive barrier is in place. This barrier may be another program (for example, the flaw is in Microsoft SQL Server, but must be exploited by embedding an ActiveX control or Javascript within a web page accessed by Microsoft Internet Information Server). In some cases, you must entice the user into an action in order to gain indirect access. For example, you must send a user an email that directs them to a web page that includes the malicious control or code. To use another common practice, the user is enticed to open an attachment to an email. The severity of this category varies depending on how cleverly the enticement is disguised as an innocent action.
You can exploit a flaw remotely from the Internet without a valid login account for the target machine, but you cannot reach the flaw directly. Nevertheless, the flaw is exploited indirectly but automatically. For example, a flaw in the Windows operating system is exploited immediately and automatically as soon as a user opens an email message in Outlook.
You can exploit a flaw remotely from the Internet simply by sending information directly to the target machine via the network. For example, one might be able to exploit a Denial Of Service (DoS) vulnerability simply by sending special network packets to a target web site, rendering that web site unavailable to other Internet users.

2.4.4 Exploitation Potential

This metric takes into account the technical difficulty involved in exploiting a security flaw. This typically falls into one of the following categories, in terms of severity, ordered from least to greatest (the actual order of some of these categories can vary in practice, but this should prove to be a useful guideline):
The flaw exists but it has not yet been discovered. This flaw either requires infinite knowledge or a lucky accident to exploit.
The flaw requires expert programming skills and profound knowledge of the operating system, but its existence is not known well enough that many such attackers would be likely to exploit it.
The flaw is known by and requires attackers with expert programming skills and profound understanding of how the target software and operating system works in order to exploit.
The flaw requires expert programming skills, but someone has already created a virus, Trojan, or worm as a foundation. The programmer must only modify the code in order to exploit a new flaw, or modify the code in order to make the virus more dangerous.
The flaw required expert programming skills to create, but the code is available and it requires only mediocre programming skills to improve or modify the code in order to exploit the existing flaw, or future flaws.
The flaw requires only mediocre or novice programming skills, or rudimentary computer knowledge to exploit.
It is irrelevant how difficult it is to exploit the flaw, because someone has already done the hard work of devising the means of exploiting it and made an intrusion kit publicly available for use by novices.
Anyone can exploit the flaw simply by typing simple text at a command line or pointing a browser to a URL.

2.4.5 Damage Potential

This metric is the most difficult to quantify. It requires at least two separate sets of categories. First, it takes into account how much damage potential a flaw presents to an application or the computer system. Second, the damage potential must be measured in terms of "what it means" to the company affected. For example, there is a single metric where a flaw allows an attacker to read unpublished web pages. That flaw is relatively minor if no sensitive information is present in the system. However, if an unpublished web page contains sensitive information such as credit card numbers, the overall damage potential is quite high even though the technical damage potential is minimal. Here are the most important factors in estimating technical damage potential for any given flaw, in order of severity from least to worst:
The flaw affects only the performance of another computer, but not significantly enough to make the computer stop responding.
The flaw only affects the attacker's own programs or files, but not the files or programs of other users.
The flaw exposes the information in co-worker's files, but not information from the administrator account or information in any system files.
The flaw allows an attacker to examine, change or delete a user's files. It does not allow the attacker to examine, change or delete administrator or system files.
The flaw allows an attacker to view sensitive information, whether by examining network traffic or by getting read-only access to administrator or system files.
The flaw allows an attacker to gain some but not all administrator-level privileges, perhaps within a restricted environment.
The flaw allows an attacker to either crash the system or otherwise cause the system to stop responding to normal requests. This is typically a Denial Of Service (DoS) attack. However, the attacker cannot actually gain control of the computer aside from stopping it from responding.
The flaw allows an attacker to change or delete all privileged files and information. The attacker can gain complete control of the target system and do virtually any amount of damage that a fully authorized system administrator can do.

2.5 Overall Severity Risk

Given the above three factors, the overall severity risks range from minimal to catastrophic. It would be impossible to consider all the permutations, but a few examples may prove useful. These examples are based on the damage potential categories, combined with assorted selections from exposure and exploitation potential.
If an anonymous hacker on the Internet can degrade your company's system performance, this can range from a minor annoyance to a devastating financial impact, depending on how critical system performance may be to the mission of your company.
Attacking your own account is silly, but self-destructive behavior can cause needless restoration work by the IT department.
The potential severity of viewing another user's files is minimal if you can only view the files of a co-worker in the same building, even if this flaw is trivially easy to exploit. The severity is increased if the co-worker's files contain sensitive information, and decreased the more likely the attacker may be to get caught. On the other hand, if any anonymous malicious hacker on the Internet (high exposure potential) can view sensitive files of a user within your company, the overall severity is dramatically more serious.
Again, if the flaw allows an attacker to change or delete the files of a co-worker in the same building, the severity is minimized by how well the company performs backups, and how easily the attacker will get caught. If the attacker can change files on a remote computer's user account, the severity varies with the importance of that user account and the service it provides. For example, the severity may range from the embarrassment of having your web pages defaced to having your web pages deleted entirely.

2.5.1 A Comparison of 40 Recent Security Patches

The following sections document the 40 most recent patches to security vulnerabilities in Windows Server 2003 (arguably the most secure version of Windows) and Linux Red Hat Enterprise AS v.3 (arguably the competitive equivalent of Windows Server 2003). The data for the Windows Server 2003 patches and vulnerabilities was taken directly from the Microsoft web site, and the data for Red Hat Enterprise AS v.3 was taken from the Red Hat web site.
Patches and Vulnerabilities Affecting Microsoft Windows Server 2003
Microsoft marks fifteen of the 40 vulnerabilities as Critical. That means by Microsoft's own subjective analysis, 38% of the most recent problems reported and patched are of Critical severity, the highest rating possible.
Patches and Vulnerabilities Affecting Red Hat Enterprise Linux AS v.3
Of the 40 vulnerabilities, only 4 are rated as Critical on the Red Hat website. That means 10% of the most recent 40 updates are of Critical severity.

2.5.2 CERT Vulnerability Notes Database Results

The United States Computer Emergency Readiness Team (CERT) uses its own set of metrics to evaluate the severity of any given security flaw. A number between 0 and 180 expresses the final metric, where the number 180 represents the most serious vulnerability. The ranking is not linear. In other words, a vulnerability ranked 100 is not twice as serious as a vulnerability ranked at 50.
CERT considers any vulnerability with a score of 40 or higher to be serious enough to be a candidate for a special CERT Advisory and US-CERT technical alert.
We queried the CERT database using the search terms "Microsoft", "Red Hat", and "Linux". [www.theregister.co.uk] The way these search terms match entries skews the results somewhat unfairly against Linux and Red Hat. Nevertheless, even if one takes the results at face value and ignores this skew, Microsoft still produces the most entries in the CERT database, and its list of entries contains the most severe flaws.
The CERT results for "Microsoft" returned 250 entries, with the top two entries containing the severity metric of 94.5. Thirty-nine entries have a severity rating of 40 or greater. The average severity rating for the top 40 entries is 54.67. (We chose to average 40 entries instead of 50 or more because the Red Hat search only returned 49 results.)
The CERT results for "Red Hat" returned 46 entries. The top entry has a severity metric of 108.16. Only 3 (vs. 39 for Microsoft) entries have a metric of 40 or greater. The average severity for the top 40 entries is 17.96.
The CERT results for the "Linux" search returned 100 entries. The top entry has a severity metric of 87.72. Only 6 of the entries carry a severity metric of 40 or greater. The average severity for the top 40 entries is 28.48.
These results cannot be expected to mirror our own analysis of recent vulnerability patches. The CERT search criteria and date ordering is different, and the CERT search does not confine the products to Windows Server 2003 and Red Hat Enterprise Linux AS v.3. But the CERT results reflect how Windows security flaws tend to be far more frequently severe than those of Linux, which echoes my conclusions.

2.5.3 Recommendations

2.5.3.1 Handling Competing Platform Requirements

Here’s how customers should balance security with other pressing concerns:

If you want security updates as quickly as possible, think Debian or Microsoft.
Unix-leaning firms that want maximum security control and aren’t afraid of command-line interfaces and community support should plump for Debian because of its low “distribution days of risk.” Firms who find Windows a better fit for their environments shouldn’t fly off the handle about every Microsoft security incident.
Instead, those firms should:

1) Focus on requiring every new deployment of Windows to conform to one of a few security-validated configurations, and

2) Monitor Microsoft’s new monthly security release policy closely to make sure that it doesn’t cause an unacceptable increase in overall “days of risk.”

To maximize security and operator ease, look at Microsoft or Red Hat. If you:
1) Lack the manpower to validate and test security patches yourself, and

2) Can’t afford to subscribe to a vulnerability management service from vendors like SecureInfo or TruSecure, then you should subscribe all your machines directly to your vendor’s auto-update service. Microsoft and Red Hat successfully handle large percentages of their customers through auto-update services, which can automatically download and apply security updates. MandrakeSoft and SUSE also offer fully automatic update services.

3.0 Research Topic 2 – Designing GUI interfaces on Linux

3.1 Introduction

Linux needs a standard GUI API. It's not that all applications must end up looking and even acting alike as in Windows, but they should be consistent in certain areas; for example, a consistent desktop, consistent help system, cut and paste, drag-and-drop and so forth.

The fragmentation of development energy into too many GUI toolkits is one of the most serious problems facing the Linux community today. There is some recognition of the magnitude of the problem but nobody can agree on which GUI toolkits to use. A good example is the Gnome and KDE desktop projects; Gnome uses GTK, and KDE uses QT.

This document seeks to explore the various development tools or techniques for developing a graphical user interface application. It then seeks to analyze possibilities of using one of the techniques explored to solve the current problem as stated in the requirements.

3.2 A brief look at several popular toolkits available for Linux.

Linux is now a reality--it is no longer just a hacker tool or a toy for students. As such, it needs the power of console programs ported to the graphical world. People want easy-to-use desktops with good-looking programs. This is why many programmers now turn to graphical toolkits.
While many toolkits are available, they all share some basic features. Since programming languages do not include built-in functions to make graphical widgets, you must use add-ons. Graphical toolkits are actually libraries which add functions to a programming language, allowing you to integrate a graphical interface to your program.
The main differences found between various toolkits are ease of use, graphical appeal, cross-platform portability and language. If, for example, you are experienced in the Tcl scripting language, you will probably want to use the Tk graphical toolkit. If you like Perl, you may pick Perl/GTK (Patrick Lambert, 99).

3.3 GTK, the Gimp Toolkit

The GTK toolkit seems to be one of the most popular. It is modern and easy to use. That library was made in C as the base for the GIMP, an image manipulation program. Now it is used by programmers around the world for all kinds of applications, including the GNOME desktop environment. The graphical interface looks clean and much like interfaces on other operating systems.
GTK is a good toolkit for writing applications in C, since it is a C library. The GTK toolkit is built on top of the GDK library, which is on top of GLIB. All three provide unique functions to programmers. Available functions include memory handling, graphical components and widgets. GNOME also has its own extensions. GTK and GNOME are freely available software products.

3.4 QT from Troll Tech

The QT toolkit was created by Troll Tech, a software company in Norway, and is used in the KDE desktop environment. It is written in C++ and used by programmers worldwide. The QT library began as a commercial product, but now Troll Tech has released a free version under an open license. Similar to GTK, it has the same kind of widgets, including labels, entry boxes and text fields. QT would be a good choice if you write applications in C++. The QT library is cross-platform, and the graphical interface of programs using it will compile without changes in both UNIX and Microsoft Windows.
The QT widgets look very much like GTK's widgets, as well as those of other operating systems.

3.5 wxWindows

wxWindows (w for Windows, x for X Window System) was created at the University of Edinburgh as a cross-platform toolkit. wxWindows is a C++ framework that allows you to write graphical applications. You can write your code once, then compile it under one of the many ports of the library. It currently runs under Microsoft Windows, Macintosh OS, Motif under UNIX and GTK. There is one library per platform, all providing a common API. wxWindows is a free product, under a license similar to the L-GPL. Using it, you can write both commercial and free products.

3.6 GraphApp, Platform-Independent GUI Programming in C

GraphApp is a C library that allows you to write simple graphical applications in C. It is a cross-platform toolkit, and will work on the Macintosh OS, Microsoft Windows, Motif under UNIX and Athena. GraphApp supports a more limited number of widgets, but is truly easy to learn.
A nice thing about GraphApp is that it compiles as a small static library. This means you can compile your programs with the library linked in them without increasing the size of the binary much, and the user will be able to run it without installing the toolkit.

3.7 Motif, the Standard

Motif has been the standard graphical toolkit for years on UNIX and other platforms. It is a commercial standard and has its own look. Motif is the base for the popular CDE desktop environment, also a standard on many commercial UNIX systems.
On Linux and other open systems, developers have made a free Motif clone called LessTif. LessTif is source compatible with Motif and available under the L-GPL. Motif and LessTif offer cross-platform compatibility among UNIX systems. While Motif code will not work on most non-UNIX systems, many commercial UNIX systems come with Motif libraries. Also, Motif has the advantage of having passed the test of time.
Most programmers are concerned about two things: graphical look and portability. GTK and QT are probably used the most in the Linux world, mainly because of the GNOME and KDE desktop environments. Users want a desktop that will provide all utilities using the same graphical look.
A fundamental reality of application development is that the user interface is the system to the users. What users want is for developers to build applications that meet their needs and that are easy to use. Too many developers think that they are artistic geniuses – they do not bother to follow user interface design standards or invest the effort to make their applications usable, instead they mistakenly believe that the important thing is to make the code clever or to use a really interesting color scheme.
“The reality is that a good user interface allows people who understand the problem domain to work with the application without having to read the manuals or receive training.” (Constantine and Lockwood)
User interface design is important for several reasons. First of all, the more intuitive the user interface, the easier it is to use; and the easier it is to use, the less expensive it is to use. The better the user interface, the easier it is to train people to use it, reducing your training costs. The better the user interface, the less help people will need to use it, reducing support costs. The better the user interface, the more users will like to use it, increasing their satisfaction with the work that you have done. The following sections discuss: 1. Good Interface Designing Tips and Techniques 2. User Interface Design Principles 3. Concluding Remarks

3.8.1 Good Interface Designing Tips and Techniques

1. Be consistent. If you can double-click on items in one list and have something happen, then you should be able to double-click on items in any other list and have the same sort of thing happen. Put your buttons in consistent places on all your windows, use the same wording in labels and messages, and use a consistent color scheme throughout. Consistency in your user interface enables your users to build an accurate mental model of the way it works, and accurate mental models lead to lower training and support costs.

2. Set standards and stick to them. “The only way one can ensure consistency within one’s application is to set user interface design standards, and then stick to them. An example is the Agile Modeling (AM)’s Apply Modeling Standards for all aspects of software development, including user interface design” (Scott W Ambler, 2003).

3. Be prepared to hold the line. When one is developing the user interface for a system, one will discover that stakeholders often have some unusual ideas as to how the user interface should be developed. One should definitely listen to these ideas, but also make the stakeholders aware of corporate UI standards and the need to conform to them.

4. Explain the rules. Users need to know how to work with the application built for them. When an application works consistently, it means the rules have to be explained only once. This is a lot easier than explaining in detail exactly how to use each feature in an application step-by-step.

5. Navigation between major user interface items is important. If it is difficult to get from one screen to another, then your users will quickly become frustrated and give up. When the flow between screens matches the flow of the work the user is trying to accomplish, then your application will make sense to your users. Because different users work in different ways, your system needs to be flexible enough to support their various approaches. User interface-flow diagrams should optionally be developed to further your understanding of the flow of your user interface.

6. Navigation within a screen is important. In Western societies, people read left to right and top to bottom. Because people are used to this, you should design screens that are also organized left to right and top to bottom when designing a user interface for people from this culture. You want to organize navigation between widgets on your screen in a manner users will find familiar.

7. Word your messages and labels effectively. The text you display on your screens is a primary source of information for your users. If your text is worded poorly, then your interface will be perceived poorly by your users. Using full words and sentences, as opposed to abbreviations and codes, makes your text easier to understand. Your messages should be worded positively, imply that the user is in control, and provide insight into how to use the application properly. For example, which message do you find more appealing: “You have input the wrong information” or “An account number should be eight digits in length”? Furthermore, your messages should be worded consistently and displayed in a consistent place on the screen. Although the messages “The person’s first name must be input” and “An account number should be input” are separately worded well, together they are inconsistent. In light of the first message, a better wording of the second message would be “The account number must be input” to make the two messages consistent.

8. Understand the UI widgets. You should use the right widget for the right task, helping to increase the consistency in your application and probably making it easier to build the application in the first place. The only way you can learn how to use widgets properly is to read and understand the user-interface standards and guidelines your organization has adopted.

9. Look at other applications with a grain of salt. Unless you know another application has been verified to follow the user interface standards and guidelines of your organization, don’t assume the application is doing things right. Although looking at the work of others to get ideas is always a good idea, until you know how to distinguish between good user interface design and bad user interface design, you must be careful. Too many developers make the mistake of imitating the user interface of poorly designed software.

10. Use color appropriately. Color should be used sparingly in your applications and, if you do use it, you must also use a secondary indicator. The problem is that some of your users may be color blind, and if you are using color to highlight something on a screen, then you need to do something else to make it stand out if you want these people to notice it. You also want to use colors in your application consistently, so you have a common look and feel throughout your application.

11. Follow the contrast rule. If you are going to use color in your application, you need to ensure that your screens are still readable. The best way to do this is to follow the contrast rule: use dark text on light backgrounds and light text on dark backgrounds. Reading blue text on a white background is easy, but reading blue text on a red background is difficult. The problem is not enough contrast exists between blue and red to make it easy to read, whereas there is a lot of contrast between blue and white.

12. Align fields effectively. When a screen has more than one editing field, you want to organize the fields in a way that is both visually appealing and efficient. I have always found the best way to do so is to left-justify edit fields: in other words, make the left-hand side of each edit field line up in a straight line, one over the other. The corresponding labels should be right-justified and placed immediately beside the field. This is a clean and efficient way to organize the fields on a screen.

13. Expect your users to make mistakes. How many times have you accidentally deleted some text in one of your files, or deleted the file itself? Were you able to recover from these mistakes, or were you forced to redo hours, or even days, of work? The reality is that to err is human, so you should design your user interface to recover from mistakes made by your users.

14. Justify data appropriately. For columns of data, common practice is to right-justify integers, decimal-align floating-point numbers, and left-justify strings.

15. Your design should be intuitable. In other words, if your users don’t know how to use your software, they should be able to determine how to use it by making educated guesses. Even when the guesses are wrong, your system should provide reasonable results from which your users can readily understand and ideally learn.

16. Don’t create busy user interfaces. Crowded screens are difficult to understand and, hence, are difficult to use. Experimental results show that the overall density of the screen should not exceed 40 percent, whereas local density within groupings should not exceed 62 percent.

17. Group things effectively. Items that are logically connected should be grouped together on the screen to communicate that they are connected, whereas items that have nothing to do with each other should be separated. You can use white space between collections of items to group them and/or you can put boxes around them to accomplish the same thing.

18. Take an evolutionary approach. Techniques such as user interface prototyping and Agile Model Driven Development (AMDD) are critical to your success as a developer.

3.8.2 UI Design Principles

Guiding principles are:

1. The structure principle. Your design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with your overall user interface architecture (Software for Use, L. Constantine, nd).

2. The simplicity principle. Your design should make simple, common tasks simple to do, communicating clearly and simply in the user’s own language, and providing good shortcuts that are meaningfully related to longer procedures.

3. The visibility principle. Your design should keep all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs don’t overwhelm users with too many alternatives or confuse them with unneeded information.

4. The feedback principle. Your design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and unambiguous language familiar to users.

5. The tolerance principle. Your design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions reasonably.

6. The reuse principle. Your design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember.

3.9.1 A Beginner's Guide to Using pyGTK and Glade

It seems pyGTK and Glade have opened up cross-platform, professional-quality GUI development. Not only does pyGTK allow neophytes to create great GUIs, it also allows professionals to create flexible, dynamic and powerful user interfaces faster than ever before, making it possible to create a quick user interface that looks good without a lot of work, even without any GUI experience (http://www.linuxjournal.com/node/6586/print, Accessed April 17 2007).

3.9.2 The Cross-Platform Nature of pyGTK

In a perfect world, you never would have to develop for anything but Linux running your favorite distribution. In the real world, you need to support several versions of Linux, Windows, UNIX or whatever else your customers need. Choosing a GUI toolkit depends on what is well supported on your customers' platforms. Nowadays, choosing Python as your development tool in any new endeavor is second nature if speed of development is more of a requirement than runtime speed. This combination leads you to choose from the following alternatives for Python GUI development: wxPython, Tkinter, pyGTK and Python/Qt.
Keeping in mind that I am not a professional GUI developer, here are my reasons why one should choose pyGTK. wxPython has come a long way and offers attractive interfaces, but it is hard to use and get working, especially for a beginner; it also requires both Linux and Windows users to download and install a large binary package. Qt, although free for Linux, requires a license to be distributed for Windows. This is probably prohibitive for many small companies who want to distribute on multiple platforms.
Tkinter is the first Python GUI development kit and is available with almost every Python distribution. It looks ugly, though, and requires you to embed Tk into your Python applications, which feels like going backward. For a beginner, you really want to split the GUI from the application as much as possible. That way, when you edit the GUI, you don't have to change a bunch of things in your application or integrate any changes into your application.

3.9.3 Shell Scripting with KDE dialogs

There are some misconceptions that KDE is only a graphical environment. While it is true that KDE is an outstanding desktop environment, the Unix heritage of command line and scripting is also well supported by KDE. In particular, KDE applications can be controlled from the command line, and shell scripts can make use of some of the KDE widget set (Brad Hards, Sigma Bravo, nd)
To use this tutorial, you'll need to have some basic familiarity with command line fundamentals, and be at least aware of shell scripting. Like any other programming environment, effective shell scripting requires solid knowledge of the environment; however, you should be able to make sense of at least the examples with only a basic understanding. The downside is that if you are very familiar with shell scripting, some of the explanation is likely to be redundant.
This tutorial assumes that you are using the GNU bash shell, or something directly compatible. Users of other shells (especially csh and variants) may need to modify the examples to make them work.
Shell scripting techniques and usage vary a lot. Sometimes a script is only ever meant to be run by the system (e.g. as a cron job); at other times scripts are really applications intended to be run by users. KDE includes features that allow you to use some of the KDE functionality from a shell script, which can save you some work and can also make your script feel like part of a nicely integrated application set.
As an example, consider something like a password dialog. If you need the user to enter a password, you can easily generate a dialog from your script that looks like the following.
[pic]
Figure 1. Password dialog example
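For illustration, a minimal sketch of how such a password dialog can be produced from a bash script is shown below. It assumes the kdialog utility supplied with KDE is installed and that its --password option is available; the prompt wording is an arbitrary example, not taken from the project code.

    #!/bin/bash
    # Ask for a password via a KDE dialog. kdialog prints the entered
    # text on standard output; the exit status indicates OK (0) or Cancel.
    password=$(kdialog --password "Please enter the server password:")
    if [ $? -eq 0 ]; then
        echo "A password was entered."
    else
        echo "The dialog was cancelled." >&2
    fi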

3.9.4 File selection dialogs

This section covers dialogs to select files to open and save. These dialogs access the power of the underlying KDE dialogs, including advanced filtering techniques and can provide either paths or URLs.
The dialog to select a file to open is invoked with --getopenfilename or --getopenurl. These two commands are used in the same way - only the format of the result changes, so every example shown here can be applied for either format. You have to specify a starting directory, and can optionally provide a filter. Here is a simple example that doesn't provide any filtering, and accesses the current directory:
Example 28. --getopenfilename dialog box
[pic]
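A small sketch of how the result of such a dialog might be captured in a script is shown below, assuming kdialog is available; it follows the simple case described above with no filter and the current directory as the starting point.

    #!/bin/bash
    # Open a KDE file-selection dialog starting in the current directory.
    # The chosen path is printed on standard output if the user presses OK.
    file=$(kdialog --getopenfilename .)
    if [ $? -eq 0 ]; then
        echo "You selected: $file"
    else
        echo "No file was selected." >&2
    fi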

Zend Studio

The most comprehensive IDE for all your PHP needs!
The only Integrated Development Environment (IDE) available for professional developers that encompasses all the development components necessary for the full PHP application lifecycle.
Through a comprehensive set of editing, debugging, analysis, optimization and database tools, Zend Studio 5.5 speeds development cycles and simplifies complex projects.

Why develop with Zend Studio?
THE MOST POWERFUL PHP DEVELOPMENT ENVIRONMENT
• Increase productivity with a proven professional development environment that includes advanced PHP 5.2 support, Code Analyzer, Nested Code Completion, Syntax Highlighting, Project Manager, Code Editor, Graphical Debugger and numerous wizards.

• Code Folding boosts productivity by easily folding classes, functions, PHPdoc blocks and non-PHP code.

• Make documenting your code, applications, and projects a breeze with PHPDocumentor, the standard documentation tool for PHP. Automatically add PHPDoc comments to files, classes, functions, constants and more, all through PHPDoc wizards.

• Quickly navigate to any PHP resource in the project with filtering by classes, functions, and constants using the Go-to-PHP Resource Utility.

• Facilitate team development and team collaboration by effectively managing your source code with a tight CVS integration that lets you perform CVS operations directly from within Zend Studio 5.

• Get a choice of version management systems. Zend Studio 5 supports the popular Subversion source control.

• Securely browse your FTP connection using SSL with Implicit and Explicit methods.

• Simplify deployment with FTP, SFTP and FTP over SSL, allowing developers to securely upload and download project files transparently to and from remote servers.

SUPERIOR BUSINESS APPLICATION DEVELOPMENT
• Create tighter application integration with superior Web Services Support to easily generate WSDL files directly from your PHP source code and existing WSDL files for Code Completion Integration and Inspection View.

• Connect directly to the most widely used professional databases such as IBM DB2/Cloudscape/Derby, MySQL, Oracle, Microsoft SQL Server, PostgreSQL, and SQLite.

• Write and execute queries on connected servers using Zend SQL Query Editor with SQL92 and syntax highlighting support.

• View database structure and manage content with Zend SQL Explorer

How PHP Works

PHP stands for Personal HomePage (or PHP Hypertext Preprocessor, depending on your sources), a server side scripting language. PHP is an open source project available for use on most common web servers (Apache, IIS, etc.). PHP's official homepage is http://www.php.net. Because it is server side, PHP is never visible to end-users. Viewers of PHP pages will only be aware of PHP in a URL file extension of .php. PHP operates on a web server and essentially interprets server side scripting. This means that requests to a web server from a browser are received, the server selects the appropriate page, and then PHP prepares the HTML to be returned to the browser. At the stage where PHP prepares HTML, the PHP scripting language allows a web server to dynamically render an HTML page based on programmers' instructions. The major advantage over HTML offered by this configuration is that while HTML is static and must be changed manually, PHP allows for dynamic changes to .php pages prepared by a server. This would allow a web programmer to prepare a page that varied appearance based on the day of the week, time of day, or current data in a remote database, simply by writing appropriate PHP code, rather than manually changing an HTML page for each instance of variation. It is important to understand this concept in order to properly code PHP pages. Illustrated below is the path a client makes to the web server and its return response:

Request:

client (browser) --------> server

Response:

server -->PHP---------> client

In essence, PHP stands between the client and server and prepares HTML documents on the fly, based on user requests and PHP code guidelines. This allows PHP to respond both to the web developer's instructions and to client requests; a PHP homepage could be coded to respond to multiple variables, for instance:

1. if it is Tuesday, return to the user a homepage with a black background
2. if it is Friday, return to the user a homepage with a blue background
3. in all other circumstances the page should have a white background

PHP can also be used to change pages dynamically based on database information. This is perhaps the most powerful aspect of PHP. PHP can be used, for instance, to return a homepage with the following instructions:

1. if the requesting user's IP address is in the database's IP table, redirect the user to another page
2. if the user is unknown, display information requesting them to register as a user
3. if there is information in the database with the same date stamp as today, display that data on line 13 of the page

These sorts of instructions allow for a much finer degree of control by a coder over the display, flow and functionality of not only individual pages, but also of a web site as a whole. Users whose IP address reflects that they are in a French speaking country could be redirected to a French version of the website, pages could be updated based on the latest information inserted into a database, user requests to the page could even be tracked by PHP.

How PHP is Prepared

PHP is an embedded language: PHP scripts are inserted directly into HTML code. Standard HTML tags are delimited by less-than and greater-than characters, for example <b> and </b>, and PHP must be distinguished separately. Since all material between the less-than (<) and greater-than (>) symbols is escaped (does not appear) in the HTML display, PHP follows suit. The only difference is that the beginning and closing characters which surround PHP code are slightly more involved. Much as Active Server Pages use less-than, percent (<% ... %>) to delimit code, PHP uses less-than, question mark, 'php' to open (<?php) and question mark, greater-than to close (?>). Thus a page containing, for example:

<html><body> <?php echo "hello world"; ?> </body></html>

is a standard HTML page that includes PHP code. You will notice that the '<?php' and '?>' delimiters do not have to appear on the same line, or even directly precede and follow the PHP code. Just as HTML ignores white space, so too does PHP ignore white space. The use of line breaks and indentation should only be a consideration in reading your own and others' PHP code. Incidentally, because PHP is invisible to the end user and prepared by the server, the page above would produce an otherwise blank web page with the words 'hello world' on it. If the end user viewed the page source, all they would see would be:

hello world

You should note that this is the exact HTML code passed from the server back to the user. All PHP code is interpreted by the server, and only appropriate HTML (without any PHP) is rendered and passed back to the user.

3.9 Concluding Remarks

The user interface of an application will often make or break it. Although the functionality that an application provides to users is important, the way in which it provides that functionality is just as important. An application that is difficult to use won’t be used. It won’t matter how technically superior your software is or what functionality it provides, if your users don’t like it they simply won’t use it. Don’t underestimate the value of user interface design nor of usability.
Effective developers find ways to work closely with their stakeholders. I'm a firm believer in the AM practice Active Stakeholder Participation where your stakeholders do much of the business-related modeling using inclusive modeling techniques. Furthermore, they should be involved with your user interface prototyping efforts as well.

4.1 Analysis

The Terms of Reference had identified a need to produce an interface to help non-programmers handle file manipulations independently in the Linux environment. To a very significant extent, the application's development will help with productivity. This work on strategies for UI design has been inspired by similar work done by Peter Coad in the sphere of object modelling, documented in Object Models: Strategies, Patterns and Applications, Peter Coad with David North and Mark Mayfield, 2nd Ed., Yourdon Press, Prentice Hall, 1997. Further inspiration was drawn from Jennifer Tidwell and her book Designing Interfaces.

4.2 Current System

Structure Of Elmsworth Projects

[pic]

4.3 Requirements of Proposed System

In order to capture the basic requirements of the proposed system, several meetings took place between the author and the project sponsor and the requirements in broad terms were identified. These requirements were further refined in order to produce a more detailed Functional Specifications document (See Appendix C).
As stated at the beginning of the chapter, the requirements were to produce an application with a menu-driven interface, graphical or character based, that could simplify common file operations in a Linux terminal.

The functions identified as required by the Linux File Management System may be broken down into the following categories: 1. File Operations 2. Print Operations 3. Search Operations 4. Directory Operations

4.4 Constraints and Limitations

When compiling the Terms of Reference it had been identified that the solution might have problems settling on a reliable toolkit for GUI development. Functionality within the given time-frame must therefore come first: if a CUI interface can do the job for now, it should be built, with the GUI version left as a recommendation for future work.

5.0 Scripting

I discovered various GUI tools such as Glade, Dialog, KDialog and Zenity, and technologies such as PHP, that I could use to develop my project, but realized that it would take a considerable amount of time to learn these less popular GUI programming tools.
When programming in Windows there are many tried and tested environments, such as .NET, that can be used to rapidly develop software with the help of built-in object-oriented components, drag-and-drop features and wizards. Linux does not have a single favoured environment. GTK+, for example, is a toolkit associated with a specific Linux desktop (GNOME), just as QT is associated with KDE.

Eventually, I decided to utilize easy-to-learn, less complicated tools common to all Linux distributions, namely shell scripts written in the vi editor, as employees will connect via terminals from remote locations. This proved extremely useful in the busy time frame I had to produce a working prototype. Execution was straightforward with the Bash interpreter and I managed to code without too much trouble.

5.1 Interactivity

The power of scripting includes being able to produce GUI-like interfaces. In this case, I kept the interfaces to a simple Iteration 1 development. After first making the prototype functional with an interactive interface of my choice, a later stage, Iteration 2, would incorporate improvements. A GUI design is quite possible and preferred, but since it requires studying a programming language first, and for the sake of prototyping within the project time-frame, better interfaces would be produced in later Iterations.
This project has menu screens that offer choices on functions that can be performed as per the requirements.

6.0 Project Management Approach and Planning

6.1 Project Management Approach

The project would be developed in one Iteration, the reason being that, due to time constraints, it would be simpler to produce the system with a faster technique and later refine it in a more complex and time-consuming GUI Iteration or Phase. The ideal would be to produce the project in two Iterations, the second being an improvement of the first once the sponsor agrees with the Iteration 1 functionality and demonstration. Since the beginning of the project the author determined to make the production of documentation an integral part of managing the project. Although the formality of such comprehensive documentation is more time consuming, the author felt that adopting this more structured approach would help ensure that both the sponsor and author could maintain a clear understanding of the project throughout its life cycle. The author also felt that the probability of misunderstandings developing would be reduced through the use of all-inclusive and agreed documentation, which could be referred to if necessary should any disputes arise. The documentation will be produced by the author and will require no involvement from the sponsor except agreement with the proposed formats.

6.2 Terms of Reference and Project Plan

Following an initial meeting with the project sponsor, Terms of Reference were agreed, documented and approved by both the project sponsor and project tutor (See Below). The minutes of this initial meeting were documented by the author and subsequently approved by the project sponsor. Thereafter, regular meetings were scheduled and held throughout the project and the practice of documenting minutes and obtaining approval for them was adopted and employed.
University of Sunderland
School of Computing, Engineering and Technology

B.A. (Hons) Computer Studies

Terms of Reference

File Management System in Linux CUI Interface

|Overview |
|Elms Worth Constructors is a small construction company operating in Francistown, Botswana. It is involved in undertaking |
|different types of civil engineering projects and has its engineers deployed in several remote sites. Elms Worth Construction has |
|a centralized Linux box configured for remote login. Engineers have always found it difficult to manage their data in the Linux |
|character environment. Management has welcomed a menu driven system designed for Linux to be incorporated on the server and made |
|accessible to all users in order to alleviate the current situation. We have agreed that I develop a prototype solution. The |
|expected program should provide menus for file, directory, printing, searching manipulations. |
|Objectives |
| |
|1) Research topic one is discovering the security benefits of Linux OS compared to Windows OS for remote login and file transfer |
|operations. |
| |
|2) Research topic two is to evaluate possibilities of developing a graphical user interface in Linux against a character user |
|interface to facilitate user acceptance of a menu driven solution. |
| |
|3) To learn how to use: Unix Advanced Shell Scripting |
| |
|4) Analyse suitability of scripting language chosen |
| |
|5) Analyse system requirements / user specification |
| |
|6) Design system |
| |
|7) Develop software: that produces a self-explanatory menu that navigates successfully to all expected operations, file, |
|directory, search and printing. |
| |
|8) Evaluate developed system against system requirements and the application of research |
| |
|9) Critically evaluate and conclude entire project |
| |
|10) Produce final report |
| |
|Constraints |
|1) Availability of project sponsor(s) |
|Resources |
|Hardware |
|Resources at hand include the printer I will make my tests on and the average machine specifications as I won’t work from the same|
|terminal at all times. I work in a networked environment with no permanent cubicle. |
|Printer |
|Pentium 4 3.0 GHz |
|256 MB Memory |
|200 GB Hard Disk |
|Network for multiple user access |
|Software |
|Linux CentOS 4.2 |
|Communications |
|Telephone |
|E-mail |
|Reporting: |
|Professor Eric Fletcher, The Managing Director-Martin, at Elm’s worth and Radha my trainer will receive regular updates –twice a |
|month at .The only problem will be how they will run the scripts when I want to emphasize on a screen section if need be. |
|References |
|[Mandel 97]The Elements of User Interface Design, Theo Mandel, Wiley, 1997 |
|[Hackos 98]User and Task Analysis for Interface Design, Hackos & Redish, Wiley, 1998 |
|[Coad 97] Object Model: Strategies, Patterns and Application, 2nd ed. Coad et al, Prentice Hall 1997 |
|[Coad 99] Enterprise Component Models in Color, Peter Coad, The Coad Letter to be developed further in a forthcoming book from |
|Prentice Hall, Java Modeling in Color with UML: Enterprise components and process by Coad, Lefevre and De Luca. |
|[Mullet 95]Designing Visual Interfaces, Communication Oriented Techniques, Mullet & Sano, Prentice Hall, 1995 |
|Sams’s Teach yourself Linux in 24 Hours by Bill Ball and Stephen Smoogen |
|Linux in a Nutshell by Ellen Siever & the Staff of O’Reilly & Associates, 2nd Edition |
|Unix Concepts and Applications by Sumitabha Das |
|Internet Url: |
|https//www.redhat.com/docs/manuals/linux/RHL-7.3-Manual/getting-started-guide/s1-navigating-usingcat.html-s2-navigating-redirinput|
|, Accessed 16th May 2006. |
|Unix Shell Programming by Yashwant Kanetkar |
|RedHat Linux 8 Unleashed by Bill Ball and Hoyt Duff |
|Red Hat 7.3 Bible by Christopher Negus |

Signed and agreed:

| |NAME |SIGNATURE |DATE |
|SUPERVISOR |Erick Fletcher | | |
|STUDENT |Alfred Mulumbwe Kazilimani | |November 29, 2007 |

A project plan was constructed (See Below), breaking down the project into stages, with each stage broken down further into sub tasks. The plan was constructed using the Microsoft Project software package and would be updated and reviewed on a regular basis to ensure that corrective action was adopted in a timely manner to prevent major adverse impact upon project delivery. The Gantt chart would provide visibility of the inter-relationships between tasks, task progression and progress towards project completion.

Project Plan

|Linux File Management System |Duration |Resources |
| Activities |198 days | |
| | | |
|Produce Terms of Reference |7 days |Alfred |
|Prepare for 1st Review |5 days |Alfred |
|Review |1 day |Alfred |
|Research on Windows vs Linux Security |20 days |Alfred |
|Research on GUI programming in Linux |20 days |Alfred |
|Prepare for 2nd Review |5 days |Alfred |
|2nd Review |1 day |Alfred |
|Analyze Current system of Linux at client site |2 days |Martin, Employees |
|Interview Linux users |5 days |Alfred, Employees |
|Establish constraints & requirements |5 days |Alfred, Employees |
|Outline Functional Hierarchy |2 days |Alfred |
|Design Prototype Structure Skeleton |2 days |Alfred |
|Design Main menu |15 days |Alfred |
|Test Main menu for Errors |5 days |Alfred, Employees |
|Design Directory menu |15 days |Alfred |
|Test Directory Menu for Errors |5 days |Alfred, Employees |
|Design File Operations Menu |15 days |Alfred |
|Test File Operations for Errors |5 days |Alfred, Employees |
|Design Search Operations Menu |15 days |Alfred |
|Test Search Operations for Errors |5 days |Alfred, Employees |
|Design Print Operations Menu |15 days |Alfred |
|Test Print Operations Menu for Errors |5 days |Alfred, Employees |
|Integrate all Modules |5 days |Alfred |
|Test Full prototype with all modules integrated |5 days |Alfred, Employees |
|Resolve pending errors module by module |15 days |Alfred |
|Train IT Users |1 day |Alfred |
|Train IT Support |2 days |Alfred |
|Launch System |2 days |Alfred |
7.0 Design

7.1 Project Documentation

Prior to embarking upon any design work, the strategies for the identification and management of risks and for the identification and classification of enhancements and changes were all detailed and documented using the standard template described in chapter four. The associated forms were also produced, the necessary logs set up and approval gained from the project sponsor. Details of these strategies, procedures and associated logs and forms can be viewed in the appendices. (Appendix H – Testing Strategy, Appendix M – Change Requests, Appendix N – Enhancements and Appendix O – Risks)

7.2 Design Concept

As identified earlier, the coding was to be done using scripts in an initial phase or iteration, allowing the sponsor to confirm that the main objective, rather than colours and a polished look and feel, was being addressed within the given time-frame. An example of this Iteration 1 output for enhancing file management in the Linux CUI interface when working from CUI terminals is shown:
[pic]

7.3 Information Architecture

Information was structured on a hierarchical basis with the content separated into the main categories, as agreed in the Functional Specifications. These are:
1. File Operations
2. Print Operations
3. Search Operations
4. Directory Operations

Each of these categories provides detailed abilities in terms of the submenus it offers to achieve the required functionality. The File Operations menu has submenus to allow file creation, removal, renaming, copying and moving. Print Operations has submenus to allow printing of single or multiple files. The Search Operations menu allows searching for files with certain content or file names. The last menu, Directory Operations, allows directories to be created, renamed, moved and copied.
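To illustrate how such a hierarchy can be driven from a shell script, a minimal sketch of a main-menu loop is shown below; the placeholder echo statements stand in for the real sub-menu scripts, whose names are not reproduced here.

    #!/bin/bash
    # Minimal main-menu loop: display the four categories, read a choice
    # and dispatch on it. Invalid input produces an informative message.
    while true; do
        echo "Linux File Management System"
        echo " 1) File Operations"
        echo " 2) Print Operations"
        echo " 3) Search Operations"
        echo " 4) Directory Operations"
        echo " 5) Exit"
        read -p "Enter choice [1-5]: " choice
        case "$choice" in
            1) echo "File Operations sub-menu would be invoked here" ;;
            2) echo "Print Operations sub-menu would be invoked here" ;;
            3) echo "Search Operations sub-menu would be invoked here" ;;
            4) echo "Directory Operations sub-menu would be invoked here" ;;
            5) exit 0 ;;
            *) echo "Invalid choice, please enter a number from 1 to 5." ;;
        esac
    done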

8.0 Menu Evaluation

8.1 Visibility of System Status

“The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.”
In each menu and detailed submenu, all menu selections are coded with validation text to allow the system to communicate with the user. Should the user enter information incorrectly, informative messages are provided which can be easily understood.

8.2 Match between System and the Real World

“The system should speak the users' language, with words and phrases familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.” I have made sure that the language used is simple English throughout my project.

8.3 User Control and Freedom

“Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support exit to previous menu.“ I have provided exit points for each submenu allowing a user to return to the main menu where it all began.

8.4 Consistency and Standards

“Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.”
This was covered by following accepted scripting conventions to minimize possible confusion for users. Because users depend entirely on reading and understanding the menus, spellings were checked to ensure they were not erroneous, since a misspelt option may not be understood and therefore not used.

8.5 Error Prevention

“Even better than good error messages is a careful design which prevents a problem from occurring in the first place.”
All menus were to incorporate data validation, and clear and informative error messages and instructions were to be displayed to help ensure that the user is aware of the expected requirements. An example of this would be detailing the expected date format (dd/mm/yyyy) where date fields exist.
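A minimal sketch of such a check in bash is shown below; the pattern only verifies the dd/mm/yyyy shape of the input, not whether the date actually exists on the calendar, and the prompt text is illustrative only.

    #!/bin/bash
    # Prompt for a date and check that it matches the dd/mm/yyyy format.
    read -p "Enter a date (dd/mm/yyyy): " d
    if [[ "$d" =~ ^[0-3][0-9]/[0-1][0-9]/[0-9]{4}$ ]]; then
        echo "Date accepted: $d"
    else
        echo "Please enter the date in the format dd/mm/yyyy, for example 16/05/2006."
    fi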

8.6 Recognition rather than recall

“Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.” I employed the use of numbers to select options to navigate between the functions of the system.

8.7 Help and Documentation

I have developed setup guidelines that explain how the system works and which file needs to be run to start the system.

First turn on a Linux computer.

All files need to be copied from the mounted CD to the user's home directory, simply by copy and paste, but all files must be in the same home directory.

Next open a terminal and navigate to the home directory by typing cd ~ .

Confirm that all files copied from the CD are visible by typing dir

Run the mainmenu.sh file by typing bash mainmenu.sh (or ./mainmenu.sh if the script has been made executable).
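The same steps can be carried out as a short command sequence; the CD mount point shown here is an assumption and may differ from system to system.

    # Copy the scripts from the mounted CD to the home directory and start the menu.
    cp /media/cdrom/*.sh ~/     # the mount point /media/cdrom is an assumption
    cd ~
    ls *.sh                     # confirm the files are present
    bash mainmenu.sh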

The following windows are screenshots of the various screens invoked from the main screen produced by the mainmenu.sh script.

1st Screen

[pic]

Option 1

[pic]

Option 2

[pic]

Option 3

[pic]

Option 4

[pic]

Option 5

[pic]

9.0 Development

All development was conducted on the author’s home personal computer and the files were copied to CD for distribution. The files involved were few, therefore no zipping was used. This actually makes the setup easier for novice users, who can simply copy the files rather than type commands at the Linux terminal to unzip them.

9.1 Project Scripts

The technical architecture of the Linux File Management System consisted solely of hand-coded scripts that produce the screens. The hierarchical layout is depicted below:

10.0 Testing

The testing strategy adopted was to use a three-phase approach towards testing, with the aims of ensuring that the system met the expected functional requirements and that as many defects as possible were identified and eliminated prior to project completion. This strategy was documented in the Testing Strategy document (Appendix H – Testing Strategy) and approved by the project sponsor. The three phases of the strategy were:
1. Unit Testing – Each component tested
2. System Testing – Integration tested
3. User Acceptance Testing (UAT) – Functionality tested

Unit Testing

This was done to ensure that each script was working well independently before integration with the rest of the program, so as to help isolate problems come the debug phase. In effect, unit testing was done informally by the author, and expected results were documented wherever confusion could arise because the program is not fully self-explanatory.

Example:

When testing the printing unit shown below, a choice is entered and the system checks whether the selection is a valid choice of 1, 2, 3 or 4. Each choice has to bring up subroutine tasks that must operate. In this way each task is independently tested for functionality before the specific unit is declared operational.

[pic]
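A minimal sketch of the kind of selection check this unit test exercises is shown below; it follows the description above (accepting only choices 1 to 4), while the messages are illustrative rather than the actual wording of the printing menu.

    #!/bin/bash
    # Read a print-menu choice and accept only 1, 2, 3 or 4; anything else
    # produces an informative message so the user can try again.
    read -p "Enter print option [1-4]: " choice
    case "$choice" in
        1|2|3|4) echo "Valid choice: $choice (the corresponding print task would run here)" ;;
        *)       echo "Invalid selection: please choose 1, 2, 3 or 4." ;;
    esac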

Analysis of Test Results

The project performed fairly well and would rate 6 out of ten; had it been graphical it would fetch an 8 out of ten. As long as the program files are copied into the right locations and the main script is launched, the script communicates well with the end user.

Appendix A – Questionnaire

All questions to be answered as: o D-Disagree o U-Undecided o A-Agree
| | |
|1. This software responds too slowly to inputs. |D |
|2. I would like to recommend this software to my colleagues |A |
|3. The instructions and prompts are helpful. |A |
|4. The software has at some time stopped unexpectedly |A |
|5. Learning to operate this software initially is full of problems. |D |
|6. I sometimes don’t know what to do next with this software. |D |
|7. I enjoy my sessions with this software. |D |
|8. I find that the help information given by this software is not very useful. |D |
|9. If this software stops, it is not easy to restart it. |A |
|10. It takes too long to learn the software commands. |A |
|11. I sometimes wonder if I’m using the right command. |D |
|12. Working with this software is satisfying. |U |
|13. The way that system information is presented is clear and understandable |D |
|14. I feel safer if I use only a few familiar commands or operations. |D |
|15. The software documentation is very informative. |D |
|16. This software seems to disrupt the way I normally like to arrange my work. |A |
|17. Working with this software is mentally stimulating. |A |
|18. There is never enough information on the screen when it’s needed. |A |
|19. I feel in command of this software when I am using it. |A |
|20. I prefer to stick to the facilities that I know best. |A |
|21. I think this software is inconsistent. |U |
|22. I would not like to use this software every day. |D |
|23. I can understand and act on the information provided by this software. |D |
|24. This software is awkward when I want to do something, which is not standard |D |
|25. There is too much to read before you can use the software. |D |
|26. Tasks can be performed in a straightforward manner using this software |D |
|27. Using this software is frustrating. |A |
|28. The software has helped me overcome any problems I have had using it |A |
|29. The speed of this software is fast enough. |D |
|30. I have to go back to look at the guides. |D |
|31. It is obvious that user needs have been fully taken into consideration. |D |
|32. There have been times in using this software when I have felt quite tense. |A |
|33. The organisation of the menu or information lists seems quite logical. |D |
|34. The software allows the user to be economic of keystrokes. |D |
|35. Learning how to use new functions is difficult. |A |
|36. There are too many steps required to get something to work. |D |
|37. I think this software has made me have a headache on occasion. |A |
|38. Error prevention messages are not adequate. |D |
|39. It is easy to make the software do exactly what you want. |D |
|40. I will never learn to use all that is offered in this software. |D |
|41. The software hasn’t always done what I was expecting. |A |

...Unit 2 Discussion 1: Identifying Layers of Access Control in Linux One of the most vital security tasks is to maintain control over incoming network connections. As system administrator, there are many layers of control over these connections. At the lowest level unplug network cables, but this is rarely necessary unless your computer has been badly cracked beyond all trust. More realistically, you have the following levels of control in software, from general to service-specific: Network interface - The interface can be brought entirely down and up. Firewall - By setting firewall rules in the Linux kernel, you control the handling of incoming (and outgoing and forwarded) packets. This topic is covered in Chapter 2. A superdaemon or Internet services daemon- A superdaemon controls the invocation of specific network services. Suppose the system receives an incoming request for a Telnet connection. The superdaemon could accept or reject it based on the source address, the time of day, the count of other Telnet connections open... or it could simply forbid all Telnet access. Superdaemons typically have a set of configuration files for controlling your many services conveniently in one place. Individual network services - Any network service, such as sshd or ftpd, may have built-in access control facilities of its own. For example, sshd has its AllowUsers configuration keyword, ftpd has /etc/ftpaccess, and various services require user authentication. ...

Words: 324 - Pages: 2

Premium Essay

Linux

...to 10 optional extra credit points each Part that you submit in this project, but you have to begin the first week. You can always drop out later if you don’t have time…it’s optional! The project will help you select and install a Linux OS on an old computer. It will be easy XC points for those who have already done so, and a great learning experience for everyone. Part 1—Find an old computer you can install Linux on, and determine its hardware Note: If you do not do Part 1, you are not eligible to do any of the following parts! A. Old computers which are too slow for Windows often make great *nix boxes. B. Find one in your garage, from a neighbor or family member, or at a garage sale. C. You will need system unit, keyboard, mouse, monitor & [optional—network card] D. If it used to run Windows, it should be fine E. Determine what hardware it has, including a. CPU speed, # of cores, etc. b. Memory c. Hard drive space and interface (SATA, PATA, SCSI) d. Network card—ethernet? 100Mbps? Gbps? F. If you have trouble determining what hardware you have, hit the discussion board. G. Submit brand & specs in the link under the weekly Content folder for credit Part 2—Select a Linux, UNIX, or BSD OS and verify that your hardware will support it A. This is strictly research. Find a *nix flavor with which you are unfamiliar! B. Look up the hardware compatibility specs to verify that your system will support...

Words: 478 - Pages: 2

Premium Essay

Linux

...Chapter 18 Exercises 1.What is the difference between the scp and sftp utilities? copies file to and from a remote system SFTP is the same but is secure 2.How can you use ssh to find out who is logged in on a remote system? Assuming you have the same username on both systems, the following command might prompt you for your password on the remote system; it displays the output of who run on host: $ ssh host who 3.How would you use scp to copy your ~/.bashrc file from the system named plum to the local system? $ scp ~/.bashrc zack@plum: 4.How would you use ssh to run xterm on plum and show the display on the local system? Assuming you have the same username on both systems and an X11 server running locally, the following command runs xterm on plum and presents the display on the local system: $ ssh plum xterm You need to use the –Y option if trusted X11 forwarding is not enabled. 5.What problem can enabling compression present when you are using ssh to run remote X applications on a local display? When using compression latency is increased and the outcome is always undesirable slower speeds, and data interruption. 6.When you try to connect to a remote system using an OpenSSH client and you see a message warning you that the remote host identification has changed, what has happened?What should you do? This message indicates that the fingerprint of the remote system is not the same as the local system remembers it. Check with the remote system’s...

Words: 1325 - Pages: 6

Free Essay

Linux

...After researching some popular commercial windows applications, I have found a few good open-source alternatives for Linux users. The four Windows applications I researched were Adobe Acrobat, Adobe Photoshop, Internet Explorer, and Norton Anti-Virus. The most user friendly Adobe Acrobat alternative I found was PDFMod. This a very user friendly platform with a nice GUI interface that allows you to reorder, rotate, and remove pages, export images from a document, edit the title, subject, author, and keywords, and combine documents via drag and drop. This program is very simple and easy to use to modify PDF documents. Adobe Photoshop was a little harder to find a good alternative, but I think that GIMP 2.6 answers that call. GIMP is a very simple yet complex application that can be used as a simple paint program, an expert quality photo retouching program, an online batch processing system, a mass production image renderer, an image format converter, etc. You can expand GIMP with the use of plug-ins and extensions to do just about anything. Gimp also has an advanced scripting interface allows everything from the simplest task to the most complex image manipulation procedures to be easily scripted. An obvious choice for me as a replacement for Internet Explorer(due to the fact that I already use it) is Mozilla Firefox. Firefox is, in my opinion, a superior browser with better security, performance, personalization, etc. With Firefox you can sync your desktop browser with your...

Words: 446 - Pages: 2

Free Essay

Linux

...the creation of the Linux kernel by Linus Torvalds in 1991, many versions of Linux have been created. Due to the open source of the kernel, this gives advanced users the option to alter the kernel to their liking. This, in turn, has yielded a near endless amount of distributions and versions available out there. In my research, I have found the main versions of Linux have derived from Debian Linux, Slackware Linux, or RedHat Linux. However, the first distribution meant for the masses was Yggdrasil Linux (Citation). First, there were versions such as MCC Interim Linux developed by University of Manchester and TAMU developed by Texas A&M, however these were in-house developments not really meant to be widely distributed. Yggdrasil, one of the first widely distributed version of Linux, was described as a Plug and play Linux. Its’ initial release took place in December of 1992, but in the form of an alpha release. The beta version was released in 1993, and the official release the next year in 1994. It was the first Linux operating system distributed via a live CD-ROM. It included an automatic configuration of the software installation, much like we see today, making it very easy for even a novice user to set it up. Yggdrasil was not free, however, the company charged $39.95 per copy (Yggdrasil Computing). After conducting research of the number of distribution of Linux, the exact number could not be pinpointed. There are so many developers tweaking the Linux kernel and submitting...

Words: 1003 - Pages: 5

Premium Essay

Linux

...What is free software? List three characteristics of free software. 1- Distribution 2- Development 3- Collaboration. 2. Why is Linux popular? Why is it popular in academia? Because of it portability and it is free as Free Expression easy to manipulate and transport. Because of its portability and easy to manipulate. 3. What are multiuser systems? Why are they successful? Multi-user are the several individual user that can access one system that being physical machine or VM. They are popular because it help to centralize resources and energies and minimize security concerns. 4. What is the Free Software Foundation/GNU? What is Linux? Which parts of the Linux operating system did each provide? Who else has helped build and refine this operating system? The Free Software Foundation (www.fsf.org) is the principal organizational sponsor of the GNU Project. GNU developed many of the tools, including the C compiler, that are part of the NU/Linux Operating System. Linux is the name of an operating system kernel developed by Linus Torvalds and expanded and improved by thousands of people on the Internet. Torvalds’s kernel and GNU’s tools work Together as the GNU/Linux Operating System. 5. In which language is Linux written? What does the language have to do with the success of Linux? Linux was written in C language. C can be used to write machine-independent programs. A programmer who designs a program to be portable can easily move...

Words: 699 - Pages: 3

Free Essay

Linux

...operating system kernel, Linux version 0.01. Linux evolved into a fully functioning Operating System (OS) with one of its first distributions created by the Manchester Computing Center, MCC Interim Linux, using a combined boot/root disk (Hayward, 2012). Linux luminaries, Slackware, RedHat and Debian began to rise between 1992 and 1994 as well as the Linux kernel growing to version 0.95, becoming the first kernel to run the X Windows System. The Big Three, Slackware, Debian and Red Hat were instrumental in the anticipated launching of Linux version 1.0.0 in 1994 with 176,250 lines of code. Over the next five years the big three released some of the greatest Linux distributions, including the Jurix Linux, which is allegedly the first distribution to include a scriptable installer; the installer allows an administrator install across similar machines. The Juris Linux distribution is mostly noted in Linux history because it was used as a base system for SUSE Linux which is still in operation today (Hayward, 2012). Launched in 1996, Linux 2.0 had 41 releases in the series; inclusion of critical operating system features and rapid releases helped to make the Linux operating system the OS of choice for IT professionals. Another notable moment in Linux history was the release of Version 2.4 which contained support for USB, PC Cards, ISA Plug and Play and Bluetooth, just to name a few; these features demonstrated the versatility and the advancement of the Linux kernel since the early...

Words: 745 - Pages: 3

Free Essay

Linux Paper

...Linux Features of Red Hat Red hat has many different features, I will cover a few of the main features in this section, and Red Hat contains more than 1,200 components covering a broad range of functionality. Red Hat Enterprise Linux provides CIOs and IT managers with the means to reduce costs while improving operational flexibility throughout their computing infrastructure. The following list provides a brief summary of the more important features: * Virtualization is provided in all Red Hat Enterprise Linux server products and is optionally available for desktop products. * Storage and extended server virtualization are provided with Red Hat Enterprise Linux Advanced Platform. * Red Hat Network supports virtualized guest operating systems * Virtual-manager, other management tools are available for single system or scripted virtualization management. * Integration with Red Hat Enterprise Virtualization is available for enterprise virtualization management. Networking & interoperability * Network storage enhancements include Autofs, FS-Cache, and iSCSI support * IPv6 support and conformance enhancements * Improved Microsoft® file/print and Active Directory integration, including support for Windows Security Features * SE Linux enhancements include Multi-Level Security and targeted policies for all services * SE troubleshooter GUI simplifies SE Linux management * Integrated directory and security capabilities * IPSEC enhancements...

Words: 769 - Pages: 4

Free Essay

Linux

...security enhancement to Linux which allows users and administrators more control over access control. Access can be constrained on such variables as which users and applications can access which resources. These resources may take the form of files. Standard Linux access controls, such as file modes (-rwxr-xr-x) are modifiable by the user and the applications which the user runs. Conversely, SELinux access controls are determined by a policy loaded on the system which may not be changed by careless users or misbehaving applications. The United States National Security Agency, the original primary developer of SELinux, released the first version to the open source development community under the GNU GPL on December 22, 2000. The software merged into the mainline Linux kernel 2.6.0-test3, released on 8 August 2003. Other significant contributors include Network Associates, Secure Computing Corporation, Trusted Computer Solutions, and Tresys Technology. Experimental ports of the FLASK/TE implementation have been made available via the TrustedBSD Project for the FreeBSD and Darwin operating systems. SELinux also adds finer granularity to access controls. Instead of only being able to specify who can read, write or execute a file, for example, SELinux lets you specify who can unlink, append only, move a file and so on. SELinux allows you to specify access to many resources other than files as well, such as network resources and interprocess communication. A Linux kernel integrating...

Words: 1252 - Pages: 6

Free Essay

Outline Linux

...Benson Medley-Childs Outline SERVERS 1. The 1st vendor is IBM which uses power servers that runs both Red Hat and SUSE Linux server operating systems, offering a scalable alternative for your open source application. a. Red Hat has big business support and its easy to find certified technicians, administrators and engineers who know their way around Red Hat. Its also supported on a wide variety of hardware whether your running x86 servers on racks, blade servers, IBM power systems, or mainframes then Red Hat is your best choice b. Ubuntu is a great linux server that offers free upgrades and support. It provides windows integration and a cloud system. Provides an easy to use GUI to many manage many machines at once, group machines that match your needs. Workstations 1. Penguin computing Linux workstations they offer three different workstation models a. Tempest 4201: The Tempest 4201 is based on the latest generation of AMD Opteron processors. The 4201 is the right server for real power users with demanding I/O intensive applications. The Platinum power supply makes this system extremely power efficient and best of all. b. Tempest 4400: With up to 64 processor cores and 512GB of RAM the Tempest 4400 delivers the performance of a small cluster in a desktop form factor with server-grade RAS options. c. Niveus 5200: The Niveus is an expert workstation that features Intel's latest CPU and IO technologies. It is ideal for demanding...

Words: 364 - Pages: 2

Free Essay

Linux Security

...Project Part 1 ITT Technical Institute Table of Contents Task 1 Page 3 Task 2 Page 6 Task 3 Page 7 References Page 8 Task 1 First World Bank is a savings and loan financial institution that provides services to their customers like loans, credit cards and standard banking services. First World Bank believes that once they can provide their services online they will gain $100,000,000 a year in online credit card transactions. The issue is how to securely provide their services to their customers and how they can provide those services and still save money in doing so. First World Bank will have to comply with federal regulations to be compliant and to avoid fines and sanctions. If the First World Bank fails to safe guard the information that they have stored on their customers and that information is compromised then the First World Bank will lose customers and also their reputation. Gramm-Leach-Bliley Act (GLBA) is one of the federal regulations that the First World Bank needs to be in complaint and stay in compliance with. Gramm-Leach-Bliley is a regulation that requires banks to safe guard customer’s information and to provide how the institution shares customer’s information, what information is collected, who they share the information with, and how they protect it. This information is required to be disclosed to customers in writing, in the written notice the customer will also be advised...

Words: 1405 - Pages: 6