2
Adaptive and Reflective Middleware

Edward Curry
National University of Ireland, Galway, Ireland

2.1 Introduction

Middleware platforms and related services form a vital cog in the construction of robust distributed systems. Middleware facilitates the development of large software systems by relieving the application developer of the burden of writing a number of complex infrastructure services needed by the system; these services include persistence, distribution, transactions, load balancing, clustering, and so on. The demands of future computing environments will require a more flexible system infrastructure that can adapt to dynamic changes in application requirements and environmental conditions. Next-generation systems will require predictable behavior in areas such as throughput, scalability, dependability, and security. This increase in complexity of an already complex software development process will only add to the already high rates of project failure. Middleware platforms have traditionally been designed as monolithic static systems. The vigorous dynamic demands of future environments such as large-scale distribution or ubiquitous and pervasive computing will require extreme scaling into large, small, and mobile environments. In order to meet the challenges presented in such environments, next-generation middleware researchers are developing techniques to enable middleware platforms to obtain information concerning environmental conditions and adapt their behavior to better serve their current deployment. Such capability will be a prerequisite for any next-generation middleware; research to date has exposed a number of promising techniques that give middleware the ability to meet these challenges head on. Adaptive and reflective techniques have been noted as a key emerging paradigm for the development of dynamic next-generation middleware platforms [1, 2]. These techniques empower a system to automatically self-alter (adapt) to meet the needs of its environment and users. Adaptive and reflective systems support advanced adaptive behavior: adaptation can take place autonomously or semi-autonomously, on the basis of the system's deployment environment or the defined policies of its users or administrators [3]. The objective of this chapter is to explore adaptive and reflective techniques, motivate their use, and introduce their fundamental concepts. The application of these techniques is examined, along with a summary of a selection of middleware platforms that utilize them. The tools and techniques that allow a system to alter its behavior, which are vital to implementing adaptive and reflective systems, are also examined. Finally, potential future directions for research are highlighted, including advances in programming techniques, open research issues, and the relationship to autonomic computing systems.

Middleware for Communications. Edited by Qusay H. Mahmoud. © 2004 John Wiley & Sons, Ltd. ISBN 0-470-86206-8.

Traditionally, middleware platforms are designed for a particular application domain or deployment scenario. In reality, multiple domains overlap and deployment environments are dynamic, not static; current middleware technology does not provide support for coping with such conditions. Present research has been focused on investigating the possibility of enabling middleware to serve multiple domains and deployment environments. In recent years, platforms have emerged that support reconfigurability, allowing platforms to be customized for a specific task; this work has led to the development of adaptive multipurpose middleware platforms.

2.1.1 Adaptive Middleware

Adapt–v. To alter or modify so as to fit for a new use

An adaptive system has the ability to change its behavior and functionality. Adaptive middleware is software whose functional behavior can be modified dynamically to optimize for a change in environmental conditions or requirements [4]. These adaptations can be triggered by changes made to a configuration file by an administrator, by instructions from another program, or by requests from its users. The primary requirements of a runtime adaptive system are measurement, reporting, control, feedback, and stability [1].

2.1.2 Reflective Middleware

The groundbreaking work on reflective programming was carried out by Brian Smith at MIT [5]. Reflective middleware is the next logical step once adaptive middleware has been achieved. A reflective system is one that can examine and reason about its capabilities and operating environment, allowing it to self-adapt at runtime. Reflective middleware builds on adaptive middleware by providing the means to allow the internals of a system to be manipulated and adapted at runtime; this approach allows for the automated self-examination of a system's capabilities and the automated adjustment and optimization of those capabilities. The process of self-adaptation allows a system to provide an improved service for its environment or its users' needs. Reflective platforms support advanced adaptive behavior; adaptation can take place autonomously on the basis of the status of the system, its environment, or the defined policies of its users or administrators [3].


Reflect–v. To turn (back), cast (the eye or thought) on or upon something

Reflection is currently a hot research topic within software engineering and development. A common definition of reflection is a system that provides a representation of its own behavior that is amenable to inspection and adaptation, and is causally connected to the underlying behavior it describes [6]. Reflective research is also gaining momentum within the middleware research community. The use of reflection within middleware for advanced adaptive behavior gives middleware developers the tools to meet the challenges of next-generation middleware, and its use in this capacity has been advocated by a number of leading middleware researchers [1, 7]. Reflective middleware is self-aware middleware [8]:

The reflective middleware model is a principled and efficient way of dealing with highly dynamic environments yet supports the development of flexible and adaptive systems and applications [8].

This reflective flexibility diminishes the importance of many initial design decisions by offering late-binding and runtime-binding options to accommodate actual operating environments at the time of deployment, instead of only anticipated operating environments at design time [1].

A common definition of a reflective system [6] is a system that has the following:

Self-Representation: a description of its own behavior.

Causally Connected: alterations made to the self-representation are mirrored in the system's actual state and behavior.

Together, these two properties form a Causally Connected Self-Representation (CCSR).

Few aspects of a middleware platform would not benefit from the use of reflective techniques, and research is ongoing into their application in a number of areas within middleware platforms. While still relatively new, reflective techniques have already been applied to a number of nonfunctional areas of middleware. One of the main reasons nonfunctional system properties are popular candidates for reflection is the ease and flexibility of their configuration and reconfiguration at runtime; changes to a nonfunctional system property will not directly interfere with a system's user-interaction protocols. Nonfunctional system properties that have been enhanced with adaptive and reflective techniques include distribution, responsiveness, availability, reliability, fault-tolerance, scalability, transactions, and security.

Two main forms of reflection exist: behavioral and structural reflection. Behavioral reflection is the ability to intercept an operation and alter its behavior. Structural reflection is the ability to alter the programmatic definition of a program's structure. Low-level structural reflection, that is, changing the definition of a class, a function, or a data structure on demand, is most commonly found in programming languages and is outside the scope of this chapter. The focus here is on behavioral reflection, specifically altering the behavior of middleware platforms at runtime, and on structural reflection concerned with the high-level system architecture and the selection of pluggable service implementations used in a middleware platform.
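As a minimal sketch of behavioral reflection (the class and method names here are illustrative, not from any middleware platform), a proxy can intercept method invocations on a target object, record them, and then run the original behavior, all without modifying the target's code:

```python
class Interceptor:
    """Intercepts method invocations on a wrapped object (behavioral
    reflection) and records them, without changing the target's code."""

    def __init__(self, target):
        self._target = target
        self.calls = []                     # reflective record of invocations

    def __getattr__(self, name):
        # Introspect the target at runtime; non-callable attributes
        # pass straight through.
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr

        def wrapped(*args, **kwargs):
            self.calls.append(name)         # inspection: log the operation
            return attr(*args, **kwargs)    # then run the original behavior
        return wrapped


class Account:
    """An ordinary base-level object, unaware of the interception."""
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance


acct = Interceptor(Account(100))
result = acct.deposit(10)   # intercepted, logged, then executed
```

The same pattern is how middleware interceptors add nonfunctional behavior (logging, security checks, transactions) around an operation while leaving the base-level implementation untouched.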


2.1.3 Are Adaptive and Reflective Techniques the Same?

Adaptive and reflective techniques are intimately related but have distinct differences and individual characteristics:

—An adaptive system is capable of changing its behavior.
—A reflective system can inspect/examine its internal state and environment.

Systems can be both adaptive and reflective, adaptive but not reflective, or reflective but not adaptive. On their own, both of these techniques are useful, but used together they provide a very powerful paradigm that allows for system inspection followed by an appropriate behavioral adaptation if needed. When talking about reflective systems, it is often assumed that the system has adaptive capabilities.

Common Terms

Reification: The process of providing an external representation of the internals of a system. This representation allows the internals of the system to be manipulated at runtime.

Absorption: The process of enacting the changes made to the external representation of the system back into the internal system. Absorbing these changes into the system realizes the causal connection between the model and the system.

Structural Reflection: The ability to alter the statically fixed internal data/functional structures and architecture used in a program. A structurally reflective system provides a complete reification of its internal methods and state, allowing them to be inspected and changed; for example, the definition of a class, a method, or a function may be altered on demand. Structural reflection changes the internal makeup of a program.

Behavioral Reflection: The ability to intercept an operation, such as a method invocation, and alter the behavior of that operation. This allows a program, or another program, to change the way it functions and behaves. Behavioral reflection alters the actions of a program.

Nonfunctional Properties: The behaviors of a system that are not obvious or visible from interaction with the system, including distribution, responsiveness, availability, reliability, scalability, transactions, and security.


2.1.4 Triggers of Adaptive and Reflective Behavior

In essence, the reflective capabilities of a system should trigger its adaptive capabilities. But what exactly can be inspected in order to trigger an appropriate adaptive behavior? Typically, a number of areas within a middleware platform, its functionality, and its environment are amenable to inspection, measurement, and reasoning about the optimum or desired performance and functionality. Software components known as interceptors can be inserted into the execution path of a system to monitor its actions. Using interceptors and similar techniques, reflective systems can extract useful information from the current execution environment and analyze it. Usually, a reflective system will have a number of interceptors and system monitors that can be used to examine the state of the system, reporting information such as its performance, workload, or current resource usage. On the basis of an analysis of this information, appropriate alterations may be made to the system's behavior. Potential monitoring tools and feedback mechanisms include performance graphs, benchmarking, user usage patterns, and changes to the physical deployment infrastructure of a platform (network bandwidth, hardware systems, etc.).
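The inspect, analyze, adapt loop can be sketched as follows; this is a toy monitor with invented names and thresholds, not taken from any real platform:

```python
class SelfTuningPool:
    """Sketch of the inspect -> analyze -> adapt loop: a monitor records
    request load, and when a threshold is crossed the system adapts by
    resizing its (hypothetical) worker pool."""

    def __init__(self, workers=2, high_water=10):
        self.workers = workers
        self.high_water = high_water
        self._window = []                 # monitor: recent load samples

    def report(self, requests_per_sec):
        # A monitor/interceptor feeds measurements back into the system.
        self._window.append(requests_per_sec)
        self._window = self._window[-5:]  # keep a sliding window
        self._maybe_adapt()

    def _maybe_adapt(self):
        # Analysis: the average load over the window triggers adaptation.
        avg = sum(self._window) / len(self._window)
        if avg > self.high_water:
            self.workers *= 2             # adaptive response: scale up
        elif avg < self.high_water / 4 and self.workers > 2:
            self.workers //= 2            # scale back down when idle


pool = SelfTuningPool()
for load in [4, 6, 20, 30, 40]:           # a workload spike, as observed
    pool.report(load)                     # by the monitor
```

After the spike, the pool has doubled its worker count twice; a real platform would, in the same spirit, swap protocol stacks, buffers, or thread pools based on the measurements it gathers.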

2.2 Implementation Techniques

Software development has evolved from the 'on-the-metal' programming of assembly and machine codes to higher-level paradigms such as procedural, structured, functional, logic, and object-oriented programming. Each of these paradigms has provided new tools and techniques to facilitate the creation of complex software systems with speed, ease, and lower development costs. In addition to advancements in programming languages and paradigms, a number of techniques have been developed that allow flexible, dynamic systems to be created. These techniques are used in adaptive systems to enable changes to their behavior and functionality. This section provides an overview of such techniques, including meta-level programming, components and component frameworks, generative programming, and aspect-oriented programming.

2.2.1 Meta-Level Programming

In 1991, Gregor Kiczales's work on combining the concepts of computational reflection and object-oriented programming techniques led to the definition of a meta-object protocol [9]. One of the key aspects of this groundbreaking work was the separation of a system into two levels: the base-level provides system functionality, and the meta-level contains the policies and strategies for the behavior of the system. The inspection and alteration of this meta-level allows for changes in the system's behavior. The base-level provides the implementation of the system and exposes a meta-interface that can be accessed at the meta-level. This meta-interface exposes the internals of the base-level components/objects, allowing them to be examined and their behavior to be altered and reconfigured. The base-level can thus be reconfigured to fine-tune the system's characteristics and behavior to improve performance in different contexts and operational environments. The protocol for doing so is often referred to as the Meta-Object Protocol, or MOP.
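The base-level/meta-level split of meta-level programming can be sketched as follows. This is a toy example with invented names: a hypothetical cache whose eviction policy lives in a meta-object, so the behavior can be reconfigured through the meta-interface without touching the base-level code:

```python
class MetaObject:
    """Meta-level: holds the policy governing a base-level object's
    behavior and exposes it for inspection and alteration (a toy MOP)."""
    def __init__(self, policy):
        self.policy = policy


class Cache:
    """Base-level: implements functionality, consulting its meta-object
    for policy decisions. Reconfiguring the meta-level changes behavior
    without modifying this class."""
    def __init__(self, capacity=3):
        self.meta = MetaObject(policy="fifo")   # the meta-interface
        self.capacity = capacity
        self.items = []

    def put(self, item):
        self.items.append(item)
        if len(self.items) > self.capacity:
            if self.meta.policy == "fifo":
                self.items.pop(0)       # evict the oldest entry
            else:                       # "mru": evict the previously
                self.items.pop(-2)      # most recent entry


c = Cache()
for i in [1, 2, 3, 4]:
    c.put(i)
fifo_items = list(c.items)     # behavior under the default policy
c.meta.policy = "mru"          # adaptation through the meta-level
c.put(5)
mru_items = list(c.items)
```

The base-level `Cache` never changes; inspecting and writing `c.meta.policy` is the (very small) meta-interface through which its behavior is tuned.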


The design of a meta-interface/MOP is central to studies of reflection; the interface should be sufficiently general to permit unanticipated changes to the platform, but should also be restricted to prevent the integrity of the system from being destroyed [10].

Meta Terms Explained

Meta-: Prefixed to technical terms to denote software, data, and so on, which operate at a higher level of abstraction – Oxford English Dictionary.

Meta-Level: The level of software that abstracts the functional and structural level of a system. Meta-level architectures are systems designed with a base-level (implementation level) that handles the execution of services and operations, and a meta-level that provides an abstraction of the base-level.

Meta-Object: The participants in an object-oriented meta-level are known as meta-objects.

Meta-Object Protocol: The protocol used to communicate with meta-objects is known as the Meta-Object Protocol (MOP).

2.2.2 Software Components and Frameworks

With increasing complexity in system requirements and tight development budget constraints, the process of programming applications from scratch is becoming less feasible. Constructing applications from a collection of reusable components and frameworks is emerging as a popular approach to software development. A software component is a discrete functional block of logic. Components can be full applications or encapsulated functionality that can be used as part of a larger application, enabling the construction of applications using components as building blocks. Components have a number of benefits: they simplify application development and maintenance, allowing systems to be more adaptive and to respond rapidly to changing requirements. Reusable components are designed to encompass a reusable block of software, logic, or functionality. In recent years, there has been increased interest in the use of components as a mechanism for building middleware platforms; this approach has enabled middleware platforms to be highly flexible to changing requirements. Component frameworks are a collection of interfaces and interaction protocols that define how components interact with each other and with the framework itself; in essence, frameworks allow components to be plugged into them. Examples of component frameworks include Enterprise JavaBeans (EJB) [11] developed by Sun Microsystems, Microsoft's .NET [12], and the CORBA Component Model (CCM) [13]. Component frameworks have also been used as a medium for components to access middleware services; for example, the EJB component model simplifies the development of middleware applications by providing automatic support for services such as transactions, security, clustering, database connectivity, life-cycle management, instance pooling, and so on. If


components are analogous to building blocks, frameworks can be seen as the cement that holds them together. The component-oriented development paradigm is seen as a major milestone in software construction techniques. The process of creating applications by composing preconstructed program 'blocks' can drastically reduce the cost of software development. Components and component frameworks leverage previous development efforts by capturing key implementation patterns, allowing their reuse in future systems. In addition, the use of replaceable software components can improve reliability, simplify the implementation, and reduce the maintenance of complex applications [14].

2.2.3 Generative Programming

Generative programming [15] is the process of creating programs that construct other programs. The basic objective of a generative program, also known as a program generator [16], is to automate the tedious and error-prone tasks of programming. Given a requirements specification, a highly customized and optimized application can be automatically manufactured on demand. Program generators manufacture source code in a target language from a program specification expressed in a higher-level Domain Specific Language (DSL). Once the requirements of the system are defined in the higher-level DSL, the target language used to implement the system may be changed. For example, given the specification of a text file format, a program generator could be used to create a driver program to edit files in that format. The generator could use Java, C, Visual Basic (VB), or any other language as the target language for implementation; two program generators could be created, a Java version and a C version, allowing the user a choice of implementation for the driver program.
Generative programming allows for high levels of code reuse in systems that share common concepts and tasks, providing an effective method of supporting multiple variants of a program; this collection of variants is known as a program family. Program generation techniques may also be used to create systems capable of adaptive behavior via program recompilation.
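As a toy illustration (the specification format is invented, standing in for a real DSL), a program generator can manufacture source code on demand from a higher-level specification:

```python
def generate_record_class(spec):
    """Toy program generator: manufactures Python source for a record
    class from a high-level specification. A second generator could emit
    Java or C from the same spec, giving a choice of target language."""
    name, fields = spec["name"], spec["fields"]
    params = ", ".join(fields)
    body = "\n".join(f"        self.{f} = {f}" for f in fields)
    return (
        f"class {name}:\n"
        f"    def __init__(self, {params}):\n"
        f"{body}\n"
    )


# The 'requirements specification', expressed as data rather than code.
spec = {"name": "Point", "fields": ["x", "y"]}
source = generate_record_class(spec)

namespace = {}
exec(source, namespace)        # compile and load the generated program
p = namespace["Point"](3, 4)
```

Every member of a program family can be produced by re-running the generator with a different specification, which is what makes generation attractive for families of closely related variants.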

2.3 Overview of Current Research
Adaptive and reflective capabilities will be commonplace in future next-generation middleware platforms. There is consensus [1, 8] that middleware technologies will continue to incorporate this new functionality. At present, these techniques have been applied to a number of middleware areas, and there is growing interest in developing reflective middleware, with a large number of researchers and research groups carrying out investigations in this area. A number of systems have been developed that employ adaptive and reflective techniques; this section provides an overview of some of the more popular systems to have emerged.

2.3.1 Reflective and Adaptive Middleware Workshops

Reflective and adaptive middleware is a very active research field, with a successful workshop on the subject completed at the IFIP/ACM Middleware 2000 conference [17].


Papers presented at this workshop covered a number of topics, including reflective and adaptive architectures and systems, and mathematical models and performance measurements of reflective platforms. Building on the success of this event, a second workshop took place at Middleware 2003; The 2nd Workshop on Reflective and Adaptive Middleware [18] covered a number of topics including nonfunctional properties, distribution, components, and future research trends.

2.3.2 Nonfunctional Properties

Nonfunctional properties of middleware platforms have proved to be very popular candidates for enhancement with adaptive and reflective techniques. These system properties are the behaviors of the system that are not obvious or visible from interaction with the system. This is one of the primary reasons they have proved popular with researchers: because they are not visible in user/system interactions, changes made to these properties will not affect the user/system interaction protocol. Nonfunctional properties that have been enhanced with adaptive and reflective techniques include distribution, responsiveness, availability, reliability, scalability, transactions, and security.

2.3.2.1 Security

The specialized Obol [19] programming language provides flexible security mechanisms for the Open ORB Python Prototype (OOPP). In OOPP, the flexible security mechanisms based on Obol are a subset of the reflective features of the middleware platform, enabling programmable security via Obol. Reflective techniques within OOPP provide the mechanisms needed to access and modify the environment; Obol is able to access the environment meta-model, making it possible to change and replace security protocols without changing the implementation of the components or the middleware platform.

2.3.2.2 Quality-of-Service

A system with a Quality-of-Service (QoS) demand is one that will perform unacceptably if it is not carefully configured and tuned for the anticipated environment and deployment infrastructure. Systems may provide different levels of service to the end-user, depending on the deployment environment and operational conditions. An application that is targeted to perform well in a specific deployment environment will most likely have trouble if the environment changes. As an illustration of this concept, imagine a system designed to support 100 concurrent users; if the system were deployed in an environment with 1,000 or 10,000 users, it would most likely struggle to provide the same level of service, or QoS, when faced with demands 10 or 100 times greater than it was designed to handle. Another example is a mobile distributed multimedia application. This type of application may experience drastic changes in the amount of bandwidth provided by the underlying network infrastructure, from the broadband connections offered by residential or office networks to the 9600 bps GSM connection used while traveling. An application designed to operate on a broadband network will encounter serious difficulties when deployed over the substantially slower GSM-based connection. Researchers at Lancaster University have developed a reflective middleware platform [10] that adapts to the underlying network


infrastructure in order to improve the QoS provided by the application. This research alters the methods used to deliver content to the mobile client, for example by selecting a video and audio compression component appropriate for the available network bandwidth, or by adding a jitter-smoothing buffer on a network with erratic delay characteristics.

2.3.2.3 Fault-Tolerant Components

Adaptive Fault-Tolerance in the CORBA Component Model (AFT-CCM) [20] is designed for building component-based applications with QoS requirements related to fault-tolerance. AFT-CCM is based on the CORBA Component Model (CCM) [13] and allows an application user to specify QoS requirements, such as levels of dependability or availability, for a component. On the basis of these requirements, an appropriate replication technique and the number of component replicas are chosen to achieve the target. These techniques allow a component-based distributed application to tolerate possible component and machine faults. The AFT-CCM model adds fault-tolerance to a component with complete transparency for the application, without requiring changes to its implementation.

2.3.3 Distribution Mechanism

A number of reflective research projects focus on improving the flexibility of application distribution. This section examines the use of adaptive and reflective techniques in enhancing application distribution mechanisms.

2.3.3.1 GARF and CodA

Projects such as GARF and CodA are seen as milestones in reflective research. GARF [21] (automatic generation of reliable applications) is an object-oriented tool that supports the design and programming of reliable distributed applications. GARF wraps the distribution primitives of a system to create a uniform abstract interface that allows the basic behavior of the system to be enhanced. One technique for improving application reliability is to replicate the application's critical components over several machines. Group-communication schemes are used to implement such replication by providing multicasting to deliver messages to groups of replicas; to implement this group-communication, multicasting functionality needs to be mixed with application functionality. GARF acts as an intermediary between group-communication functionality and applications; this promotes software modularity by clearly separating the implementation of concurrency, distribution, and replication from the functional features of the application.

The CodA [22] project is a pioneering landmark in reflective research. Designed as an object meta-level architecture, its primary design goal was to allow for decomposition by logical behavior. Through the application of this object-oriented decomposition technique, CodA eliminated the problems of 'monolithic' meta-architectures. CodA achieves this by using multiple meta-objects, each describing a single small behavioral aspect of an object, instead of one large meta-object that describes all aspects of an object's behavior. Once the distribution concern has been wrapped in meta-objects, aspects of the system's distribution such as message queues, message sending, and message receiving can be controlled. This offers a fine-grained approach to decomposition.
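CodA's decomposition idea can be sketched as follows. This is a toy model with invented names, not CodA's actual API: each small meta-object governs one behavioral aspect of an object, so replacing one meta-object alters only that aspect:

```python
class QueueAspect:
    """Meta-object for the message-queue aspect of an object."""
    def enqueue(self, obj, msg):
        obj.inbox.append(msg)           # default: FIFO ordering


class SendAspect:
    """Meta-object for the message-sending aspect."""
    def send(self, obj, msg):
        obj.outbox.append(msg)


class ReflectiveObject:
    """Base object whose behavior is decomposed, CodA-style, into several
    fine-grained meta-objects rather than one monolithic meta-object."""
    def __init__(self):
        self.inbox, self.outbox = [], []
        self.meta = {"queue": QueueAspect(), "send": SendAspect()}

    def receive(self, msg):
        self.meta["queue"].enqueue(self, msg)

    def send(self, msg):
        self.meta["send"].send(self, msg)


class PriorityQueueAspect(QueueAspect):
    """Swapping in a different queue meta-object changes only queueing."""
    def enqueue(self, obj, msg):
        obj.inbox.insert(0, msg)        # newest messages jump the queue


node = ReflectiveObject()
node.receive("a")
node.receive("b")                       # default FIFO behavior
node.meta["queue"] = PriorityQueueAspect()
node.receive("c")                       # only the queue aspect changed
```

The sending aspect is untouched by the swap, which is the point of fine-grained decomposition: each behavioral concern can be inspected and replaced independently.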


2.3.3.2 Reflective Architecture Framework for Distributed Applications

The Reflective Architecture Framework for Distributed Applications (RAFDA) [23] is a reflective framework enabling the transformation of a nondistributed application into a flexibly distributed equivalent. RAFDA allows an application to adapt to its environment by dynamically altering its distribution boundaries: RAFDA can transform a local object into a remote object, and vice versa, making local and remote objects interchangeable. As illustrated in Figure 2.1, RAFDA achieves flexible distribution boundaries by substituting an object with a proxy to a remote instance. In this example, objects A and B both hold references to a shared instance of object C, and all objects exist in a single address space (nondistributed). The objective is to move object C to a new address space. RAFDA transforms the application so that the instance of C is remote to its reference holders; the instance of C in address space A is replaced with a proxy, Cp, to the remote implementation of C in address space B. The transformation is performed at the bytecode level: RAFDA identifies points of substitutability and extracts an interface for each substitutable class; every reference to a substitutable class is then transformed to use the extracted interface. The proxy implementations provide a number of transport options, including SOAP, RMI, and IIOP. The use of interfaces makes nonremote and remote versions of a class interchangeable, thus allowing for flexible distribution boundaries. Policies determine the substitutable classes and the transport mechanisms used for the distribution.

2.3.3.3 mChaRM

[Figure 2.1: RAFDA redistribution transformation. Objects A and B share an instance of C in a single address space; after transformation, C resides in address space B and is replaced in address space A by the proxy Cp. Reproduced by permission of Springer, in Portillo, A. R., Walker, S., Kirby, G., et al. (2003) A Reflective Approach to Providing Flexibility in Application Distribution. Proceedings of the 2nd Workshop on Reflective and Adaptive Middleware, Middleware 2003, Rio de Janeiro, Brazil.]

The Multi-Channel Reification Model (mChaRM) [24] is a reflective approach that reifies and reflects directly on communications. The mChaRM model does not operate on base-objects but on the communications between base-objects, resulting in a communication-oriented model of reflection. This approach abstracts and encapsulates interobject communications and enables the meta-programmer to enrich and/or replace the predefined communication semantics. mChaRM handles a method call as a message sent through a logical channel between a set of senders and a set of receivers. The model supports the reification of such logical channels into logical objects called multi-channels. A multi-channel can enrich the messages (method calls) with new functionality. This

technique allows for a finer reification/reflection granularity than previous approaches, and for a simplified approach to the development of communication-oriented software. mChaRM is specifically targeted at designing and developing complex communication mechanisms from the ground up, or at extending the behavior of an existing communication mechanism; it has been used to extend the standard Java RMI framework into one supporting multicast RMI.

2.3.3.4 Open ORB

The Common Object Request Broker Architecture (CORBA) [25] is a popular choice for research projects applying adaptive and reflective techniques, and a number of projects have incorporated these techniques into CORBA Object Request Brokers (ORBs). Open ORB 2 [10] is an adaptive and dynamically reconfigurable ORB supporting applications with dynamic requirements. Open ORB has been designed from the ground up to be consistent with the principles of reflection. It exposes an interface (framework) that allows components to be plugged in; these components control several aspects of the ORB's behavior, including thread and buffer management and protocols. Open ORB is implemented as a collection of configurable components that can be selected at build time and reconfigured at runtime; this process of component selection and configuration enables the ORB to be adaptive. Open ORB is implemented with a clear separation between base-level and meta-level operations: the ORB's meta-level is a causally connected self-representation of the ORB's base-level (implementation) [10]. Each base-level component may have its own private set of meta-level components, collectively referred to as the component's meta-space. Open ORB breaks its meta-space down into several distinct models. The benefit of this approach is a simplified interface to the meta-space, achieved by separating concerns between different system aspects and allowing each distinct meta-space model to give a different view of the platform implementation that can be independently reified. As shown in Figure 2.2, the models cover the interface, architecture, interception, and resource

[Figure 2.2 depicts the Open ORB architecture within a single address space: a meta-level (containing meta-objects) exposing the architecture, interface, interception, and resource (per address space) meta-models, sitting above a base-level containing the implementation objects of the base-level components.]

Figure 2.2 Open ORB architecture. Reproduced by permission of IEEE, in Blair, G. S., Coulson, G., Andersen, A., et al. (2001) The Design and Implementation of Open ORB 2, IEEE Distributed Systems Online, 2(6)

meta-spaces. These models provide access to the underlying platform and component structure through reflection; every application-level component offers a meta-interface that provides access to an underlying meta-space, the support environment for the component.

Structural Reflection

Open ORB version 2 uses two meta-models to deal with structural reflection: one for its external interfaces and one for its internal architecture. The interface meta-model acts similarly to the Java reflection API, allowing for the dynamic discovery of a component's interfaces at runtime. The architecture meta-model details the implementation of a component, broken down into two parts: a component graph (a local binding of components) and an associated set of architectural constraints to prevent system instability [10]. Such an approach makes it possible to place strict controls on access rights for the ORB's adaptation, allowing all users to access the interface meta-model while restricting access to the architecture meta-model, so that only trusted third parties may modify the system architecture.

Behavioral Reflection

Two further meta-models exist for behavioral reflection: the interception and resource models. The interception model enables the dynamic insertion of interceptors on a specific interface, allowing for the addition of pre-behavior and post-behavior. This technique may be used to introduce nonfunctional behavior into the ORB. Unique to Open ORB is its resource meta-model, which allows access to the underlying system resources, including memory and threads, via resource abstractions, resource factories, and resource managers [10].

2.3.3.5 DynamicTAO – Real-Time CORBA

Another CORBA-based reflective middleware project is DynamicTAO [26]. DynamicTAO is designed to introduce dynamic reconfigurability into the TAO ORB [27] by adding reflective and adaptive capabilities.
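Interception, as used by Open ORB's interception meta-model and by interceptor mechanisms in other reflective ORBs, amounts to wrapping an invocation with pre- and post-behavior that can be attached and detached at runtime. The following is a minimal, platform-neutral sketch of the idea; the class and method names are illustrative and are not the actual API of either platform:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative interceptor chain: each interceptor sees the request before
// (and can transform it for) the next one, so monitoring, compression, or
// access-control behavior can be added without touching the core dispatch path.
public class InterceptorChain {
    private final List<UnaryOperator<String>> interceptors = new ArrayList<>();

    // Dynamically register an interceptor at runtime
    public void add(UnaryOperator<String> interceptor) {
        interceptors.add(interceptor);
    }

    // Pass the request through every registered interceptor in order
    public String dispatch(String request) {
        String r = request;
        for (UnaryOperator<String> i : interceptors) {
            r = i.apply(r);
        }
        return r;
    }

    public static void main(String[] args) {
        InterceptorChain chain = new InterceptorChain();
        chain.add(r -> { System.out.println("monitor: " + r); return r; }); // monitoring pre-behavior
        chain.add(r -> "compressed(" + r + ")");                            // compression behavior
        System.out.println(chain.dispatch("getQuote"));
    }
}
```

Because the chain is an ordinary runtime data structure, interceptors can be inserted or removed while the system is running, which is exactly the property that makes interception useful for behavioral reflection.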
DynamicTAO enables on-the-fly reconfiguration and customization of the TAO ORB's internal engine, while ensuring it is maintained in a consistent state. The architecture of DynamicTAO is illustrated in Figure 2.3; in this architecture, reification is achieved through a collection of component configurators. Component implementations are provided via libraries. DynamicTAO allows these components to be dynamically loaded into and unloaded from the ORB's process at runtime, enabling the ORB to be inspected and its configuration to be adapted. Component implementations are organized into categories representing different aspects of the ORB's internal engine, such as concurrency, security, monitoring, scheduling, and so on. Inspection in DynamicTAO is achieved through the use of interceptors, which may be used to add support for monitoring; these interceptors may also be used to introduce behaviors for cryptography, compression, access control, and so on. DynamicTAO is designed to add reflective features to the TAO ORB; reusing the codebase of the existing TAO ORB results in a very flexible, dynamic, and customizable system implementation.
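The component-configurator idea can be pictured as a registry of named strategy categories whose implementations are loaded, inspected, and unloaded at runtime. The sketch below is only an illustration of the concept in Java (DynamicTAO itself is C++, and all names here are hypothetical, not its real API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative configurator: strategy implementations for ORB aspects
// (concurrency, scheduling, security, monitoring, ...) can be swapped in and
// out of named categories while the system runs.
public class OrbConfigurator {
    // Stand-in for a dynamically loaded strategy implementation
    static class ThreadPoolStrategy {}

    private final Map<String, Object> strategies = new HashMap<>();

    // Load (or replace) the implementation for a category such as "Concurrency"
    public void load(String category, Object impl) {
        strategies.put(category, impl);
    }

    // Unload an implementation, e.g. before hot-swapping in a new library
    public void unload(String category) {
        strategies.remove(category);
    }

    // Inspection: report the currently configured implementation
    public String inspect(String category) {
        Object impl = strategies.get(category);
        return category + " -> " + (impl == null ? "none" : impl.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        OrbConfigurator orb = new OrbConfigurator();
        orb.load("Concurrency", new ThreadPoolStrategy());
        System.out.println(orb.inspect("Concurrency")); // Concurrency -> ThreadPoolStrategy
        orb.unload("Concurrency");
        System.out.println(orb.inspect("Concurrency")); // Concurrency -> none
    }
}
```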

[Figure 2.3 depicts the dynamicTAO architecture: servant configurators (Servant 1 through Servant N) and a TAO ORB configurator managing the domain controller together with its concurrency, scheduling, security, monitoring, and other strategy components.]

Figure 2.3 Architecture of dynamicTAO. Reproduced by permission of Springer, in Kon, F., Román, M., Liu, P., et al. (2002) Monitoring, Security, and Dynamic Configuration with the dynamicTAO Reflective ORB. Proceedings of the IFIP/ACM International Conference on Distributed Systems Platforms and Open Distributed Processing (Middleware'2000), New York

2.3.3.6 Reflective Channel Hierarchies

The Chameleon messaging service [28] is a research prototype that provides a generic framework in which reflective techniques can perform customizations and adaptations to Message-Oriented Middleware (MOM). Chameleon focuses specifically on the application of such techniques within message queues and the channel/topic hierarchies used as part of the publish/subscribe messaging model. The publish/subscribe messaging model is a very powerful mechanism used to disseminate information between anonymous message consumers and producers. In the publish/subscribe model, clients producing messages "publish" them to a specific topic or channel. These channels are "subscribed" to by clients wishing to consume messages of interest to them. Hierarchical channel structures allow channels to be defined in a hierarchical fashion, so that channels may be nested under other channels. Each subchannel offers a more granular selection of the messages contained in its parent channel. Clients of hierarchical channels subscribe to the most appropriate level of channel in order to receive the most relevant messages. For further information on the publish/subscribe model and hierarchical channels, please refer to Chapter 1 (Message-Oriented Middleware).

Current MOM platforms do not define the structure of channel hierarchies; application developers must therefore manually define the structure of the hierarchy at design time. This process can be tedious and error-prone. To solve this problem, the Chameleon messaging architecture implements reflective channel hierarchies [28] with the ability to autonomously self-adapt to their deployment environment. The Chameleon architecture exposes a causally connected meta-model to express the set-up and configuration of the
queues and the structure of the channel hierarchy, which enables the runtime inspection and adaptation of the hierarchy and queues within MOM systems. Chameleon's adaptive behavior originates from its reflection engine, whose actions are governed by pluggable reflective policies; these intelligent policies contain the rules and strategies used in the adaptation of the service. Potential policies could be designed to optimize the distribution mechanism for group messaging using IP multicast, or to provide support for federated messaging services using techniques from Hermes [29] or Herald [30]. Policies could also be designed to work with different levels of filtering (subject, content, composite) or to support different formats of message payloads (XML, JPEG, PDF, etc.). Policies could further be used to introduce new behaviors into the service; potential behaviors include collaborative filtering/recommender systems, message transformations, and monitoring and accounting functionality.
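The pluggable-policy idea can be sketched as an engine that consults an externally supplied policy object when deciding whether to adapt the channel hierarchy. The interface and class names below are illustrative only, in the spirit of Chameleon's reflection engine rather than its actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical pluggable reflective policy: the engine stays generic while the
// adaptation rules live in an interchangeable policy object.
interface ReflectivePolicy {
    // Inspect per-channel metrics and decide whether the hierarchy should adapt
    boolean shouldSplit(String channel, int messagesPerSecond);
}

public class ReflectionEngine {
    private final ReflectivePolicy policy;
    private final Map<String, Integer> load = new HashMap<>();

    public ReflectionEngine(ReflectivePolicy policy) {
        this.policy = policy;
    }

    public void report(String channel, int messagesPerSecond) {
        load.put(channel, messagesPerSecond);
        if (policy.shouldSplit(channel, messagesPerSecond)) {
            // A real system would adapt the causally connected meta-model here,
            // e.g. create sub-channels to spread the load
            System.out.println("splitting " + channel);
        }
    }

    public static void main(String[] args) {
        // Plug in a policy that splits any channel exceeding 1000 msg/s
        ReflectionEngine engine = new ReflectionEngine((ch, mps) -> mps > 1000);
        engine.report("stocks/quotes", 1500);
    }
}
```

Swapping the lambda for a different `ReflectivePolicy` implementation changes the adaptation strategy without any change to the engine itself, which is the essence of governing adaptation through pluggable policies.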

2.4 Future Research Directions
While it is difficult to forecast the future direction of any discipline, it is possible to highlight a number of developments on the immediate horizon that could affect the direction taken by reflective research over the coming years. This section will look at the implications of new software engineering techniques and highlight a number of open research issues and potential drawbacks that affect adaptive and reflective middleware platforms. The section will conclude with an introduction to autonomic computing and the potential synergy between autonomic and reflective systems.

2.4.1 Advances in Programming Techniques

The emergence of multifaceted software paradigms such as Aspect-Oriented Programming (AOP) and Multi-Dimensional Separation of Concerns (MDSOC) will have a profound effect on software construction. These new paradigms have a number of benefits for the application of adaptive and reflective techniques in middleware systems. This section provides a brief overview of these new programming techniques.

2.4.1.1 Aspect-Oriented Programming (AOP)

We can view a complex software system as a combined implementation of multiple concerns, including business logic, performance, logging, data and state persistence, debugging and unit tests, error checking, multithreaded safety, security, and various other concerns. Most of these are system-wide concerns and are implemented throughout the entire codebase of the system; such system-wide concerns are known as crosscutting concerns. The most popular practice for implementing adaptive and reflective systems is the Object-Oriented (OO) paradigm. Beyond the many benefits and advantages object-oriented programming has over other programming paradigms, object-oriented and reflective techniques are a natural fit. The OO paradigm is a major advance in the way we think about and build software, but it is not a silver bullet and has a number of limitations. One of these limitations is inadequate support for crosscutting concerns. The Aspect-Oriented Programming (AOP) [31] methodology helps overcome this limitation.

AOP complements OO by creating another form of separation that allows the implementation of a crosscutting concern as a single unit. With this new unit of concern separation, known as an aspect, crosscutting concerns become more straightforward to implement. Aspects can be changed, removed, or inserted into a system's codebase, enabling the reuse of crosscutting code.

A brief illustration helps explain the concept. The most commonly used example of a crosscutting concern is logging or execution tracking; this type of functionality is implemented throughout the entire codebase of an application, making it difficult to change and maintain. AOP [31] allows this functionality to be implemented in a single aspect; this aspect can then be applied (weaved) throughout the entire codebase to achieve the required functionality.

Dynamic AOP for Reflective Middleware

The Object-Oriented paradigm is widely used within reflective platforms. However, a clearer separation of crosscutting concerns would benefit meta-level architectures, which provides the incentive to utilize AOP within reflective middleware projects. A major impediment to the use of AOP techniques within reflective systems has been the implementation techniques used by the initial implementations of AOP [32]. Traditionally, when an aspect is inserted into an object, the compiler weaves the aspect into the object's code; this results in the absorption of the aspect into the object's runtime code. The lack of preservation of aspects as identifiable runtime entities is a hindrance to the dynamic adaptive capabilities of systems created with aspects. Workarounds to this problem exist in the form of dynamic system recompilation at runtime; however, this is not an ideal solution, and a number of issues, such as the transference of system state, pose problems. Alternative implementations of AOP have emerged that do not have this limitation.
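To make the contrast with compile-time weaving concrete, the standard `java.lang.reflect.Proxy` API can attach crosscutting behavior to an object at runtime, where it remains an identifiable, removable entity rather than being absorbed into the woven code. This is only a sketch of the idea (the `Greeter` interface and handler are illustrative):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Illustrative example: a logging "aspect" attached at runtime via a dynamic
// proxy, with no compile-time weaving; the handler could be detached again by
// handing clients the unwrapped target.
interface Greeter {
    String greet(String name);
}

public class DynamicLoggingAspect {
    public static void main(String[] args) {
        Greeter target = name -> "hello " + name;

        InvocationHandler logging = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());  // pre-behavior
            Object result = method.invoke(target, methodArgs);
            System.out.println("after " + method.getName());   // post-behavior
            return result;
        };

        Greeter wrapped = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                logging);

        System.out.println(wrapped.greet("world"));
    }
}
```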
One such approach proposes an AOP method of middleware construction using composition [33] to preserve aspects as runtime entities; this method of construction facilitates the application of AOP to building reflective middleware platforms. Another approach, based on Java bytecode manipulation libraries such as Javassist [34], provides a promising method of implementing AOP frameworks (e.g., JBoss AOP) with dynamic runtime aspect weaving.

One of the founding works on AOP highlighted a process of performance optimization that bloated a 768-line program to 35,213 lines. Rewriting the program with AOP techniques reduced the code to 1039 lines while retaining most of the performance benefits. Grady Booch, discussing the future of software engineering techniques, predicts the rise of multifaceted software, that is, software that can be composed in multiple ways at once; he cites AOP as one of the first techniques to facilitate a multifaceted capability [35].

2.4.1.2 Multi-Dimensional Separation of Concerns

The key difference between AOP and Multi-Dimensional Separation of Concerns [36] (MDSOC) is the scale of multifaceted capabilities. AOP allows multiple crosscutting aspects to be weaved into a program, changing its composition through the addition of these aspects. Unlike AOP, MDSOC's multifaceted capabilities are not limited to the use of aspects; MDSOC allows the entire codebase to be multifaceted, enabling the software to be constructed in multiple dimensions. MDSOC also supports the separation of concerns for a single model [37]. When using AOP, you start with a base and use individually coded aspects to augment it. Working from a specific base makes the development of the aspects more straightforward but also introduces limitations, such as restrictions on aspect composition [37]; you cannot have an aspect of an aspect. In addition, aspects can be tightly coupled to the codebase for which they are designed, which limits their reusability. MDSOC enables software engineers to construct a collection of separate models, each encapsulating a concern within a class hierarchy specifically designed for that concern [37]. Each model can be understood in isolation, any model can be augmented in isolation, and any model can be augmented with another model. These techniques streamline the division of goals and tasks for developers. Even with these advances, the primary benefit of MDSOC comes from its ability to handle multiple decompositions of the same software simultaneously: some developers can work with classes, others with features, others with business rules, others with services, and so on, even though they model the system in substantially different ways [37].

To further illustrate these concepts, consider the example given by Ossher [37] of a software company developing personnel management systems for large international organizations. For the sake of simplicity, assume that the software has two areas of functionality: personnel tracking, which records employees' personal details such as name, address, age, and phone number, and payroll management, which handles salary and tax information. Different clients seeking similar software approach the fictitious company; they like the software but have specific requirements. Some clients want the full system, while others do not want the payroll functionality and refuse to put up with the extra overhead within their system implementation. On the basis of market demands, the software house needs to be able to mix and match the payroll feature. It is extremely difficult to accomplish this sort of dynamic feature selection using standard object-oriented technology. MDSOC allows this flexibility to be achieved within the system with on-demand remodularization capabilities; it also allows the personnel and payroll functionality to be developed almost entirely separately, using different class models that best suit the functionality they implement.

2.4.2 Open Research Issues

There are a number of open research issues with adaptive and reflective middleware systems. The resolution of these open issues is critical to the wide-scale deployment of adaptive and reflective techniques in production and mission-critical environments. This section highlights a number of the more common issues.

2.4.2.1 Open Standards

The most important issue currently faced by adaptive and reflective middleware researchers is the development of an open standard for the interaction of their middleware platforms. An international consensus is needed on the interfaces and protocols used to interact with these platforms. The emergence of such standards is important to support the development
of next-generation middleware platforms that are configurable and reconfigurable, and also to offer applications portability and interoperability across proprietary implementations of such platforms [10]. Service specifications and standards are needed to provide a stable base upon which to create services for adaptive and reflective middleware platforms. Because of the large number of application domains that may use these techniques, one generic standard may not be enough; a number of standards may be needed. As adaptive and reflective platforms mature, the ability of such systems to dynamically discover components, with corresponding configuration information, at runtime would be desirable. Challenges exist with this proposition: while it is currently possible to examine a component's interface at runtime, no clear method exists for documenting the functionality of a component, nor its performance or behavioral characteristics. A standard specification is needed to state what is offered by a component.

2.4.2.2 System Cooperation

One of the most interesting research challenges in future middleware platforms is the area of cooperation and coordination between middleware services to achieve a mutually beneficial outcome. Middleware platforms may provide different levels of service, depending on environmental conditions, resource availability, and costs. John Donne said 'No man is an island'; likewise, no adaptive or reflective middleware platform, service, or component is an island, and each must be aware of both the individual and group consequences of its actions. Next-generation middleware systems must coordinate and trade with each other in order to maximize the available resources to meet system requirements. To achieve this objective, a number of topics need to be investigated. The concept of negotiation-based adaptation will require mechanisms for the trading of resources and resource usage.
A method of defining a resource, its capabilities, and an assurance of the QoS offered needs to be developed; trading partners need to understand the commodities they are trading in. Resources may be traded in a number of ways, from simple barter between two services to complex auctions with multiple participants, each with their own tradable resource budget, competing for the available resources. Once a trade has been finalized, enforceable contracts will be needed to ensure compliance with the trade agreement. This concept of resource trading could be extended across organizational boundaries, with the trading of unused or surplus resources in exchange for monetary reimbursement.

2.4.2.3 Resource Management

In order for next-generation middleware to maximize system resource usage and improve quality of service, it must have greater knowledge of the available resources and their current and projected status. Potentially, middleware platforms may wish to participate in system resource management. A number of resource support services will need to be developed, including mechanisms to interact with a resource and obtain its status, and coordination techniques to allow a resource to be reserved for future usage at a specified time. A method allowing middleware to provide resource management policies to the underlying system-level resource managers, or at minimum to influence these policies by indicating the resources it will need to meet its requirements, will also be required.
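No standard API for such resource reservation exists yet; the following is a hedged sketch of what a minimal reservation service might look like, with all names hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the resource-reservation idea: a middleware service
// asks to reserve capacity for future use, and a refusal is the point at which
// negotiation or trading between services could begin.
public class ResourceManager {
    private final Map<String, Integer> available = new HashMap<>();

    public ResourceManager() {
        available.put("threads", 64);  // example managed resource and budget
    }

    // Attempt to reserve capacity; in a real system the returned grant would
    // be wrapped in an enforceable contract with QoS assurances
    public synchronized boolean reserve(String resource, int amount) {
        int free = available.getOrDefault(resource, 0);
        if (free < amount) {
            return false;              // insufficient capacity; trading could start here
        }
        available.put(resource, free - amount);
        return true;
    }

    public static void main(String[] args) {
        ResourceManager rm = new ResourceManager();
        System.out.println(rm.reserve("threads", 48));  // true: reservation granted
        System.out.println(rm.reserve("threads", 48));  // false: not enough capacity left
    }
}
```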


2.4.2.4 Performance

Adaptive and reflective systems may suffer in performance because of the additional infrastructure required to facilitate adaptations and the extra self-inspection workload required by self-adaptation; such systems carry a performance overhead compared to a traditional implementation of a similar system. However, under certain circumstances, the changes made to the platform through adaptation can improve performance and reduce the overall workload placed on the system. The savings achieved by adaptation may partially, or even completely, offset the performance overhead.

System speed may not always be the most important measure of performance for a given system. For example, the Java programming language is one of the most popular programming languages even though it is not the fastest; other features, such as its cross-platform compatibility, make it a desirable option. With an adaptive and reflective platform, a performance decrease may be expected from the introduction of new features. What performance cost is acceptable to pay for a new feature? What metrics may be used to measure such a trade-off? How can a real measurement of benefit be achieved?

Reflective systems will usually have a much larger codebase than nonreflective ones, due to the extra code needed to allow the system to be inspected and adapted, as well as the logic needed to evaluate and reason about the system's adaptation. This larger codebase results in the platform having a larger memory footprint. What techniques could be used to reduce this extra code? Could this code be made easily reusable across application domains?
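One crude way to put numbers on the overhead question is to compare a direct method call with a reflective `Method.invoke` of the same method. This is only a rough sketch, not a rigorous benchmark (it does not control for JIT warm-up, and a tool such as JMH would be used in practice):

```java
import java.lang.reflect.Method;

// Rough micro-measurement of reflective-call overhead versus a direct call.
public class ReflectionOverhead {
    public static int inc(int x) {
        return x + 1;
    }

    public static void main(String[] args) throws Exception {
        Method m = ReflectionOverhead.class.getMethod("inc", int.class);
        int n = 1_000_000;
        long sum = 0;

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sum += inc(i);                              // direct call
        }
        long direct = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sum += (Integer) m.invoke(null, i);         // reflective call
        }
        long reflective = System.nanoTime() - t0;

        // The absolute numbers vary by JVM and hardware; the point is the ratio
        System.out.println("direct: " + direct + " ns, reflective: " + reflective + " ns, sum: " + sum);
    }
}
```

Measurements of this kind are one possible input to the trade-off questions above: they quantify the cost side, though measuring the benefit of an adaptation remains the harder problem.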

2.4.2.5 Safe Adaptation

Reflection focuses on increasing flexibility and the level of openness. The lack of safe bounds preventing unconstrained system adaptation, which can result in system malfunctions, is a major concern for reflective middleware developers; this has been seen as an 'Achilles heel' of reflective systems [38]. It is important for system engineers to consider the impact that reflection may have on system integrity and to include relevant checks to ensure that integrity is maintained. Techniques such as architectural constraints are a step in the right direction toward allowing safe adaptations. However, more research is needed in this area, particularly where dynamically discovered components are introduced into a platform. How do we ensure that such components will not corrupt the platform? How do we discover the existence of such problems? Again, standards will be needed to document component behavior, with constant checking of a component's operations to ensure it does not stray from its contracted behavior.

2.4.2.6 Clearer Separation of Concerns

The clearer separation of concerns within code is an important issue for middleware platforms. A clear separation of concerns would reduce the work required to apply adaptive and reflective techniques to a larger number of areas within middleware systems. The use of dynamic AOP and MDSOC techniques to implement nonfunctional and crosscutting concerns eases the burden of introducing adaptive and reflective techniques within these areas.

The separation of concerns with respect to responsibility for adaptation is also an important research area. Multiple subsystems within a platform may be competing for specific adaptations, and these adaptations may not be compatible with one another. With self-configuring systems, and specifically when these systems evolve to become self-organizing groups, who is in charge of the group's behavior? Who mediates between the conflicting systems? Who chooses which adaptations should take place? These issues are very important to the acceptance of self-configuring systems within production environments.

The Object Management Group (OMG) Model Driven Architecture (MDA) [39] defines an approach for developing systems that separates the specification of system functionality from the specification of that functionality's implementation on a specific technology. MDA can be seen as an advance on the concept of generative programming. The MDA approach uses a Platform Independent Model (PIM) to express an abstract system design that can be implemented by mapping or transforming it to one or more Platform Specific Models (PSMs). The major benefit of this approach is that the system model is defined above a constantly changing implementation technology, allowing the system to be updated to the latest technologies by simply switching to the PSMs for the new technology. The use of MDA in conjunction with reflective component-based middleware platforms could be a promising approach for developing future distributed systems.

2.4.2.7 Deployment into Production Environments

Deployment of adaptive and reflective systems into production mission-critical environments will require these systems to reach a level of maturity where system administrators feel comfortable with such a platform in their environment. Of utmost importance to reaching this goal is the safe adaptation of the system, with predictable results in the system's behavior. The current practices used to test systems are inadequate for adaptive and reflective systems. In order for these systems to be accepted as a deployable technology, it is important for the research community to develop the necessary practices and procedures to test adaptive and reflective systems and ensure they perform predictably; such mechanisms will promote confidence in the technology. Adaptive and reflective techniques must also mature enough to require only the minimum amount of system adaptation necessary to achieve the desired goal. Once these procedures are in place, an incremental approach to the deployment of these systems is needed; the safe coexistence of old and new technologies will be critical to acceptance, and it will be the responsibility of adaptive and reflective systems to ensure that their adaptations do not interfere with systems that rely on them or systems they interact with.

2.4.3 Autonomic Computing

As system workloads and environments become more unpredictable and complex, they will require skilled administration personnel to install, configure, maintain, and provide 24/7 support. In order to address this problem, IBM has announced an autonomic computing initiative. IBM's vision of autonomic computing [40] is an analogy with the human autonomic nervous system; this biological system relieves the conscious brain of the burden of having to deal with low-level routine bodily functions such as muscle use, cardiac muscle use (respiration), and glands. An autonomic computing system would relieve the burden of low-level functions such as installation, configuration, dependency management, performance optimization management, and routine maintenance from the "conscious brain", the system administrators. The basic goal of autonomic computing is to simplify and automate the management of computing systems, both hardware and software, allowing them to self-manage without the need for human intervention. Four fundamental characteristics are needed for an autonomic system to be self-managing; these are described in Table 2.1. The common theme shared by all of these characteristics is that each of them requires the system to handle functionality that has traditionally been the responsibility of a human system administrator.

Table 2.1 Fundamental characteristics of autonomic systems

Self-Configuring: The system must adapt automatically to its operating environment; hardware and software platforms must possess a self-representation of their abilities and self-configure to the environment.

Self-Healing: Systems must be able to diagnose and resolve service interruptions. For a system to be self-healing, it must be able to recognize a failure and isolate it, thus shielding the rest of the system from its erroneous activity. It must then be capable of recovering transparently from failure by fixing or replacing the section of the system that is responsible for the error.

Self-Optimizing: On a constant basis, the system must be evaluating potential optimizations. Through self-monitoring and resource tuning, and through self-configuration, the system should self-optimize to efficiently maximize resources to best meet the needs of its environment and end users.

Self-Protecting: Perhaps the most interesting of all the characteristics needed by an autonomic system, self-protecting systems need to protect themselves from attack. These systems must anticipate a potential attack, detect when an attack is under way, identify the type of attack, and use appropriate countermeasures to defeat or at least nullify the attack. Attacks on a system can be classified as Denial-of-Service (DoS) or the infiltration of an unauthorized user into sensitive information or system functionality.

Within the software domain, adaptive and reflective techniques will play a key role in the construction of autonomic systems. Adaptive and reflective techniques already exhibit a number of the fundamental characteristics needed by autonomic systems; thus, reflective and adaptive middleware provides ideal foundations for the construction of autonomic middleware platforms. The merger of these two strands of research is a realistic prospect. The goals of autonomic computing highlight areas for the application of reflective and adaptive techniques; these areas include self-protection and self-healing, with some work already initiated in the area of fault-tolerance [20].
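The self-optimizing characteristic in Table 2.1 implies a continuous monitor, analyze, and adapt loop over some managed resource. The sketch below illustrates that loop in miniature; all names are illustrative, and a real autonomic manager would also cover configuration, healing, and protection:

```java
// Minimal sketch of a self-optimizing control loop over a managed resource.
public class AutonomicLoop {
    private int poolSize = 4;  // managed resource (part of the self-representation)

    public int poolSize() {
        return poolSize;
    }

    // Monitor: sample a metric from the environment (stubbed here)
    private double observeUtilization(double sample) {
        return sample;
    }

    // Analyze + adapt: self-optimize the pool to track demand
    public void iterate(double utilizationSample) {
        double u = observeUtilization(utilizationSample);
        if (u > 0.8) {
            poolSize *= 2;                       // scale up under heavy load
        } else if (u < 0.2 && poolSize > 1) {
            poolSize /= 2;                       // shrink when mostly idle
        }
        System.out.println("utilization=" + u + " poolSize=" + poolSize);
    }

    public static void main(String[] args) {
        AutonomicLoop manager = new AutonomicLoop();
        manager.iterate(0.9);   // poolSize grows from 4 to 8
        manager.iterate(0.1);   // poolSize shrinks from 8 back to 4
    }
}
```

The same loop structure is what reflective middleware already provides through inspection (monitoring) and adaptation, which is why the two research strands fit together so naturally.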

2.5 Summary
Middleware platforms are exposed to environments demanding the interoperability of heterogeneous systems, 24/7 reliability, high performance, scalability, and security, while maintaining a high QoS. Traditional monolithic middleware platforms are capable of coping with such demands because they have been designed and fine-tuned in advance to meet these specific requirements. However, next-generation computing environments such as large-scale distribution and mobile, ubiquitous, and pervasive computing will present middleware with dynamic environments with constantly changing operating conditions, requirements, and underlying deployment infrastructures. Traditional static middleware platforms will struggle when exposed to these environments, providing the motivation to develop next-generation middleware systems that can adequately service them. To prepare next-generation middleware to cope with these scenarios, middleware researchers are developing techniques to allow middleware platforms to examine and reason about their environment. Middleware platforms can then self-adapt to suit the current operating conditions based on this analysis; such capability will be a prerequisite for next-generation middleware.

Two techniques have emerged that enable middleware to meet these challenges. Adaptive and reflective techniques allow applications to examine their environment and self-alter in response to dynamically changing environmental conditions, altering their behavior to service the current requirements. Adaptive and reflective middleware is a key emerging paradigm that will help simplify the development of dynamic next-generation middleware platforms [1, 2]. There is growing interest in developing reflective middleware, with a large number of researchers and research groups active in this area. Numerous architectures have been developed that employ adaptive and reflective techniques to provide adaptive and self-adaptive capabilities; these techniques have been applied in a number of areas within middleware platforms, including distribution, responsiveness, availability, reliability, concurrency, scalability, transactions, fault-tolerance, and security.

IBM's autonomic computing initiative envisions a world of self-managing computer systems; such autonomic systems will be capable of self-configuration, self-healing, self-optimization, and self-protection against attack, all without the need for human intervention. Adaptive and reflective middleware platforms will play a key role in the construction of autonomic middleware, as they share a number of common characteristics with autonomic systems.

Bibliography

[1] Schantz, R. E. and Schmidt, D. C. (2001) Middleware for Distributed Systems: Evolving the Common Structure for Network-centric Applications, Encyclopedia of Software Engineering, Wiley & Sons.
[2] Geihs, K. (2001) Middleware Challenges Ahead. IEEE Computer, 34(6).
[3] Blair, G. S., Costa, F. M., Coulson, G., et al. (1998) The Design of a Resource-Aware Reflective Middleware Architecture, Proceedings of the Second International Conference on Meta-Level Architectures and Reflection (Reflection'99), Springer, St. Malo, France.
[4] Loyall, J., Schantz, R., Zinky, J., et al. (2001) Comparing and Contrasting Adaptive Middleware Support in Wide-Area and Embedded Distributed Object Applications. Proceedings of the 21st International Conference on Distributed Computing Systems, Mesa, AZ.

[5] Smith, B. C. (1982) Procedural Reflection in Programming Languages, PhD Thesis, MIT Laboratory of Computer Science. [6] Coulson, G. (2002) What is Reflective Middleware? IEEE Distributed Systems Online.• [7] Geihs, K. (2001) Middleware Challenges Ahead. IEEE Computer, 34(6). [8] Kon, F., Costa, F., Blair, G., et al. (2002) The Case for Reflective Middleware. Communications of the ACM, 45(6). [9] Kiczales, G., Rivieres, J. d., and Bobrow, D. G. (1992) The Art of the Metaobject Protocol, MIT Press.• [10] Blair, G. S., Coulson, G., Andersen, A., et al. (2001) The Design and Implementation of Open ORB 2. IEEE Distributed Systems Online, 2(6). [11] DeMichiel, L. G. and Sun Microsystems, Inc. Enterprise JavaBeansTM Specification, Version 2.1.• [12] Microsoft.• Overview of the .NET Framework White Paper, http://msdn.microsoft.com [13] Object Management Group (2002) CORBA Components OMG Document formal/ 02-06-65. [14] Szyperski, C. (1997) Component Software: Beyond Object-Oriented Programming, Addison-Wesley. [15] Czarnecki, K. and Eisenecker, U. (2000) Generative Programming: Methods, Tools, and Applications, Addison-Wesley. [16] Cleaveland, C. (2001) Program Generators with XML and Java, Prentice Hall. [17] Kon, F., Blair, G. S., and Campbell, R. H. (2000) Workshop on Reflective Middleware. Proceedings of the IFIP/ACM Middleware 2000, New York, USA. [18] Corsaro, A., Wang, N., Venkatasubramanian, N., et al. (2003) The 2nd Workshop on Reflective and Adaptive Middleware. Proceedings of the Middleware 2003, Rio de Janeiro, Brazil. [19] Andersen, A., Blair, G. S., Stabell-Kulo, T., et al. (2003) Reflective Middleware and Security: OOPP meets Obol. Proceedings of the Workshop on Reflective Middleware, Middleware 2003, Rio de Janeiro, Brazil; Springer-Verlag, Heidelberg, Germany. [20] Favarim, F., Siqueira, F., and Fraga, J. (2003) Adaptive Fault-Tolerant CORBA Components. 
Proceedings of the 2nd Workshop on Reflective and Adaptive Middleware, Middleware 2003, Rio de Janeiro, Brazil. [21] Garbinato, B., Guerraoui, R., and Mazouni, K.R. (1993) Distributed Programming in GARF, Proceedings of the ECOOP Workshop on Object-Based Distributed Programming, Springer-Verlag. [22] McAffer, J. (1995) Meta-level Programming with CodA. Proceedings of the European Conference on Object-Oriented Programming (ECOOP)•. [23] Portillo, A. R., Walker, S., Kirby, G., et al. (2003) A Reflective Approach to Providing Flexibility in Application Distribution. Proceedings of the 2nd Workshop on Reflective and Adaptive Middleware, Middleware 2003, Rio de Janeiro, Brazil; Springer-Verlag, Heidelberg, Germany. [24] Cazzola, W. and Ancona, M. (2002) mChaRM: a Reflective Middleware for Communication-Based Reflection. IEEE Distributed System On-Line, 3(2).

U

N

C

O R

R

EC

TE

D

PR

O

O

FS

Bibliography

51

[25] Object Management Group (1998) The Common Object Request Broker: Architecture and Specification. [26] Kon, F., Rom´ n, M., Liu, P., et al. (2002) Monitoring, Security, and Dynamic Cona figuration with the dynamicTAO Reflective ORB. Proceedings of the IFIP/ACM International Conference on Distributed Systems Platforms and Open Distributed Processing (Middleware’2000), New York. [27] Schmidt, D. C. and Cleeland, C. (1999) Applying Patterns to Develop Extensible ORB Middleware. IEEE Communications Special Issue on Design Patterns, 37(4), 54–63. [28] Curry, E., Chambers, D., and Lyons, G. (2003) Reflective Channel Hierarchies. Proceedings of the 2nd Workshop on Reflective and Adaptive Middleware, Middleware 2003, Rio de Janeiro, Brazil; Springer-Verlag, Heidelberg, Germany. [29] Pietzuch, P. R. and Bacon, J. M. (2002) Hermes: A Distributed Event-Based Middleware Architecture. [30] Cabrera, L. F., Jones, M. B., and Theimer, M. (2001) Herald: Achieving a Global Event Notification Service. Proceedings of the 8th Workshop on Hot Topics in OS. [31] Kiczales, G., Lamping, J., Mendhekar, A., et al. (1997) Aspect-Oriented Programming. Proceedings of the European Conference on Object-Oriented Programming. [32] Kiczales, G., Hilsdale, E., Hugunin, J., et al. (2001) An Overview of AspectJ. Proceedings of the European Conference on Object-Oriented Programming (ECOOP), Budapest, Hungary. [33] Bergmans, L. and Aksit, M. (2000) Aspects and Crosscutting in Layered Middleware Systems. Proceedings of the IFIP/ACM (Middleware2000) Workshop on Reflective Middleware, Palisades, New York. [34] Chiba., S. (1998) Javassist – A Reflection-based Programming Wizard for Java. Proceedings of the Workshop on Reflective Programming in C++ and Java at OOPSLA’98. [35] Booch, G. (2001) Through the Looking Glass, Software Development. [36] Tarr, P., Ossher, H., Harrison, W., et al. (1999) N Degrees of Separation: MultiDimensional Separation of Concerns. 
Proceedings of the International Conference on Software Engineering ICSE’99. [37] Ossher, H. and Tarr, P. (2001) Using Multidimensional Separation of Concerns to (re)shape Evolving Software. Communications of the ACM, 44(10), 43–50. [38] Moreira, R. S., Blair, G. S., and Garrapatoso, E. (2003) Constraining Architectural Reflection for Safely Managing Adaptation. Proceedings of the 2nd Workshop on Reflective and Adaptive Middleware, Middleware 2003, Rio de Janeiro, Brazil; Springer-Verlag, Heidelberg, Germany. [39] OMG (2001) Model Driven Architecture - A Technical Perspective. OMG Document: ormsc/01-07-01. [40] Ganek, A. and Corbi, T. (2003) The Dawning of the Autonomic Computing Era. IBM Systems Journal, 42(1).

U

N

C

O R

R

EC

TE

D

PR

O

O

FS
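The observe–reason–adapt loop described in the closing paragraphs can be made concrete with a minimal, self-contained sketch. This is an illustrative toy, not the API of any platform cited in the bibliography; the class and member names (`AdaptiveChannel`, `observeBandwidthKbps`, the two encoder strategies) are invented for the example. The point it shows is the separation the chapter describes: a base level that does the work (sending messages) and a meta level that inspects environmental conditions and swaps the base-level strategy at runtime, without involving the application.

```java
import java.util.function.Function;

/**
 * Toy sketch of an adaptive middleware component (names are hypothetical).
 * Base level: a channel that encodes and sends messages.
 * Meta level: a hook that reasons about observed conditions and
 * reconfigures the encoding strategy while the system runs.
 */
public class AdaptiveChannel {
    // Current base-level behaviour: how a message is encoded before transmission.
    private Function<String, String> encoder = AdaptiveChannel::plainEncoder;

    // Two interchangeable strategies (stand-ins for, e.g., raw vs. compressed marshalling).
    static String plainEncoder(String msg)   { return "PLAIN:" + msg; }
    static String compactEncoder(String msg) { return "GZIP:" + msg.replaceAll("\\s+", " "); }

    /** Meta-level hook: adapt the base level to the observed environment. */
    public void observeBandwidthKbps(int kbps) {
        // Under constrained bandwidth, switch to the compact strategy;
        // when conditions recover, fall back to the plain one.
        this.encoder = (kbps < 256) ? AdaptiveChannel::compactEncoder
                                    : AdaptiveChannel::plainEncoder;
    }

    /** Base-level operation, unchanged from the application's point of view. */
    public String send(String msg) { return encoder.apply(msg); }

    public static void main(String[] args) {
        AdaptiveChannel ch = new AdaptiveChannel();
        System.out.println(ch.send("hello   world"));  // PLAIN:hello   world
        ch.observeBandwidthKbps(64);                   // environment degrades
        System.out.println(ch.send("hello   world"));  // GZIP:hello world
    }
}
```

A real reflective middleware would expose the meta level through a meta-object protocol and feed it with monitored metrics rather than a single integer, but the structure — behavior reified as a replaceable strategy, adapted by a separate reasoning step — is the same.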
