Network Monitoring Applications
Based on IoT System
Andrej Kos, Urban Sedlar, Janez Sterle, Mojca Volk, Janez Bešter
Laboratory for Telecommunications
Faculty of Electrical Engineering, University of Ljubljana
Ljubljana, Slovenia
andrej.kos@fe.uni-lj.si

Abstract— We present applications for network monitoring based on an intelligent communication platform that can also be used to support various usage scenarios related to the future internet of things. The applications presented include real-time DSL access line monitoring and IPTV monitoring, correlated with lightning reports. The solution is used in the field of proactive monitoring, enabling the operator's helpdesk and field technical teams to pinpoint the cause of service degradations.
Keywords— network monitoring; IoT; DSL; IPTV; lightning; visualisation; correlation

I. INTRODUCTION

Today, telecommunications operators offering access (e.g. xDSL, FTTx) and services (e.g. IPTV, Internet access, VoIP) are confronted with the problems of SLA (Service Level Agreement) fulfillment and minimization of operational cost.
In xDSL (including FTTN and local loop shortening), the fulfillment of SLAs is an issue due to crosstalk, the interference between two pairs of a cable. Increasing the number of subscribers on the same cable, shortening the last mile, increasing speeds, mixing different broadband technologies, and interference with wireless devices cause service degradation already at the physical layer. Operators are failing to provide services that they have sold to users in the past. As the need for bandwidth grows, operators are forced to move access devices closer to the users.
In IPTV, the fulfillment of SLAs depends on all devices and systems that provide the service: set-top boxes, home installation, access device, access network, aggregation network, core network, service delivery platform and head end.
Therefore, operators have to deploy efficient monitoring and correlation tools for all network layers to keep the current network operation under control and to proactively detect changes and trends in their network. The data collected and correlated enables commercial sectors to better plan, define and offer new packages, technical departments to better plan and upgrade networks, and helpdesk teams to pinpoint the cause of possible service degradations, thus being able to respond faster and more accurately.
Marko Bajec
Laboratory for Data Technologies
Faculty of Computer and Information Science, University of Ljubljana
Ljubljana, Slovenia
marko.bajec@fri.uni-lj.si

Such a monitoring and alarm correlation tool is a combination of performance monitoring on different network layers and alarm monitoring, with the addition of an event correlation and pattern matching expert tool. As the data are collected from the network and from other external sources, such as the environment, and the amounts of data are huge, such systems are also known as IoT (Internet of Things) or big data systems. The advantages of such an approach are that (i) it enables 100 % penetration (monitoring the entire network) and (ii) monitoring the entire network line, including the CPEs, means real end-to-end operator network coverage.
As the DSL solution is well described in [1], we focus on IPTV in this article.
II. IOT PLATFORM

The IoT platform is an open communication platform for data integration and for the development of data-driven and event-driven services to be used in various communication and service environments. The platform is designed to be universal and can be scaled to accommodate different scenarios (i.e. different fields of application), further enhancements and modifications. The majority of platform components are generic and can be used in various use cases (where simple user scenarios apply), while domain-specific components (required for complex user scenarios) are kept to a minimum [2, 3].
One of the key functionalities of the platform is the possibility of merging data from different sources, use cases and scenarios (Fig. 1) [4, 5]. This paves the way towards enriching data with data from other domains. Consequently, new use cases and new data integrations become possible. The platform architecture enables simple and straightforward inclusion and integration of new domain scenarios. Based on that and through open interfaces, the development of new and innovative services is easy and straightforward.

Fig. 1. Merging data from different sources. It enables common storage, preprocessing and visualizations.
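The cross-domain merging described above can be illustrated with a minimal sketch: each domain's readings are normalized onto a common record format before entering shared storage. This is only an illustration; the field names and the `normalize` helper are assumptions, not the platform's actual schema.

```python
# Illustrative sketch of merging events from different domains into a
# common record format for shared storage and visualization. All field
# names here are hypothetical.

def normalize(domain, raw):
    """Map a domain-specific reading onto a common event record."""
    return {
        "domain": domain,            # e.g. "dsl", "iptv", "lightning"
        "ts": raw["time"],           # common time base
        "location": raw.get("loc"),  # enables cross-domain correlation
        "metric": raw["name"],
        "value": raw["value"],
    }

common_store = []
common_store.append(normalize("dsl",
    {"time": 100, "loc": "LJ", "name": "down_speed", "value": 18.5}))
common_store.append(normalize("lightning",
    {"time": 101, "loc": "LJ", "name": "strike_kA", "value": 32.0}))

# With a shared schema, cross-domain queries become trivial:
same_place = [e for e in common_store if e["location"] == "LJ"]
print(len(same_place))  # 2
```

Once events from every source share one schema, the storage, preprocessing and visualization layers of Fig. 1 can stay domain-agnostic.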

18th European Conference on Network and Optical Communications & 8th Conference on Optical Cabling and Infrastructure - NOC/OC&I 2013
ISBN: 978-1-4673-5822-4, July 10-12, 2013, Graz, Austria


The platform architecture (Fig. 2) comprises event-driven components (i.e. real-time analysis, business activity monitoring, and alarming), data-driven components (i.e. persistence, interoperability, expert analysis and business intelligence), sources and external systems. The inputs to the platform are data and event sources: sensors and devices forwarding measured data into the platform, applications based on service- and event-driven architectures, legacy systems, and trusted external data sources, i.e. databases.
Events are placed in the event channel. From there they are dispatched to the event processing kernel and the database.
The event processing kernel provides event filtering, identification of noticeable events, pattern matching, time series analysis, and event correlation. The outputs are the so-called generated or output events, representing important information for various recipients, i.e. alarms and indicators.
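The filtering and pattern-matching stages of such a kernel can be sketched in a few lines. This is a minimal illustration under assumed parameters (window length, threshold, event shape), not the platform's actual implementation:

```python
from collections import deque

# Hypothetical sketch of one event-processing stage: filter noticeable
# events, then match a simple pattern (N line errors within a time
# window) and emit a correlated output event (an alarm).

WINDOW_S = 60    # correlation window in seconds (assumed)
THRESHOLD = 3    # errors in the window that trigger an alarm (assumed)

def process(events):
    """events: iterable of (timestamp_s, source, kind) tuples."""
    recent = deque()   # sliding window of noticeable events
    output = []        # generated ("output") events
    for ts, source, kind in events:
        if kind != "line_error":      # filtering: drop unremarkable events
            continue
        recent.append((ts, source))
        # evict events that fell out of the time window
        while recent and ts - recent[0][0] > WINDOW_S:
            recent.popleft()
        if len(recent) >= THRESHOLD:  # pattern matched -> correlated alarm
            output.append(("alarm", source, ts))
            recent.clear()
    return output

events = [(0, "dslam1", "line_error"), (10, "dslam1", "heartbeat"),
          (20, "dslam1", "line_error"), (30, "dslam1", "line_error")]
print(process(events))  # [('alarm', 'dslam1', 30)]
```

Real kernels add time-series analysis and multi-source correlation on top of this basic filter-and-match loop.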
These events are placed into an outbound queue to make sure all concerned recipients are informed about particular events. The events are placed into two queues, for internal and external subscribers, and processed events can be used internally or externally. Inside the platform, monitoring is possible with a generic monitoring dashboard providing key performance indicators (KPIs). A basic KPI is an individual event that a user wants to monitor, while a complex KPI consists of several (correlated) events. Users can select KPIs and define the type and mode of visualization (graph type, refresh rate, labels, etc.). A special type of KPI is an alarm, which can be forwarded to an email or short message service (SMS), or can start an application.
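The KPI-to-alarm escalation described above could be modelled roughly as follows. The class, channel names and threshold are assumptions made for illustration only:

```python
# Illustrative KPI model: a KPI tracks one monitored value; an alarm is
# a KPI whose threshold breach triggers a notification action (e-mail,
# SMS, or starting an application). Names here are hypothetical.

class KPI:
    def __init__(self, name, threshold):
        self.name, self.threshold = name, threshold
        self.value = 0.0

    def update(self, value):
        self.value = value
        return self.value > self.threshold   # True -> alarm condition

def notify(channel, message):
    # placeholder for the e-mail/SMS/application hooks
    return f"[{channel}] {message}"

kpi = KPI("iptv_error_seconds_pct", threshold=5.0)
msg = None
if kpi.update(7.2):                # measured value exceeds the threshold
    msg = notify("sms", f"{kpi.name} at {kpi.value}%")
    print(msg)  # [sms] iptv_error_seconds_pct at 7.2%
```

A complex KPI would wrap several such objects and combine their states before deciding whether to raise the alarm.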
The processing kernel also provides duplicate elimination, data enrichment, complex pattern matching, prediction, entity resolution, expert analysis, and forecasting. External subscribers are notified about events via the external output queue. These external subscribers are, for example, web applications, mobile devices, etc.

Fig. 2. IoT platform architecture. It consists of internal building blocks and external sources and systems.

III. DSL

We designed and implemented a solution for real-time monitoring of the access line and the access network. It consists of multi-service access nodes (MSANs) and a management system. The MSAN (in our case Iskratel SI300) offers optical, DSL, fixed wireless and POTS/ISDN types of access.

We use the DSL system in combination with OpenMN as a sensor network. Each individual DSL line represents a sensor node that provides the required information and sends it to the IoT platform. More details about the solution can be found in [1]; therefore, in this section we present only the most representative case.

In Fig. 3, the values of the actual downstream and upstream speeds on a given port are compared to the required speed of the user profile. For each user, a speed is set in the user profile; the minimum and the maximum tolerance are also defined.

Fig. 3. DSL line monitoring. The actual speed on a given port is compared to the required speed of the user profile within tolerance.

IV. IPTV

The second subsystem that we monitor in real time is the IPTV system. IPTV systems are substantially more complex than classic broadcast systems. They consist of a chain of at least six elements to each user and a multitude of support systems. The architecture of the IPTV system is shown in Fig. 4.

Fig. 4. IPTV architecture. The complete system is substantially more complex than classic broadcast systems.

Because of the maturity and reliability of traditional broadcasting technologies, the anticipated user experience related to IPTV services is very high. Users are especially intolerant of errors in video content and of poor service accessibility. The causes of faults and quality degradation can appear anywhere in the chain providing the IPTV service: faulty content at the source, faults in the operator's backbone network, faults in the access network, faults in the user's home network, or errors in the video decoding process or in the functioning of the terminal equipment. In addition, there is a complex and technologically heterogeneous communication infrastructure between the provider of the IPTV service and the user. This means that the possibility of error increases proportionally with the number of network building blocks and technologies, and with the capacity of the network connection to the user.
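The per-port tolerance check of Fig. 3 can be sketched as a simple comparison of the measured speed against the profile speed within a tolerance band. The function name and the default tolerances are illustrative assumptions:

```python
# Sketch of the per-port check in Fig. 3: compare the measured
# downstream/upstream speed against the user's profile speed within
# the profile's tolerance band. Numbers are illustrative only.

def check_line(actual_mbps, profile_mbps, tol_min=0.9, tol_max=1.1):
    """Return 'ok', 'below', or 'above' relative to the tolerance band."""
    if actual_mbps < profile_mbps * tol_min:
        return "below"     # SLA at risk: speed under minimum tolerance
    if actual_mbps > profile_mbps * tol_max:
        return "above"
    return "ok"

print(check_line(18.2, profile_mbps=20.0))  # ok (within 18.0-22.0 Mbit/s)
print(check_line(15.0, profile_mbps=20.0))  # below
```

Running this check per sensor node (DSL line) is what turns the raw speed readings into events the platform can correlate and alarm on.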
For the aforementioned reasons, monitoring of the users’ quality of experience (QoE) is one of the key diagnostic mechanisms in managing an IPTV system and services, used for an overall control of the system vitality, monitoring the quality of guaranteed services and identification, localization and elimination of network and application faults.
Typical monitoring systems that allow for the supervision of vital indicators of the quality of an IPTV system and services (Key Performance Indicators – KPIs, Key Quality Indicators – KQIs) have many flaws and limitations.
Firstly, QoE is a subjective and individual measure of end-user satisfaction, which demands either time-consuming empirical measurements of the delivered IPTV services, conducted on a sample set of end users, or automated modeling of a QoE grade with objective methods of measuring qualitative network and application parameters.
Secondly, the automated objective systems for measuring the vitality of an IPTV system available today are typically based on obtaining relevant data from a limited number of systematically chosen nodes in the network. This enables the monitoring of technical quality-of-service parameters, but it does not ensure an adequately sampled service context for a realistic evaluation of the application QoE. The collection of comprehensive data about the functioning of a system and the state of the topologically distributed termination points is not a trivial issue and has not been satisfactorily solved.
Our solution consists of a system for monitoring the network and the IPTV services, with advanced techniques for contextualization of the quality and reliability of the IPTV services' performance at the application level, using the techniques and principles of IoT. We designed a distributed system for the advanced measurement of KPIs and KQIs of an IPTV network and services, as well as their in-depth analysis for the needs of assessing relevant contextualized events that affect the quality and reliability of IPTV services. The designed system encompasses the following architectural segments: (1) a software agent in an IPTV STB with the possibility of measuring application and network KPIs/KQIs; (2) an IoT platform for gathering, storing and analyzing data, as well as a visual display on the system control panel; (3) a centralized system for agent management (control of the software agent on the STB, setup of measuring functionality, measurements, logging and sending data to the platform).
Modeling and evaluation of QoE in IPTV systems are possible based on measuring low-level network parameters (QoS), but this approach does not offer accurate results due to the nondeterministic video flow and the time-conditioned relevance of intra-coded, predictive and two-way predictive frames. More accurate information about the quality of the IPTV user experience can be obtained directly from the video-flow decoding function of the IPTV terminal device. This calls for dissection of the decoding process, isolation of the relevant parameters in real time with the help of data-flow mining techniques, and forwarding of the collected statistical values of the video flow to the central data storage.
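The agent-side part of this pipeline, sampling decoder statistics and forwarding them, can be sketched as below. The counter names, the STB identifier and the JSON transport are assumptions for illustration; the paper does not specify the agent's actual wire format:

```python
import json

# Hypothetical sketch of the STB software agent: sample decoder
# statistics each measurement interval and serialize them for the
# central data storage of the IoT platform.

def sample_decoder():
    # in a real agent these counters come from the decoding process
    return {"video_errors": 2, "audio_errors": 0, "frame_drops": 1}

def make_report(stb_id, ts, stats):
    """Serialize one measurement interval for the IoT platform."""
    return json.dumps({"stb": stb_id, "ts": ts, **stats})

report = make_report("stb-0042", 1700000000, sample_decoder())
print(report)
```

Keeping the report flat and timestamped lets the platform treat each STB exactly like any other sensor node in the network.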
To diagnostically localize fault sources, the captured and enriched measurements need to be correlated with the physical and logical topological map of the IPTV network. The set of potentially interesting metrics and additional information contains measurements of network QoS, streaming and application STB KPIs (number and type of video and audio decoding errors, IGMP group, switching times, number and type of network errors), metadata about network elements (e.g. geographical location, network addresses, interface logic), the network tree topology diagram, etc.
We use techniques of statistical analysis on the aggregated data, which, with knowledge of the network topology, allow for fault localization over the entire provider-user chain. Such information is useful to the operator for monitoring of the network state in real time (interactive control panel with identification of the root cause of issues in the network) and for long-term statistical error analysis, based on geographical region, network hierarchical level or TV channel.
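One common way to localize a fault on a tree topology, shown here as an illustrative sketch rather than the authors' actual algorithm, is to attribute the fault to the deepest node whose entire subtree reports errors:

```python
# Sketch of topological fault localization: given the network tree and
# the set of STBs reporting errors, the likely fault source is the
# deepest node under which every STB is affected. Topology is made up.

parent = {                      # child -> parent in the network tree
    "stb1": "dslam_a", "stb2": "dslam_a",
    "stb3": "dslam_b",
    "dslam_a": "core", "dslam_b": "core",
}

def path_to_root(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def localize(error_stbs, all_stbs):
    # candidates: ancestors common to every erroring STB
    common = set(path_to_root(error_stbs[0]))
    for stb in error_stbs[1:]:
        common &= set(path_to_root(stb))
    # keep only ancestors under which no healthy STB hangs
    healthy = [s for s in all_stbs if s not in error_stbs]
    for node in sorted(common, key=lambda n: -len(path_to_root(n))):
        if all(node not in path_to_root(h) for h in healthy):
            return node   # deepest ancestor with a fully affected subtree
    return None

print(localize(["stb1", "stb2"], ["stb1", "stb2", "stb3"]))  # dslam_a
```

If all STBs in the network report errors, the same logic walks up to the core node, which matches the intuition of tracing faults over the entire provider-user chain.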
Enrichment of acquired data with data from the operator’s backend systems allows for further analysis according to type and manufacturer of network equipment, and integration with the helpdesk service system, where the time from the user report to the cause identification is drastically shortened.

Fig. 5. IPTV error graph. Percentages of errored seconds are shown.

Fig. 6. IPTV monitoring. Equally sized dots represent local area routers. The color of a dot represents the level of errors (in this case, all dots are green, which means the level is below the minimal threshold). The red dots show the location and the strength of lightning strikes.
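The correlation behind Fig. 6, linking lightning strikes to router errors, can be sketched as a spatial-temporal join: a strike is associated with a router if the router lies within the strike's location-confidence radius and reports errors shortly after the strike. The data, window length and coordinate units below are illustrative assumptions:

```python
from math import hypot

# Sketch of correlating lightning reports with router error events:
# match on distance (within the strike's confidence radius) and on
# time (errors appearing shortly after the strike). Data is made up.

TIME_WINDOW_S = 300   # assumed correlation window

def correlate(strikes, routers):
    """strikes: (x, y, radius, ts); routers: (name, x, y, error_ts)."""
    matches = []
    for sx, sy, radius, sts in strikes:
        for name, rx, ry, ets in routers:
            close = hypot(sx - rx, sy - ry) <= radius
            recent = 0 <= ets - sts <= TIME_WINDOW_S  # errors after strike
            if close and recent:
                matches.append((name, sts))
    return matches

strikes = [(10.0, 20.0, 5.0, 1000)]
routers = [("lj-r1", 12.0, 21.0, 1100), ("mb-r7", 80.0, 5.0, 1100)]
print(correlate(strikes, routers))  # [('lj-r1', 1000)]
```

Using the confidence radius supplied by the lightning API keeps the correlation honest: a distant router with coincidental errors is not attributed to the strike.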


V. LIGHTNING

We developed a solution to correlate and visualize lightning information with DSL and IPTV data and parameters. The IoT platform fetches the data from a trusted external server that collects and, over an API, exposes information about lightning strikes. The API provides information about the location, the location confidence range and the time of each strike [7, 8]. Combining this information allows us to visualize the situation and correlate the events.

Fig. 7. Example of lightning alerts. A similar table is used for any kind of alarm.

VI. RESULTS

Based on the IoT approach and real-time information from different sources, including the last end-user device in the chain, we get an overview of the performance of the whole network (CPEs, home, access, aggregation, core, service, inter-domain). Numerous graphs can be shown and alarms can be triggered based on different combinations of KPIs, from statistical data to root cause discovery [6].

As an example, aggregated results from DSL, IPTV and lightning are shown in Fig. 8. Currently, in Telekom Slovenije's network, we have approx. 100,000 agents (sensors) enabled, with 7,000 agents sending data. Red dots show the locations of lightning strikes, with the size of the dot representing the strength. Green dots represent router locations. Depending on the number of errors, the green dots change color in real time, so we also have a visual representation of the IPTV error rate. Graphs on the right show DSL performance.

Fig. 8. Correlation of DSL, IPTV and lightning. Based on different sources, different graphs and tables are shown. KPIs that are a correlation of other KPIs can be monitored. Actions can be started based on the KPIs.

VII. CONCLUSION

We presented network and service monitoring based on a generic, lightweight, open-source based real-time IoT platform. It is a combination of performance monitoring on different network layers and alarm monitoring, with the addition of event correlation and pattern matching.

With the solution, operators get efficient monitoring and correlation tools for all network layers to keep the current network operation under control and to proactively detect changes and trends in their network. The data collected and correlated enables commercial sectors to better plan, define and offer new packages, technical departments to better plan and upgrade networks, and helpdesk teams to pinpoint the cause of possible service degradations, thus being able to respond faster and more accurately.

IoT-based applications are only starting to rise in all fields of life and business: from optimizing production processes, network monitoring and event correlation to health and well-being. The future brings a more user-friendly language to define KPIs, more cross-domain data fusion, event correlations, visualizations and stream mining.

In the long run, we anticipate the future internet of things to develop into the so-called collective adaptive systems that will consist of many units/nodes with individual properties, objectives and actions. Decision-making will become more distributed and the interaction between units will lead to an increase in unexpected phenomena. The operating principles will go beyond existing monitoring, control and correlation tasks, taking into account the diversity of objectives within the system and the need to reason in the presence of partial, noisy, out-of-date and inaccurate information. Many additional scenarios and applications will emerge. The data collected has already become big data.

ACKNOWLEDGMENT

The authors would like to thank the company Telekom Slovenije for cooperation on the research and development project "Automated system for triple-play QoE measurements", the company Iskratel and the Slovenian Research Agency for co-financing the project "Quality of service and quality of experience measurement and control system in multimedia communications environments", the Ministry of Education, Science, Culture and Sport, and the Open Communication Platform Competence Center.

REFERENCES

[1] Kos, A., Pristov, D., Sedlar, U., Sterle, J., Volk, M., Vidonja, T., Bajec, M., Bokal, D., and Bešter, J.: Open and Scalable IoT Platform and Its Applications for Real Time Access Line Monitoring and Alarm Correlation. In: S. Andreev et al. (Eds.): NEW2AN/ruSMART 2012, Lecture Notes in Computer Science 7469, pp. 27–38. Springer, Heidelberg (2012)
[2] Kos, A., Sedlar, U., Peternel, K., Volk, M., Sterle, J., Zebec, L., Vidonja, T., and Bešter, J.: Odprta komunikacijska platforma IoT (Open communication platform IoT). VITEL – 25. delavnica o telekomunikacijah, "Internet stvari", 12.–13. 5. 2011, pp. 1–5 (in Slovene)
[3] Inteligoo Platform: www.inteligoo.com
[4] Šubelj, L., Jelenc, D., Zupančič, E., Lavbič, D., Trček, D., Krisper, M., and Bajec, M.: Merging Data Sources Based on Semantics, Contexts and Trust. IPSI BGD Trans. Internet Res., Vol. 7, No. 1, pp. 18–30 (2011)
[5] Lavbič, D.: Rapid Ontology Development Model Based on Rule Management Approach in Business Applications. Informatica (Ljublj.), Mar. 2012, Vol. 36, No. 1, pp. 115–116
[6] Sedlar, U., Volk, M., Sterle, J., Sernec, R., and Kos, A.: Contextualized Monitoring and Root Cause Discovery in IPTV Systems Using Data Visualization. IEEE Netw. Mag., Special Issue – Computer Network Visualization, pp. 1–9 (2012)
[7] SCALAR System Network: http://www.scalar.si/en
[8] Kosmač, J., Djurica, V., and Babuder, M.: Automatic Fault Localization Based on Lightning Information. In: Power Engineering Society General Meeting, IEEE: 8-22

Similar Documents

Premium Essay

Everest Executive Summary

...This report provides an in-depth analysis of the two Everest Simulations conducted by Group 10 of MGMT1001 Thursday Tutorial. This task required students to form teams consisting of five to six members whose goals were to summit Mount Everest. While it provided us with a rich experience in team dynamics and collaboration, it also enabled us to explore key managerial concepts taught in the course, consisting of: • Communication • Groups and Teams • Leadership In this report, we examine the effectiveness of Face to Face Communication (FTFC) versus Computer Mediated Communication (CMC), and the problems encountered through the utilisation of the virtual medium including efficiency of the feedback system, loss of personal focus and other emergent issues. It includes personal reflections on attitudes and perceptions, as well as group performance and strategies adopted in the second Simulation in order to create a more positive team experience. Theories which relate to interpersonal communication have also been integrated in the report to illustrate its relation to certain situations encountered during the Simulation. Additionally, we provide a multifaceted analysis on the notion of team cohesiveness and how it attributes to better performance outcomes. An overview on the different intragroup conflicts encountered in the Simulation has been included, examining the positive and negative impact that conflict had on team experience and performance, and how mutual agreements were reached...

Words: 287 - Pages: 2

Free Essay

Pom 333

...is a computer necessary in conducting a realworld simulation? Answer It is important because there are many different types of outcomes that comes in with simulation. Computers are used in daily life activities and it is necessary. 14-11 What is operational gaming? What is systems simulation? Give examples of how each may be applied. Answer Operational gaming is the use of simulation in competitive situations such as military games and business or management. System simulation ls that deal with the dynamics of large organizational or governmental systems. Validation The process of comparing a model to the real system that it represents to make sure that it is accurate. 14-17 (a) Resimulate the number of stockouts incurred over a 20-week period (assuming Higgins maintains a constant supply of 8 heaters). (b) Conduct this 20-week simulation two more times and compare your answers with those in part (a). Did they change significantly? Why or why not? (c) What is the new expected number of sales per week? Answer A. The number of stockouts incurred over a 20 week period is HOT WATER NUMBER OF HEATER SALES WEEKS THIS PER WEEK NUMBER WAS SOLD 3 2 4 9 5 10 6 15 7 25 8 12 9 12 10 10 B, Two more times would give us the value of a multiplied by 2. c. 25 14-18 A. 15 days of barge uploadings and average number of barges delayed B, They both are probabilistic simulations. Chapter 5 HW 5-14 Using MAD, determine whether...

Words: 648 - Pages: 3

Premium Essay

Student

...A SYSTEM SIMULATION STUDY ON THE THREE FAST MOVING PRODUCTS (MARLBORO, C2, VIVA) OF THE COLLEGE VARIETY SHOPPE USING THE MONTE CARLO SIMULATION IN INVENTORY MANAGEMENT EXECUTIVE SUMMARY This study shows how the selected three fast moving products (Marlboro cigarettes, C2, Viva mineral water) move from the current Inventory Management technique of the College Variety Shoppe from its distributors to its warehouse storage to the end user or customer. An excel program and a simulation model was made to observed its current performance. After the observation, the group performs an experimentation that will improve the current technique of the College Variety Shoppe. After simulating the experimentation, the group then give conclusions and recommendations on how to improve the College Variety Shoppe’s current Inventory Management. TABLE OF CONTENTS Title Page ........................................................ 1 Executive Summary ........................................................ 2 Table of Contents ........................................................ 3 Introduction ........................................................ 4 Methodology ........................................................ 6 Model Development ........................................................ 7 Model Validation ........................................................ 11 Experimentation, Results .......

Words: 2096 - Pages: 9

Premium Essay

Littlefield Simulation Report

...Initial operations strategy Prior to the commencement of the simulation, we examined the 50 days of historical data to glean as much information as we could about the operations. We performed some analysis in Excel and created a dashboard to illustrate various data. Specifically, we regressed the prior 50 days of jobs accepted to forecast demand over the next 2 - 3 months within a 95% confidence interval. The yellow and grey lines represent the maximum and minimum variability, respectively, based on two standard deviations (95%). Exhibit 1: Forecasted and actual demand by Day 50 and Day 270 Our two primary goals at the beginning of the simulation were as follows: 1) Eliminate bottlenecks and increase capacity in order to meet forecasted demand 2) Decrease lead time to 0.25 days in order to satisfy Contract 2 and maximize revenue In order to achieve these goals, we would need to know the capacity and throughput time of the entire system. We used the time required by each machine to process a lot to calculate capacity per station and then capacity of the entire production line (380 kits/day or 6 orders/day). In Exhibit 2 we can observe that the capacity of the production line is given by the station that produces the least number of units per day. Exhibit 2: Capacity of the production line The Decisions We decided to work with the maximum variability of demand because there was a penalty for late jobs and because there was no revenue for orders that took more than “x”...

Words: 765 - Pages: 4

Premium Essay

Monte Carlo Analysis

...Monte Carlo Statistical Analysis Name Course Instructor Date The Monte Carlo method is a mathematical method used for problem solving through the generation of random numbers and then observing a fraction of these numbers and the properties they obey. It is useful in obtaining numerical solutions to problems that are too complicated for analytical solutions. It is a form of probability used to understand the impact of risk and uncertainty in various areas such as financial and cost forecasting. It involves computation of the likelihood of given events occurring or not occurring, without taking into account the interaction of the elements involved in influencing the occurrence. The mathematical method was invented by Stanislaw Ulam in 1946 and named by Nicholas Metropolis after a classy gambling resort in Monaco, where a relative of Ulam frequently gambled [ (Fishman, 1996) ]. Concepts of the Monte Carlo method Uncertainty Being a forecasting model, there are assumptions that need to be made due to the uncertainty of various factors. One therefore needs to be able to make estimations of the expected results as they cannot predict with certainty what the end value will be. Important factors such as historical data and past experiences in the field can be helpful in making an accurate estimate. Estimation In some cases, estimation may be possible but in others it is not. In situations where estimation is possible, it is wise to use a wide range of possible values instead...

Words: 2486 - Pages: 10

Premium Essay

Ns2 Unit 2 Assignment

...NS2 soft solution: Ns2 soft solution is a software development based company which contain innovative and expertise to facilitate complex projects in an efficient way. We offer various broad solution projects for researchers and students to increase demands among other centers and customer centric solution with high standard. We offer various projects under NS2 simulation based on IEEE papers and non IEEE papers. We deploy various NS2 projects as a virtual one in real time application. Ns2 soft solution is a highly experienced team member of developer professionals providing a wide range of complex projects and network protocol simulation. Our motto: • Advance technology enhancement. • Make everything possible. • Provide service quality for every commitment. Basic aims of Ns2 soft solution are: • Providing guidance for students to select the efficient project based on student interest which ensures a success in their projects. • We train and make students to learn all the concepts from basic to advance such that students can get a clear idea about the project what they do. • Based on...

Words: 607 - Pages: 3

Premium Essay

Mikes Bikes Simulation Summary

...The Mikes Bikes simulation is an exciting and interesting way to gain critical insights into the development of a business. By operating a simulated bicycle manufacturing corporation over a period of ten years was an opportunity to gain insights on a real entrepreneurial experience. It allowed us to expand on the ideas taught in class such as creating a business strategy and using tools like SWOT and Porter’s five forces. We had many assumptions initially regarding the procedures but gradually we could learn the basics by facing enough challenges and by trial and error method. These skills cannot be learned by the usual form of lecturing. Considering our team, this was our first comprehensive exposure to real business environment. Each...

Words: 566 - Pages: 3

Free Essay

Advances in Metal Forming Research at the Center for Precision Forming

...materials such as Ultra/Advanced High Strength Steels (U/AHSS), aluminum alloys, magnesium alloys and boron steels in automotive industry is increasing to reduce vehicle weight and increase crash performance. The use of these relatively new materials requires advanced and reliable techniques to a) obtain data on material properties and flow stress, b) predicting springback and fracture in bending and flanging, c) selecting lubricants and die materials/coatings for stamping and forging and d) designing tools for blanking and shearing. In addition, designing the process and tooling for a) hot stamping of boron steels, b) warm forming of Al and Mg alloys, and c) optimizing the use of servo-drive presses require advanced Finite Element based simulation methods. CPF is conducting R&D in most of these topics and also in many hot and cold forging related topics. This paper gives an overview of this research and discusses how the research results are applied in cooperation with industry. Keywords: Metal Forming, Sheet metal, Forging, FEM 1 INTRODUCTION The Center for Precision Forming (CPF) has been established with funding from the National Science Foundation (NSF) and a number of companies (www.cpforming.org). CPF is an outgrowth of the Engineering Research Center for Net Shape Manufacturing (ERC/NSM – www.ercnsm.org) and conducts research in sheet metal forming while ERC/NSM focuses on cold and hot forging related R&D projects. Both Centers work closely with industry under contract....

Words: 3894 - Pages: 16

Free Essay

Comp 125

...Introduction This chapter describes our work in evolution of buildable designs using miniature plastic bricks as modular components. Lego 1 bricks are well known for their flexibility when it comes to creating low cost, handy designs of vehicles and structures. Their simple modular concept make toy bricks a good ground for doing evolution of computer simulated structures which can be built and deployed. Instead of incorporating an expert system of engineering knowledge into the program, which would result in more familiar structures, we combined an evolutionary algorithm with a model of the physical reality and a purely utilitarian fitness function, providing measures of feasibility and functionality. Our algorithms integrate a model of the physical properties of Lego structures with an evolutionary process that freely combines bricks of different shape and size into structures that are evaluated by how well they perform a desired function. The evolutionary process runs in an environment that has not been unnecessarily constrained by our own preconceptions on how to solve the problem. The results are encouraging. The evolved structures have a surprisingly alien look: they are not based in common knowledge on how to build with brick toys; instead, the computer found ways of its own through the evolutionary search process. We were able to assemble the final designs manually and confirm that they accomplish the objectives introduced with our fitness functions. This chapter...

Words: 438 - Pages: 2
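The combination the excerpt describes, an evolutionary loop driven by a purely utilitarian fitness function over a crude physical model, can be sketched in a few lines. The encoding, brick sizes, and stability heuristic below are all illustrative assumptions, not the authors' actual model: individuals are rows of bricks, and fitness rewards cantilever reach while penalizing structures whose center of mass extends past the single supported brick.

```python
import random

BRICK_SIZES = [1, 2, 4, 6]  # stud lengths of the available (hypothetical) bricks

def fitness(bricks):
    """Reward horizontal reach; penalize a centre of mass that extends
    past the first (supported) brick -- a crude physics proxy, not the
    authors' simulator."""
    reach = sum(bricks)
    com = reach / 2.0          # centre of mass of a uniform strip
    support = bricks[0]        # only the first brick rests on the table
    penalty = max(0.0, com - support) * 10
    return reach - penalty

def mutate(bricks):
    """Replace one randomly chosen brick with a random size."""
    child = bricks[:]
    child[random.randrange(len(child))] = random.choice(BRICK_SIZES)
    return child

def evolve(pop_size=30, generations=200, length=8):
    """Simple (mu + lambda)-style loop: keep the fitter half, refill
    the population with mutated copies of survivors."""
    pop = [[random.choice(BRICK_SIZES) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

As in the excerpt, nothing in the loop encodes building conventions; whatever the search finds that scores well is kept, which is exactly why evolved designs can look "alien".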

Free Essay

Ims of Tcs in Pakistan

...developed, operated and maintained the Concept Simulator since 1998. The Concept Simulator models the functional operation and inter-operation of key components of train control system architecture and the external systems to which the system interfaces. The simulator is valuable in the development and analysis of operational principles, and in assessing design trade-offs. The SEA-designed facility simulates operation at ERTMS Levels 1, 2, and 3. Components include a Communications Network, a Network Management Centre, a generic Interlocking, a Radio Block Centre, a Track Simulator including both conventional and TCS equipment, a Driver Desk, a European Vital Computer and a Driver MMI. The components are modelled using software-based simulations hosted on networked PCs. The simulator has been valuable in the engineering evaluation and validation of emergent system architectures, and enables system constraints to be explored and defined. ERTMS operational modes and the transitions between them are simulated and ERTMS principles and procedures are followed. Innovative Customer Information System (ICIS) SEA's Innovative Customer Information System (ICIS) is capable of managing and displaying customer information, including real time information, in a visually dynamic manner. The system utilises intelligent screens and wireless technology to distribute data, thus minimising installation and commissioning efforts. The system will be suitable for any transport environment, mobile...

Words: 397 - Pages: 2

Free Essay

Simulation Modeling

...Brooklyn Warren Chapter 10 Simulation Modeling What is Simulation? * To try to duplicate the features, appearance, and characteristics of a real system. * Imitate a real-world situation mathematically. * Study its properties and operating characteristics. * Draw conclusions and make action decisions based on the results. Processes of Simulation: 1. Define Problem 2. Introduce Important Variables 3. Construct Simulation Model 4. Specify Values of Variables to Be Tested 5. Conduct the Simulation 6. Examine the Results 7. Select Best Course of Action Advantages of Simulation: * Straightforward and flexible. * Can handle large and complex systems. * Allows "what-if" questions. * Does not interfere with real-world systems. * Study interactions among variables. * "Time compression" is possible. * Handles complications that other methods can't. Disadvantages of Simulation: * Can be expensive and time consuming. * Does not generate optimal solutions. * Managers must generate all conditions and constraints. * Each model is unique. Monte Carlo Simulation: Can be used with variables that are probabilistic. Steps: * Establish the probability distribution for each random variable. * Use random numbers to generate random values. * Repeat for some number of replications. Probability Distributions: Historical data Goodness-of-fit tests for common distributions: ...

Words: 259 - Pages: 2
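The three Monte Carlo steps listed in that excerpt (establish a probability distribution, map uniform random numbers onto it, replicate) can be sketched directly. The demand distribution below is an illustrative assumption, since the excerpt gives no numbers:

```python
import random

# Hypothetical discrete distribution: (value, probability) pairs.
demand_dist = [(0, 0.10), (1, 0.25), (2, 0.30), (3, 0.20), (4, 0.15)]

def sample(dist):
    """Step 2: map a uniform random number r in [0, 1) onto the
    cumulative probabilities of the distribution."""
    r, cum = random.random(), 0.0
    for value, p in dist:
        cum += p
        if r < cum:
            return value
    return dist[-1][0]  # guard against floating-point round-off

def monte_carlo(dist, replications=10_000):
    """Step 3: repeat many times and average the outcomes."""
    total = sum(sample(dist) for _ in range(replications))
    return total / replications

random.seed(1)
print(monte_carlo(demand_dist))  # an estimate near the true mean, 2.05
```

With 10,000 replications the estimate clusters tightly around the distribution's true mean (here 0(.10) + 1(.25) + 2(.30) + 3(.20) + 4(.15) = 2.05), which is the whole point of the replication step.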

Premium Essay

Nursing Simulation Paper

...Simulation technology can have countless benefits for eager students. As Aebersold and Tschannen (2013) define it, "Simulation is a technique, not a technology, to replace or amplify real experiences with guided experiences, often immersive in nature, that evoke or replicate substantial aspects of the real world in a fully interactive fashion." It allows students to be a part of a bigger picture than just the words in their textbooks. The simulation experience can also be recorded, so students can go back over a prior session and review it. They can see what they did well and what could use slight alterations to improve their skills. "A large body of research shows that simulation is incredibly effective as a teaching methodology and can contribute both to better patient outcomes and a culture of safety among nursing staff," (American...

Words: 732 - Pages: 3

Premium Essay

Mat 540

...678-679 of the text. Using simulation, estimate the loss of revenue due to copier breakdown for one year, as follows: 1. In Excel, use a suitable method for generating the number of days needed to repair the copier, when it is out of service, according to the discrete distribution shown. 2. In Excel, use a suitable method for simulating the interval between successive breakdowns, according to the continuous distribution shown. 3. In Excel, use a suitable method for simulating the lost revenue for each day the copier is out of service. 4. Put all of this together to simulate the lost revenue due to copier breakdowns over 1 year to answer the question asked in the case study. 5. In a word processing program, write a brief description/explanation of how you implemented each component of the model. Write 1-2 paragraphs for each component of the model (days-to-repair; interval between breakdowns; lost revenue; putting it together). 6. Answer the question posed in the case study. How confident are you that this answer is a good one? What are the limits of the study? Write at least one paragraph. There are two deliverables for this Case Problem, the Excel spreadsheet and the written description/explanation. Please submit both of them electronically via the dropbox. The assignment will be graded using the associated rubric. Outcome assessed: Create statistical analysis of simulation results. ...

Words: 673 - Pages: 3
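The four model components the assignment asks for (days-to-repair, interval between breakdowns, lost revenue, and the 52-week assembly) can be sketched outside Excel as well. The specific distributions below are illustrative assumptions, since the excerpt does not reproduce the case's tables: discrete repair days sampled from cumulative probabilities, a continuous breakdown interval with F(x) = x²/36 on [0, 6] weeks sampled by inverse transform, and daily lost revenue uniform on $2,000-$8,000.

```python
import random

# Component 1: days to repair -- assumed discrete distribution.
REPAIR_DIST = [(1, 0.20), (2, 0.45), (3, 0.25), (4, 0.10)]

def repair_days():
    """Sample the discrete distribution via cumulative probabilities
    (the Excel equivalent is RAND() plus a lookup table)."""
    r, cum = random.random(), 0.0
    for days, p in REPAIR_DIST:
        cum += p
        if r < cum:
            return days
    return REPAIR_DIST[-1][0]

def weeks_between_breakdowns():
    """Component 2: assumed CDF F(x) = x^2/36 on [0, 6] weeks,
    inverted so x = 6 * sqrt(r) (inverse-transform sampling)."""
    return 6 * random.random() ** 0.5

def lost_revenue_per_day():
    """Component 3: daily revenue assumed uniform on $2,000-$8,000."""
    return random.uniform(2000, 8000)

def simulate_year():
    """Component 4: accumulate losses from each breakdown until
    52 weeks have elapsed."""
    t, loss = 0.0, 0.0
    while True:
        t += weeks_between_breakdowns()
        if t > 52:
            return loss
        loss += sum(lost_revenue_per_day() for _ in range(repair_days()))

random.seed(42)
losses = [simulate_year() for _ in range(1000)]
print(sum(losses) / len(losses))  # average annual loss under these assumptions
```

Replicating `simulate_year` many times and averaging addresses question 6: the spread of the replications, not a single run, is what justifies (or limits) confidence in the answer.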

Premium Essay

Jet Copy

...Jet Copy Simulation Prepared by Joe Miller Prepared for February 7th 2013 TABLE OF CONTENTS INTRODUCTION SUMMARY JET COPY SIMULATION CONCLUSION AND RECOMMENDATIONS APPENDIX INTRODUCTION The purpose of this report is to examine the feasibility of Jet Copy purchasing a second copy machine, based on the copies produced and lost while using only one copier. In this report I use data from a simulation to assist in deciding on an additional copier for Jet Copy. The simulation that was run was based on a 52-week scenario, but the actual results come from a 51-week trial; because the simulation is random, the difference between the 51-week trial and the full 52 weeks will not have any great impact on the recommendation. ------------------------------------------------- SUMMARY This report creates a simulation that shows one possible outcome for Jet Copy when its one copier is out for repair for up to 4 days. It will also aid in the decision whether or not to add a second copier. This report will answer the following questions: 1. Using Excel, generate the number of days needed to repair the copier. 2. Using Excel, generate the interval between successive breakdowns according to a continuous distribution. 3. Using Excel, use a suitable method to...

Words: 968 - Pages: 4

Free Essay

How to Write

...Class: Seminar in Organizational Theory & Behavior Professor: Dr. Dyck Name: Zhihui Dai Change Management Simulation: Power and Influence Abstract In the program Change Management Simulation: Power and Influence, I play the Director of Product Innovation at Spectrum Sunglass Company, a private company that designs, manufactures, and sells sunglasses. What I need to do is convince people to adopt my proposal. I choose from 18 change levers that attempt to move workers' attitude toward my proposal from awareness to interest, trial, and adoption. I must decide on the proper methods and use the time wisely, so as to increase credibility and achieve the greatest percentage of adopters. Key words: convincing, percentage of adoption, credibility, limited time This Management Simulation Project from Harvard provides students a virtual experience of convincing people in start-up companies. When I started the first test run of the simulation there were various decisions to make. If an administrator makes the wrong move, I lose credibility and the proposal is set backwards. On the other hand, if I make a correct decision, then not only do the workers become more interested in my proposal, but my credibility with them also increases. When implementing change, building a coalition of support is an important tool for gaining people's buy-in. When I simulate this behavior, I can have more...

Words: 942 - Pages: 4