New Techniques in Design and Analysis of Exponential Algorithm

Arshi Aftab, Shikha Mahajan and Shruti Jajoo

Abstract: In this paper we first see how NP problems can be categorized and look at some examples. We then analyze the subset sum problem and prove its NP-completeness. Finally, we examine various algorithms used to solve the problem and the efficiency of each.
Keywords: NP-complete, NP-hard, complexity, subset sum, exponential

I. Introduction
In the study of the computational complexity of problems, we determine the resources required during computation, such as time and memory. We ask whether the problem can be solved in polynomial time by some algorithm. A problem is said to be solvable in polynomial time if the worst-case efficiency of some algorithm for it belongs to O(p(n)), where p(n) is a polynomial in the problem's input size n. Problems that can be solved in polynomial time are called tractable, and those that cannot are called intractable.
The formal definition of NP problem is as follows: A problem is said to be Non-deterministically Polynomial (NP) if we can find a nondeterministic Turing machine that can solve the problem in a polynomial number of nondeterministic moves.
Equivalent definition of NP problem: A problem is said to be NP if
1. its solution comes from a finite set of possibilities, and
2. it takes polynomial time to verify the correctness of a candidate solution.
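The second point can be illustrated with subset sum, the problem studied in this paper: checking a proposed subset takes linear time, even though finding one may not. A minimal sketch; the function name and the index-list certificate format are our own:

```python
def verify_certificate(s, target, candidate):
    """Check a proposed subset (the certificate) in polynomial time.

    candidate is a list of indices into s; verification is a single
    pass, so it costs O(n) regardless of how hard it was to find the
    subset in the first place.
    """
    if len(set(candidate)) != len(candidate):
        return False                      # indices must be distinct
    if any(i < 0 or i >= len(s) for i in candidate):
        return False                      # indices must be in range
    return sum(s[i] for i in candidate) == target


# Elements at indices 2 and 4 are 4 and 5, which sum to 9.
print(verify_certificate([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True
```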
NP-complete problems
NP-complete problems are a subset of the class NP. A problem p is NP-complete if
1. p ∈ NP (it can be solved in polynomial time by a non-deterministic Turing machine), and
2. every other problem in class NP can be reduced to problem p in polynomial time.
NP-hard problems
NP-hard problems are related to, but at least as difficult as, NP-complete problems. They do not themselves have to belong to class NP (or if they do, nobody has proven it yet), but all problems in class NP can be reduced to them. Very often, NP-hard problems require exponential time or even worse.
Figure 1.1

Some of the well-known problems which fall into the category of NP problems are:
1. Hamiltonian circuit: to determine whether a graph has a Hamiltonian circuit (a circuit that visits every vertex of the graph).
2. Integer linear programming: to find the optimal (maximum or minimum) value of a linear function subject to a finite set of constraints in the form of linear equalities or inequalities.
3. Travelling salesman problem: to find the shortest Hamiltonian circuit in a complete graph with positive integer weights.
4. Partition problem: to determine whether it is possible to partition a set of n positive integers into two disjoint subsets with the same sum.
5. Knapsack problem: to find the most valuable subset of n items of given positive weights and values that fits into a knapsack of given positive integer capacity.
6. Graph coloring: to find the chromatic number of a given graph.
In this paper we aim to show that the subset sum problem is NP-complete.
The subset sum problem is a noteworthy problem in complexity theory and cryptography. It can be defined as follows: given a set of non-negative integers and a target value sum, determine whether some subset of the given set sums to the given value.
Example 1.1
Set = {3, 34, 4, 12, 5, 2}, sum = 9
Output: True // there is a subset {4, 5} with sum 9.
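The decision question of Example 1.1 can be answered by a naive take-or-skip recursion; this sketch is illustrative and not taken from the paper:

```python
def subset_sum(nums, target):
    """Naive exponential recursion: each element is either taken or
    skipped, which is the source of the O(2^n) worst case discussed
    later. Assumes non-negative integers, as in the definition."""
    if target == 0:
        return True                        # empty subset works
    if not nums:
        return False                       # nothing left to take
    head, rest = nums[0], nums[1:]
    # Branch 1: skip head. Branch 2: take head (only if it fits).
    if subset_sum(rest, target):
        return True
    return head <= target and subset_sum(rest, target - head)


print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```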
The complexity of the problem can be analyzed in terms of two parameters: N, the number of decision variables, and P, the precision of the problem. The best known algorithms have exponential complexity; in other words, the problem is hard when N and P are of the same order. The problem becomes tractable when either N or P is small: if N is small, exhaustive search can find the solution quickly, and if P is small, dynamic programming algorithms can give the exact solution. Algorithms that provide efficient time complexity are introduced and elaborated in the next section.
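The dynamic programming idea mentioned above can be sketched as follows. This is a standard pseudo-polynomial formulation, not taken from the paper, in which P is taken to be the target sum:

```python
def subset_sum_dp(nums, target):
    """Exact pseudo-polynomial dynamic programming: O(N * P) time,
    where N = len(nums) and P = target. reachable[v] records whether
    some subset of the numbers seen so far sums to v."""
    reachable = [False] * (target + 1)
    reachable[0] = True                      # the empty subset sums to 0
    for x in nums:
        # Walk downwards so each number is used at most once.
        for v in range(target, x - 1, -1):
            if reachable[v - x]:
                reachable[v] = True
    return reachable[target]


print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))  # True
```

This is fast exactly when P is small: the table has P + 1 entries, so a huge target makes the table itself exponential in the input's bit length.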
II. Methodology
2.1 NP Completeness of Subset Sum Problem
The minimal requirement for an algorithm to be considered efficient is that its running time be polynomial[2]: O(n^c) for some constant c, where n is the size of the input. It was observed long ago that many practical problems can be solved this quickly, but it has proven hard to determine which problems can be and which cannot. There is a considerable number of so-called NP-hard problems which most people believe cannot be solved in polynomial time, yet nobody has been able to prove a super-polynomial lower bound for any of them.
Generally, given a set of numbers S = {s1, s2, s3, . . . , sn} and a target number t, the objective is to find a subset S' of S whose elements sum to t. Although the problem appears deceptively simple, solving it is exceedingly hard without additional structure. We establish later that it is an NP-complete problem, so an efficient algorithm may well not exist at all.
2.1.1 Problem Definition
The decision version of the problem is: given a set S and a target t, does there exist a subset S' ⊆ S such that the elements of S' sum exactly to t?
2.1.2 Exponential Time Algorithm Approaches:
One notable feature of this problem is that it becomes polynomial when the size of S' is fixed. For instance, a typical question may be: given a collection of numbers, find two elements that add up to t. This problem is entirely polynomial, and we can write a straightforward algorithm with running time O(n^2) using two nested for loops.
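The two-element case described above, with its nested for loops, can be sketched as:

```python
def two_sum_exists(nums, t):
    """Brute-force check for a pair summing to t: the two nested
    loops give the O(n^2) running time described in the text."""
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):        # j > i avoids reusing an element
            if nums[i] + nums[j] == t:
                return True
    return False


print(two_sum_exists([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5
```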
A slightly more involved problem asks for, say, 3 elements that sum to t. Again, a naive approach gives a solution of complexity O(n^3). The catch in the general subset sum problem is that we do not know |S'| in advance. In the worst case every subset must be considered, so the running time of the brute-force approach is approximately O(2^n).
A slightly more systematic algorithm enumerates all candidate subsets S'. Every number from 0 to 2^n − 1 can be written in binary representation, and each such number generates the subset whose elements sit at the bit positions that are 1. For instance, if n is 4 and the given number is 10 in decimal, its binary representation is 1010, and we examine the subset containing the 1st and 3rd elements of S (reading the bits from the left). The advantage of this approach is that it uses a constant amount of extra space: at every iteration we read a single number, and the search can stop as soon as a matching subset is found.
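The bitmask enumeration described above can be sketched as follows (a generator-based illustration with our own names; here bit i, counted from the least significant end, selects s[i]):

```python
def subsets_by_bitmask(s):
    """Enumerate every subset of s by counting from 0 to 2^n - 1 and
    reading each counter's binary representation as a membership mask:
    bit i set means s[i] is in the subset. Only the current counter is
    stored, so extra space is O(1) beyond the subset being built."""
    n = len(s)
    for mask in range(2 ** n):
        yield [s[i] for i in range(n) if mask & (1 << i)]


def subset_sum_bitmask(s, t):
    """Brute force over all 2^n subsets, stopping at the first hit."""
    return any(sum(subset) == t for subset in subsets_by_bitmask(s))


print(subset_sum_bitmask([3, 34, 4, 12, 5, 2], 9))  # True
```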
Consider the case where |S'| is around n/2. We then have to examine on the order of C(n, n/2) distinct subsets before finding a solution. A somewhat different approach computes all possible sums of subsets and checks whether t appears among them.
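The "all possible sums" idea can be sketched by growing the set of achievable sums one element at a time; the cap at t assumes non-negative inputs, as in the problem definition, and the names are our own:

```python
def all_subset_sums(s, t):
    """Build the set of achievable subset sums incrementally: after
    processing s[i], sums holds every total obtainable from the first
    i + 1 elements. The set can double at each step, so the worst case
    is still exponential, but duplicate sums are merged for free."""
    sums = {0}                               # the empty subset
    for x in s:
        # Sums above t can never come back down to t (inputs are >= 0).
        sums |= {y + x for y in sums if y + x <= t}
    return t in sums


print(all_subset_sums([3, 34, 4, 12, 5, 2], 9))  # True
```

Merging duplicates is the seed of the approximate algorithm in the next section: trimming goes further and also merges sums that are merely close.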
2.2 Algorithms to Solve Subset Sum Problem
2.2.1 Polynomial-Time Approximate Algorithm
A polynomial time approximation scheme (PTAS) is an algorithm that takes as input not only an instance of the problem but also a value ε > 0 and approximates the optimal solution to within a ratio bound of 1 + ε. For any choice of ε the algorithm has a running time that is a polynomial in n, the size of the input. A fully polynomial-time approximation scheme (FPTAS) is a PTAS with a running time that is polynomial not only in n but also in 1/ε.
2.2.2 Trim Subroutine for the Approximate Subset Sum Algorithm
Let L = {x1, x2, . . . , xm} be a list. If several values in L are close to each other, we keep only one of them; that is, we trim each list Li after it is created. To trim the list by a parameter δ means to remove as many elements from L as possible in such a way that the list L' of remaining elements has the following property:
For every y ∈ L there exists a z ∈ L' such that (1 − δ) y ≤ z ≤ y.
Algorithm: Trim (L = {y1, . . . , ym}, δ), with L sorted in increasing order
1. m ← |L|
2. L' ← {y1}; last ← y1
3. for i ← 2 to m do
4.    if (1 − δ) · yi > last then   // yi is not represented by last
5.       append yi to L'; last ← yi
6. return L'
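A Python sketch of the trimming step, under the (1 − δ) guarantee stated above and assuming L is sorted in increasing order (the implementation details are ours):

```python
def trim(L, delta):
    """Trim a sorted list of subset sums: keep a value only when it is
    not within a factor of (1 - delta) of the last kept value, so that
    every removed y still has a kept z with (1 - delta) * y <= z <= y."""
    trimmed = [L[0]]
    last = L[0]
    for y in L[1:]:
        if (1 - delta) * y > last:   # last cannot stand in for y: keep y
            trimmed.append(y)
            last = y
    return trimmed


print(trim([10, 11, 12, 15, 20, 21, 22, 23, 24, 29], 0.1))
# → [10, 12, 15, 20, 23, 29]
```

For example, 11 is dropped because 0.9 × 11 = 9.9 ≤ 10 ≤ 11, so the kept value 10 represents it within the δ = 0.1 tolerance.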
