Proceedings of 2010 IEEE 17th International Conference on Image Processing

September 26-29, 2010, Hong Kong

K-NEAREST NEIGHBOR SEARCH: FAST GPU-BASED IMPLEMENTATIONS AND APPLICATION TO HIGH-DIMENSIONAL FEATURE MATCHING

Vincent Garcia¹, Éric Debreuve², Frank Nielsen¹,³, Michel Barlaud²

¹ Ecole Polytechnique, Laboratoire d'informatique LIX, 91128 Palaiseau Cedex, France
² Laboratoire I3S, 2000 route des lucioles, BP 121, 06903 Sophia Antipolis Cedex, France
³ Sony CSL, 3-14-13 Higashi Gotanda, Shinagawa-ku, 141-0022 Tokyo, Japan

978-1-4244-7994-8/10/$26.00 ©2010 IEEE

ABSTRACT

The k-nearest neighbor (kNN) search problem is widely used in domains and applications such as classification, statistics, and biology. In this paper, we propose two fast GPU-based implementations of the brute-force kNN search algorithm using the CUDA and CUBLAS APIs. We show that our CUDA and CUBLAS implementations are up to, respectively, 64X and 189X faster on synthetic data than the highly optimized ANN C++ library, and up to, respectively, 25X and 62X faster on high-dimensional SIFT matching.

Index Terms: k-nearest neighbors, GPU, CUDA/CUBLAS, SIFT

1. INTRODUCTION

The k-nearest neighbor (kNN) search is a problem found in many research and industrial domains such as 3-dimensional object rendering, content-based image retrieval [1], statistics (estimation of entropies and divergences [2]), and biology (gene classification [3]). Let us consider a set R of m reference points R = {r1, r2, ..., rm} defined in a d-dimensional space, and let q be a query point defined in the same space. The kNN search problem consists in determining the k points of R closest to q. The distance considered between two points is not restricted to the Euclidean distance: for some applications, other distances are better adapted to the nature of the points (e.g., a mutual information-based metric in the case of histograms). Figure 1 illustrates the kNN search problem in a 2-dimensional space for k = 3 using the Euclidean distance: the blue dots are the reference points and the red cross is the query point.

Fig. 1. Illustration of the kNN search problem in R² with k = 3 using the Euclidean distance.

The exhaustive search, also called the brute-force algorithm, is a basic kNN search method consisting in computing the distances between the query point and each of the reference points; the k-nearest neighbors are then trivially determined using a sorting algorithm. Behind its apparent simplicity, this algorithm is highly demanding in terms of computation time. In the last decades, several approaches [4, 5] have been proposed with one common goal: to reduce the computation time. These methods generally seek to reduce the number of distances that have to be computed, for instance through a pre-arrangement of the data. The direct consequence is a speed-up of the search. However, in spite of this improvement, the computation time required by the kNN search remains the bottleneck of kNN-based methods.

General-purpose computing on graphics processing units (GPGPU) is the technique of using a graphics processing unit (GPU) to perform computations usually handled by the CPU. The key idea is to use the parallel computing power of the GPU to achieve significant speed-ups. Numerous recent publications use GPU programming to speed up their methods [6, 7]. In a previous work [8], we showed that implementing the brute-force method with GPU programming (through NVIDIA CUDA) greatly reduces the computation time in comparison to a similar C implementation and to the highly optimized ANN C++ library [4]. In this paper, we modify our approach to use the CUBLAS library (the CUDA implementation of the BLAS library). We show that the new implementation is up to 189 times faster than ANN and up to 4 times faster than our previous approach. Experiments on high-dimensional feature matching (SIFT [9]) are reported.

2. BRUTE-FORCE KNN SEARCH AND GPU IMPLEMENTATION

2.1. Algorithm

Let us consider a set R of m reference points R = {r1, r2, ..., rm} in a d-dimensional space and a set Q of n query points Q = {q1, q2, ..., qn} in the same space. Given a query point q ∈ Q, the brute-force algorithm (denoted by BF) is composed of the following steps:

1. Compute the distances between q and the m reference points of R;
2. Sort the m distances;
3. The k-nearest neighbors of q are the k points of R corresponding to the k lowest distances.

The output of the algorithm can be the ordered set of these k distances, the set of the k neighbors (actually their indices in R) ordered by increasing distance, or both. If we apply this algorithm to the n query points and consider the typical case of large sets (both references and queries), the complexity is overwhelming: O(nmd) multiplications for the n × m distances and O(nm log m) for the n sorting processes. However, the BF method is by nature highly parallelizable and, as a consequence, perfectly suitable for a GPU implementation.

2.2. CUDA implementation

In a recent publication [8], we proposed a GPU implementation of the BF method. This implementation was written using the NVIDIA CUDA API and was composed of two kernels (CUDA functions):

1. The first kernel computed the distance matrix of size m × n containing the distances between the n query points and the m reference points. The computation of this matrix was fully parallelized since the distances between pairs of points are independent: each thread computed the distance between a given query point qi and a given reference point rj;
2. The second kernel sorted the distance matrix. The n sorting processes (one for each query point) were parallelized since they are independent: each thread sorted all the distances computed for a given query point.

The sorting algorithm used was a modified version of insertion sort: assuming that the first k elements of the array D are already sorted, an element (say the l-th) is inserted into the correct position only if D[l] < D[k]. For small values of k, this sorting algorithm appears to be faster than the efficient comb sort algorithm.

Besides the distances, in case the indices of the k-nearest neighbors were also needed, an index matrix of size m × n was defined, containing in each column the indices of the reference points for one query, each column being initialized with the vector (1, 2, ..., m)ᵀ, where Mᵀ denotes the transpose of M. The element insertions performed in the distance matrix as part of the sorting processes were simultaneously applied to the index matrix so that, in the end, its uppermost k × n submatrix contained the queries' k-nearest neighbor indices ordered by increasing distance. Working with an initial m × n-index matrix represents a waste of memory.
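The modified insertion sort and the per-query index bookkeeping described above can be sketched on the CPU as follows. This is a NumPy sketch for illustration only (one loop iteration plays the role of one CUDA thread, indices are 0-based, and k ≤ m is assumed); it is not the paper's GPU code.

```python
import numpy as np

def knn_insertion_sort(dist, k):
    """For each query (one column of `dist`, shape m x n), keep the k
    smallest distances in increasing order together with the matching
    reference indices. Mirrors the modified insertion sort of [8]:
    element l is inserted only if it beats the current k-th distance."""
    m, n = dist.shape
    best_d = np.empty((k, n))
    best_i = np.empty((k, n), dtype=int)
    for j in range(n):                        # one "thread" per query
        d = dist[:, j]
        order = np.argsort(d[:k])             # seed with the first k, sorted
        col_d = list(d[:k][order])
        col_i = list(order)
        for l in range(k, m):
            if d[l] < col_d[k - 1]:           # insert only if D[l] < D[k]
                pos = k - 1
                while pos > 0 and d[l] < col_d[pos - 1]:
                    pos -= 1
                col_d.insert(pos, d[l]); col_d.pop()
                col_i.insert(pos, l);    col_i.pop()
        best_d[:, j] = col_d
        best_i[:, j] = col_i
    return best_d, best_i

# Hypothetical toy data: m = 4 references, n = 2 queries.
dist = np.array([[4., 1.], [2., 3.], [1., 2.], [3., 0.5]])
bd, bi = knn_insertion_sort(dist, 2)
# bi → [[2, 3], [1, 0]]: nearest reference indices per query (0-based)
```

Note that the output buffers are already of size k × n, which is the memory-saving "trick" discussed next.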
A simple "trick" allows us to work with a k × n index matrix from the beginning, thus avoiding the (m − k) × n memory overhead. The sorting process deals with each array column monotonically from the first to the last element; consequently, at each iteration, the index of the considered reference point is known. While sorting, if the l-th element needs to be inserted into the distance matrix, the index value l is inserted into the k × n index matrix at the exact same position, iteratively filling in the whole matrix. The main result of [8] is that the proposed CUDA implementation was up to 300X faster than a similar C implementation and up to 150X faster than the highly optimized ANN C++ library [4].

2.3. CUBLAS implementation

BLAS is the celebrated, highly optimized linear algebra library specialized in vector/matrix operations. CUBLAS is the CUDA implementation of BLAS and improves on the performance of the classical BLAS functions. The CUDA implementation of the kNN search [8] was very efficient in terms of computation time, the distance matrix computation representing the main part of it. In this paper, we show that we can greatly improve the overall performance of the kNN search by reformulating the computation of the distance matrix and by using the CUBLAS library. Let us consider two points x and y in a d-dimensional space:

x = (x1, x2, ..., xd)ᵀ,   y = (y1, y2, ..., yd)ᵀ,   (1)

where Mᵀ denotes the transpose of M. The classical way to compute the Euclidean distance, denoted by ρ, between x and y is

ρ(x, y) = sqrt( Σ_{i=1}^{d} (x_i − y_i)² ).   (2)

However, this distance computation can be rewritten to involve matrix additions and multiplications:

ρ²(x, y) = (x − y)ᵀ(x − y) = ‖x‖² + ‖y‖² − 2xᵀy,   (3)

where ‖·‖ is the Euclidean norm. The square root is then computed at the end. This approach can be extended to handle sets of points. Let R and Q be two matrices of size d × m and d × n containing, respectively, the m reference points and the n query points. The m × n matrix ρ²(R, Q) containing all the pairwise squared distances between query points and reference points is given by

ρ²(R, Q) = N_R + N_Q − 2RᵀQ,   (4)

where the elements of the i-th row of N_R are all equal to ‖r_i‖² and the elements of the j-th column of N_Q are all equal to ‖q_j‖². The way ρ²(R, Q) is expressed in Eq. (4) (i.e., through matrix additions and multiplications) is perfectly adapted to a CUBLAS implementation: only N_R and N_Q have to be computed separately, for instance using a CUDA kernel. However, this method is highly demanding in terms of memory usage and needs to be optimized: indeed, N_R, N_Q, and RᵀQ are stored in three matrices of size m × n. Nevertheless, the matrices N_R and N_Q have a specific form, and we took advantage of this to optimize the memory usage: we stored N_R and N_Q as vectors of dimension 1 × m and 1 × n, respectively, the i-th element of N_R being equal to ‖r_i‖² and similarly for N_Q. The additions and subtraction in Eq. (4) were then handled by classical CUDA kernels. The proposed kNN search implementation is thus based on both CUDA and CUBLAS and is composed of the following kernels:

1. Compute the vector N_R using CUDA (coalesced read/write);
2. Compute the vector N_Q using CUDA (coalesced read/write);
3. Compute the m × n matrix A = −2RᵀQ using CUBLAS;
4. Add the i-th element of N_R to every element of the i-th row of A using CUDA (grid of m × n threads, non-coalesced read/write: use of the shared memory); the resulting matrix is denoted by B;
5. Sort in parallel each column of B (with n threads) using the modified insertion sort proposed in [8]; the resulting matrix is denoted by C;
6. Add the j-th value of N_Q to the first k elements of the j-th column of C using CUDA (coalesced read/write); the resulting matrix is denoted by D;
7. Compute the square roots of the first k elements of each column of D to obtain the k smallest distances (coalesced read/write); the resulting matrix is denoted by E;
8. Extract the uppermost k × n submatrix of E; the result is the desired distance matrix for the k-nearest neighbors of each query.

Note that the matrix names were given for algorithmic clarity only.
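The decomposition of Eq. (4) can be checked numerically with a small CPU-side sketch, where NumPy stands in for the CUDA/CUBLAS kernels and the sizes are arbitrary:

```python
import numpy as np

# Hypothetical small example: R holds m reference points and Q holds
# n query points column-wise (d x m and d x n), as in the paper.
rng = np.random.default_rng(0)
d, m, n = 8, 5, 3
R = rng.standard_normal((d, m))
Q = rng.standard_normal((d, n))

# Eq. (4): rho^2(R, Q) = N_R + N_Q - 2 R^T Q. The squared norms are kept
# as vectors (kernels 1 and 2) and broadcast instead of storing full
# m x n matrices (kernels 4 and 6 in the pipeline above).
NR = np.sum(R**2, axis=0)[:, None]   # (m, 1): ||r_i||^2
NQ = np.sum(Q**2, axis=0)[None, :]   # (1, n): ||q_j||^2
A = -2.0 * R.T @ Q                   # the GEMM step, done by CUBLAS in the paper
dist2 = NR + NQ + A                  # m x n matrix of squared distances

# Check against the direct definition of Eq. (2).
direct = ((R[:, :, None] - Q[:, None, :])**2).sum(axis=0)
assert np.allclose(dist2, direct)
```

The single matrix product dominates the cost, which is why delegating it to an optimized GEMM pays off.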
Actually, once A is computed, all the remaining computations are done "in place", meaning that the matrices from A to E are in fact a single matrix occupying a unique area of memory. The main computational task (i.e., the computation of A in kernel 3) is performed by CUBLAS. The addition of N_Q to C and the computation of the square roots can be done after the sorting process since these steps do not influence the distance order; by applying these two kernels (6 and 7) at the end of the procedure, the computation time is reduced since only the first k elements of each column are processed. If the matrix A does not fit into the GPU memory, the query points are split, processed separately, and the distances to the k-nearest neighbors are then merged together on the CPU side. As explained in Section 2.2, if the indices of the k-nearest neighbors are also required, an index matrix I of size k × n is defined; in kernel 5, the distance ordering is replicated in this index matrix.

2.4. Source code

The source code corresponding to this paper, as well as the code for the paper [8], is available at http://www.i3s.unice.fr/~creative/KNN under a Creative Commons license. It comes in two versions: one computing only the distances to the k nearest neighbors, the other computing both the distances and the corresponding neighbor indices. In the experiments, we used the second (thus slower) version.

3. EXPERIMENTS

We compared our CUDA and CUBLAS implementations to one of the fastest kNN search methods: the C++ library ANN [4]. CUDA will refer to the fully CUDA-based algorithm proposed in [8], CUBLAS to the mixed CUBLAS/CUDA algorithm proposed in this paper, and ANN to the C++ ANN library. The three algorithms were compared through Matlab (The MathWorks, Inc.) Mex files. The computer used for this comparison was a Dell Precision M6400 laptop (Intel Core 2 Duo at 2.53 GHz, 4 GB DDR2 memory, NVIDIA Quadro FX 3700M) running 32-bit Microsoft Windows XP, NVIDIA CUDA 2.2, and Matlab 2007b.

3.1. kNN search on synthetic datasets

We compared the computation times of the three algorithms on synthetic datasets. The points (references and queries) were randomly drawn from a normal distribution N(0, 1).
In this experiment, n = m denotes the number of points (identical for references and queries) and k (the number of neighbors to consider) was set to 20 (as a reminder, d is the dimension of the points). Table 1 shows the computation time for different values of n and d for each algorithm, and Figure 2 shows the log-computation time as a function of d and of n, respectively. The computation time increases with n and d for all three algorithms. For ANN, this was to be expected. Figure 2 shows that the parallelization (CUDA and CUBLAS) is "better achieved" with respect to the dimension than with respect to the number of points; moreover, regarding the dimension, CUBLAS is closer to the fully parallel performance (a flat curve) than CUDA. ANN is faster than CUDA and CUBLAS only for small dimensions (d ≤ 4) or small numbers of points (n ≤ 256): in such cases, CUDA and CUBLAS are penalized by the time needed to transfer data from host memory (CPU) to device memory (GPU) and back. For high dimensions and large numbers of points (usually the case in practice), this penalty becomes negligible compared to the gain achieved by parallelization. In such conditions, CUDA and CUBLAS were up to 64X and 189X faster than ANN, respectively.
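As noted in Section 2.3, when the full m × n distance matrix exceeds the available (GPU) memory, the query points are processed in chunks. The merge step is simple because each chunk already yields final neighbors for its own queries. A minimal CPU sketch, with NumPy in place of the GPU kernels, a full sort standing in for the modified insertion sort, and a purely illustrative chunk size:

```python
import numpy as np

def knn_chunked(R, Q, k, chunk=1000):
    """k-nearest neighbors with the query set split into chunks, so only an
    m x chunk block of distances is alive at a time (the paper does this on
    the GPU when A does not fit in device memory; plain NumPy here).
    `chunk` is a hypothetical size, not a value from the paper."""
    nr2 = np.sum(R**2, axis=0)[:, None]          # ||r_i||^2, reused per chunk
    all_d, all_i = [], []
    for s in range(0, Q.shape[1], chunk):
        Qc = Q[:, s:s + chunk]
        d2 = nr2 + np.sum(Qc**2, axis=0)[None, :] - 2.0 * R.T @ Qc  # Eq. (4)
        idx = np.argsort(d2, axis=0)[:k]         # k smallest per column
        all_i.append(idx)
        all_d.append(np.sqrt(np.take_along_axis(d2, idx, axis=0).clip(min=0.0)))
    # each chunk holds final answers for its own queries, so merging
    # amounts to concatenating along the query axis
    return np.hstack(all_d), np.hstack(all_i)
```

The chunk size trades peak memory against the number of kernel launches and transfers, echoing the launch-overhead effect discussed above.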

Fig. 2. Log-computation time as a function (Top) of d for k = 20 and n = 16384, and (Bottom) of n for k = 20 and d = 32.

CUBLAS appears faster than CUDA (up to 4X) for high-dimensional spaces and large datasets. For small values of d and n, CUDA is faster than CUBLAS due to the smaller number of kernels called (2 instead of 7), each kernel call costing some time: the time needed to compute the distances with CUDA (1 kernel) is too small to be efficiently replaced by the CUBLAS-based kernels. Again, in practice, spaces are of high dimension and datasets are large.

3.2. kNN search applied to high-dimensional SIFT matching

We compared the computation times of the three algorithms in the context of high-dimensional SIFT [9] feature matching. This kind of matching can be found in applications such as content-based image retrieval: SIFT features are extracted from a set of reference images and stored in a database. Then, given a query image I, the retrieval process extracts SIFT features from I and, for each of them, finds the k closest features in the database. Finally, a voting algorithm determines the images most similar to I in the image database. For this experiment, we considered a set Q of 1024 query points (SIFT features), which is approximately the usual number of features extracted from a single image. The reference set R contained from 2^7 = 128 to 2^16 = 65536 points. These two sets of SIFT features correspond to a subset of the features extracted from the INRIA Holidays dataset [10]. The dimension of a SIFT feature is


        Method   n=256   n=512   n=1024  n=2048  n=4096  n=8192  n=16384  n=32768   n=65536
d=1     ANN      0.001   0.002   0.004   0.007   0.015   0.031   0.063    0.132     0.277
        CUDA     0.042   0.046   0.054   0.063   0.081   0.154   0.480    2.489     8.302
        CUBLAS   0.042   0.047   0.055   0.066   0.091   0.213   0.698    1.615     5.694
d=4     ANN      0.003   0.005   0.012   0.027   0.059   0.125   0.275    0.591     1.364
        CUDA     0.042   0.048   0.055   0.066   0.086   0.176   0.561    2.591     8.425
        CUBLAS   0.042   0.048   0.057   0.068   0.093   0.220   0.733    1.812     6.076
d=16    ANN      0.007   0.028   0.109   0.385   1.421   5.468   20.289   84.503    378.496
        CUDA     0.043   0.049   0.056   0.070   0.105   0.247   0.805    2.542     8.900
        CUBLAS   0.044   0.049   0.056   0.067   0.092   0.203   0.791    2.123     7.225
d=64    ANN      0.017   0.073   0.299   0.949   3.279   13.365  74.183   313.527   1296.367
        CUDA     0.044   0.050   0.062   0.087   0.176   0.528   1.950    5.518     20.441
        CUBLAS   0.044   0.049   0.057   0.069   0.102   0.242   0.904    3.104     10.887
d=256   ANN      0.051   0.194   0.742   2.933   14.579  76.454  334.509  1053.819  3559.731
        CUDA     0.045   0.055   0.081   0.159   0.459   1.641   6.910    19.381    75.718
        CUBLAS   0.044   0.050   0.060   0.079   0.146   0.405   2.394    7.157     29.480

Table 1. Computation times (in seconds) for the kNN search. CUDA and CUBLAS are up to 64X and 189X faster than ANN, respectively.

128 and k was set equal to 20.

Fig. 3. Speed-up between every pair of algorithms as a function of the number of reference SIFT features (d = 128, k = 20).

Figure 3 shows the speed-up between every pair of algorithms among the three as a function of the number of reference SIFT features. The curve "Alg1 vs. Alg2" must be interpreted as the computation time of Alg1 divided by the computation time of Alg2; when the curve is higher than one, Alg2 is faster than Alg1 by a factor equal to the curve level. The speed-up achieved by CUDA or CUBLAS in comparison with ANN increased significantly with the number of features. The speed-up achieved by CUBLAS in comparison with CUDA increased much less: both algorithms rely on parallelization, but CUBLAS computes and sorts the distances more efficiently. In this experiment, CUDA was up to 25X faster than ANN, CUBLAS was up to 62X faster than ANN, and CUBLAS was up to 2.5X faster than CUDA.

4. CONCLUSION

We proposed two fast GPU-based implementations of the naive brute-force k-nearest neighbor (kNN) search algorithm based on the CUDA and CUBLAS APIs. In our experiments, the CUDA and CUBLAS implementations were up to 64X and 189X faster, respectively, on synthetic data than the highly optimized ANN C++ library, and up to 25X and 62X faster, respectively, on high-dimensional SIFT matching.

5. REFERENCES

[1] H. Zhang, A. C. Berg, M. Maire, and J. Malik, "SVM-KNN: Discriminative nearest neighbor classification for visual category recognition," in International Conference on Computer Vision and Pattern Recognition, New York (NY), USA, 2006.
[2] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi, "A new class of random vector entropy estimators and its applications in testing statistical hypotheses," J. Nonparametr. Stat., vol. 17, pp. 277–297, 2005.
[3] F. Pan, B. Wang, X. Hu, and W. Perrizo, "Comprehensive vertical sample-based KNN/LSVM classification for gene expression analysis," J. Biomed. Inform., vol. 37, pp. 240–248, 2004.
[4] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu, "An optimal algorithm for approximate nearest neighbor searching in fixed dimensions," Journal of the ACM, vol. 45, pp. 891–923, 1998.
[5] H. Jégou, M. Douze, and C. Schmid, "Searching with quantization: approximate nearest neighbor search using short codes and distance estimators," Tech. Rep. RR-7020, INRIA, 2009.
[6] D. Qiu, S. May, and A. Nüchter, "GPU-accelerated nearest neighbor search for 3D registration," in International Conference on Computer Vision Systems, Liège, Belgium, 2009.
[7] Y. Zhuge, Y. Cao, and R. W. Miller, "GPU accelerated fuzzy connected image segmentation by using CUDA," in Engineering in Medicine and Biology Conference, Minneapolis (MN), USA, 2009.
[8] V. Garcia, É. Debreuve, and M. Barlaud, "Fast k nearest neighbor search using GPU," in CVPR Workshop on Computer Vision on GPU, Anchorage (AK), USA, 2008.
[9] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision, vol. 60, pp. 91–110, 2004.
[10] H. Jégou, M. Douze, and C. Schmid, "Hamming embedding and weak geometric consistency for large scale image search," in European Conference on Computer Vision, Marseille, France, 2008.



Words: 69922 - Pages: 280

Free Essay

Thought on Business

...file:///C|/Documents%20and%20Settings/Administrator/Deskto...0BILL%20-%20BUSINESS%20AT%20THE%20SPEED%20OF%20THOUGHT.TXT BUSINESS AT THE SPEED OF THOUGHT by bill Gates ALSO By BILL GATES The Road Ahead BUSINESS AT THE SPEED OF THOUGHT: USING A DIGITAL NERVOUS SYSTEM BILL GATES WITH COLLINs HEMINGWAY 0 VMNER BOOKS A Time Warner Company To my wife, Melinda, and my daughter, Jennifer Many of the product names referred to herein are trademarks or registered trademarks of their respective owners. Copyright (D 1999 by William H. Gates, III All rights reserved. Warner Books, Inc, 1271 Avenue of the Americas, New York, NY 10020 Visit our Web site at www.warnerbooks.com 0 A Time Warner Company Printed in the United States of America First Printing: March 1999 10 9 8 7 6 5 4 3 2 1 ISBN: 0-446-52568-5 LC: 99-60040 Text design by Stanley S. Drate lFolio Graphics Co Inc Except as file:///C|/Documents%20and%20Settings/Admini...SINESS%20AT%20THE%20SPEED%20OF%20THOUGHT.TXT (1 of 392)12/28/2005 5:28:51 PM file:///C|/Documents%20and%20Settings/Administrator/Deskto...0BILL%20-%20BUSINESS%20AT%20THE%20SPEED%20OF%20THOUGHT.TXT indicated, artwork is by Gary Carter, Mary Feil-jacobs, Kevin Feldhausen, Michael Moore, and Steve Winard. ACKNOWLEDGMENTS I first want to thank my collaborator, Collins Hemingway, for his help in synthesizing and developing the material in this book and for his overall management of this project. I want to thank four CEOs who read a late draft of the manuscript and...

Words: 146627 - Pages: 587

Free Essay

A Hands on Intro to Hacking

...Hacking by Georgia Weidman San Francisco Penetration testing. Copyright © 2014 by Georgia Weidman. All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written permission of the copyright owner and the publisher. Printed in USA First printing 18 17 16 15 14   123456789 ISBN-10: 1-59327-564-1 ISBN-13: 978-1-59327-564-8 Publisher: William Pollock Production Editor: Alison Law Cover Illustration: Mertsaloff/Shutterstock Interior Design: Octopod Studios Developmental Editor: William Pollock Technical Reviewer: Jason Oliver Copyeditor: Pamela Hunt Compositor: Susan Glinert Stevens Proofreader: James Fraleigh Indexer: Nancy Guenther For information on distribution, translations, or bulk sales, please contact No Starch Press, Inc. directly: No Starch Press, Inc. 245 8th Street, San Francisco, CA 94103 phone: 415.863.9900; fax: 415.863.9950; info@nostarch.com; www.nostarch.com Library of Congress Cataloging-in-Publication Data Weidman, Georgia. Penetration testing : a hands-on introduction to hacking / Georgia Weidman. pages cm Includes index. ISBN 978-1-59327-564-8 (paperback) -- ISBN 1-59327-564-1 (paperback) 1. Penetration testing (Computer security) 2. Kali Linux. 3. Computer hackers. QA76.9.A25W4258 2014 005.8'092--dc23 2014001066 I. Title. No Starch Press and the No Starch Press logo are registered...

Words: 117203 - Pages: 469

Free Essay

Oracle for Dummies

...™ Everything Easier! Making cle 11g Ora ® Learn to: • Set up and manage an Oracle database • Maintain and protect your data • Understand Oracle database architecture • Troubleshoot your database and keep it running smoothly Chris Zeis Chris Ruel Michael Wessler www.it-ebooks.info www.it-ebooks.info Oracle 11g ® FOR DUMmIES ‰ www.it-ebooks.info www.it-ebooks.info Oracle 11g ® FOR DUMmIES by Chris Zeis, Chris Ruel, and Michael Wessler ‰ www.it-ebooks.info Oracle® 11g For Dummies® Published by Wiley Publishing, Inc. 111 River Street Hoboken, NJ 07030-5774 www.wiley.com Copyright © 2009 by Wiley Publishing, Inc., Indianapolis, Indiana Published by Wiley Publishing, Inc., Indianapolis, Indiana Published simultaneously in Canada No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online...

Words: 106399 - Pages: 426

Premium Essay

Managing Information Technology (7th Edition)

...CONTENTS: CASE STUDIES CASE STUDY 1 Midsouth Chamber of Commerce (A): The Role of the Operating Manager in Information Systems CASE STUDY I-1 IMT Custom Machine Company, Inc.: Selection of an Information Technology Platform CASE STUDY I-2 VoIP2.biz, Inc.: Deciding on the Next Steps for a VoIP Supplier CASE STUDY I-3 The VoIP Adoption at Butler University CASE STUDY I-4 Supporting Mobile Health Clinics: The Children’s Health Fund of New York City CASE STUDY I-5 Data Governance at InsuraCorp CASE STUDY I-6 H.H. Gregg’s Appliances, Inc.: Deciding on a New Information Technology Platform CASE STUDY I-7 Midsouth Chamber of Commerce (B): Cleaning Up an Information Systems Debacle CASE STUDY II-1 Vendor-Managed Inventory at NIBCO CASE STUDY II-2 Real-Time Business Intelligence at Continental Airlines CASE STUDY II-3 Norfolk Southern Railway: The Business Intelligence Journey CASE STUDY II-4 Mining Data to Increase State Tax Revenues in California CASE STUDY II-5 The Cliptomania™ Web Store: An E-Tailing Start-up Survival Story CASE STUDY II-6 Rock Island Chocolate Company, Inc.: Building a Social Networking Strategy CASE STUDY III-1 Managing a Systems Development Project at Consumer and Industrial Products, Inc. CASE STUDY III-2 A Make-or-Buy Decision at Baxter Manufacturing Company CASE STUDY III-3 ERP Purchase Decision at Benton Manufacturing Company, Inc. CASE STUDY III-4 ...

Words: 239887 - Pages: 960

Free Essay

None

... Syntax Highlighting: Prism by Lea Verou. Idea & Concept: Smashing Media GmbH Preface Over the past few years, our eBook collection has grown steadily. With more than 50 eBooks already available and counting, we made the decision half a year ago to bundle all of this valuable content into one big lovely package: The Smashing Library1. As a humble gift to you, dear reader, we have now put together this Editor’s Choice eBook. It’s a little Smashing Library treat, featuring some of the most memorable and useful articles that have been published on Smashing Magazine in the last few years — all of them carefully selected and thoroughly edited. Ranging from heavily discussed topics such as responsive Web design, to ideas on UX, to trusty mainstays like nifty Photoshop tricks, to hands-on business advice and design inspiration, this eBook is a potpourri as diverse as your work as a Web designer. We hope you enjoy reading it as much as we do editing and creating each and every eBook page that finds a home in our Smashing Library. — Cosima Mielke, Smashing eBook Producer 1. http://smashed.by/library 2 TABLE OF CONTENTS Designing For The Reading Experience ............................................................. 4 Logical Breakpoints For Your Responsive Design ........................................ 24 Sketching A New Mobile Web............................................................................. 35 Towards A Retina Web ..........................

Words: 39572 - Pages: 159