UNIVERSITY OF ENGINEERING AND TECHNOLOGY, TAXILA
FACULTY OF TELECOMMUNICATION AND INFORMATION ENGINEERING
COMPUTER ENGINEERING DEPARTMENT

Digital Image Processing
Lab Manual No 03

Dated: 31st August 2015 to 4th September 2015

Semester: Autumn 2015
Session: 2012 Computer
Lab Instructor: Engr. Farwa


Objectives: The objectives of this session are to understand the following:

Image Resizing
Image Interpolation
Relationships between Pixels
Distance Transform

Image Resizing: Resizing an image consists of enlarging or shrinking it using nearest-neighbor, bilinear, or bicubic interpolation. Both operations can be performed with the imresize function. Let us first explore enlarging an image.
Enlarge the cameraman image by a scale factor of 3. By default, the function uses bicubic interpolation.

I = imread('cameraman.tif');
I_big1 = imresize(I,3);
figure, imshow(I), title('Original Image');
figure, imshow(I_big1), title('Enlarged Image using bicubic interpolation');

Use the imtool function to inspect the resized image, I_big1.
Scale the image again using nearest-neighbor and bilinear interpolation.

I_big2 = imresize(I,3,'nearest');
I_big3 = imresize(I,3,'bilinear');
figure, imshow(I_big2), title('Resized using nearest-neighbor interpolation');
figure, imshow(I_big3), title('Resized using bilinear interpolation');


Question 1: Visually compare the three resized images. How do they differ?
Close any open figures.
Reduce the size of the cameraman image by a factor of 0.5 in both dimensions.
I_rows = size(I,1);
I_cols = size(I,2);
I_sm1 = I(1:2:I_rows, 1:2:I_cols);   % keep every other row and column
figure, imshow(I_sm1);
Although the subsampling technique above is computationally efficient, it simply discards rows and columns without any filtering, which can introduce artifacts; its limitations may therefore require another method. Just as we used the imresize function for enlarging, we can also use it for shrinking: a scale factor greater than 1 produces an image larger than the original, and a scale factor smaller than 1 produces a smaller one. Shrink the image using the imresize function.
I_sm2 = imresize(I,0.5,'nearest');
I_sm3 = imresize(I,0.5,'bilinear');
I_sm4 = imresize(I,0.5,'bicubic');
figure, subplot(1,3,1), imshow(I_sm2), title('Nearest-neighbor Interpolation');
subplot(1,3,2), imshow(I_sm3), title('Bilinear Interpolation');
subplot(1,3,3), imshow(I_sm4), title('Bicubic Interpolation');
Image Interpolation: When a small image is enlarged, for example when an image is zoomed to 400% as shown in Fig. 1, the color values of the four original adjacent pixels marked A, B, C, and D in (a) are copied to the new A, B, C, and D locations in (b) in accordance with the magnification factor. However, there is a large number of pixels between A, B, C, and D, such as P, whose values are unknown; these values must be estimated by interpolation.

Figure 1
Nearest Neighbor Interpolation: In the nearest neighbor interpolation algorithm, the position of pixel P in the magnified image is mapped back into the original image, and the distances between P and its neighboring points A, B, C, and D are calculated. The color value of pixel P is then set to the value of whichever neighbor is nearest to P.
In Fig. 2, suppose (i, j), (i, j+1), (i+1, j), and (i+1, j+1) are the 4-neighbor points, with values f(i, j), f(i, j+1), f(i+1, j), and f(i+1, j+1). The distances between (u, v) and (i, j), (i, j+1), (i+1, j), and (i+1, j+1) are calculated, and the value at (u, v) is set to the value of the point nearest to (u, v).

Figure 2

Lab Task: Write MATLAB code to interpolate an image using the nearest neighbor interpolation algorithm.
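A minimal sketch of one possible solution is given below, assuming a grayscale input image and a positive scale factor; the function name nn_interp and its interface are illustrative, not part of any MATLAB toolbox.

% Nearest neighbor interpolation (illustrative sketch, not a toolbox function).
% I     - grayscale input image
% scale - magnification factor (e.g. 3)
function J = nn_interp(I, scale)
    [rows, cols] = size(I);
    new_rows = round(rows * scale);
    new_cols = round(cols * scale);
    J = zeros(new_rows, new_cols, class(I));
    for r = 1:new_rows
        for c = 1:new_cols
            % Map the output pixel back into the original image and
            % round to the nearest source pixel, clamping at the borders.
            src_r = min(max(round(r / scale), 1), rows);
            src_c = min(max(round(c / scale), 1), cols);
            J(r, c) = I(src_r, src_c);
        end
    end
end

For example, J = nn_interp(imread('cameraman.tif'), 3); figure, imshow(J); should give a result comparable to imresize(I, 3, 'nearest').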
Basic Relationships between Pixels:
The 4-neighbors of a pixel p, denoted N4(p): any pixel p at (x, y) has two vertical and two horizontal neighbors, given by
(x+1, y), (x-1, y), (x, y+1), (x, y-1).

The 4 diagonal neighbors, denoted ND(p), are given by
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1).

The 8-neighbors, denoted N8(p): the 8-neighbors of a pixel p are its vertical, horizontal, and 4 diagonal neighbors, i.e. N8(p) = N4(p) together with ND(p).
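As a quick illustration of these definitions, the following sketch builds the neighbor coordinate sets for a given pixel; the helper name pixel_neighbors is illustrative.

% Illustrative helper: coordinate sets N4, ND and N8 of a pixel (x, y).
function [N4, ND, N8] = pixel_neighbors(x, y)
    N4 = [x+1 y; x-1 y; x y+1; x y-1];          % horizontal and vertical neighbors
    ND = [x+1 y+1; x+1 y-1; x-1 y+1; x-1 y-1];  % diagonal neighbors
    N8 = [N4; ND];                              % union of the two sets
end

For instance, [N4, ND, N8] = pixel_neighbors(5, 5) lists the coordinates of the 4-, diagonal, and 8-neighbors of pixel (5, 5); neighbors that fall outside the image boundary would still need to be discarded by the caller.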


Connectivity: Two pixels are said to be connected if they are adjacent in some sense, that is:

they are neighbors (N4, ND, N8), and
their intensity values (gray levels) are similar.
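In MATLAB, the practical effect of choosing 4- versus 8-connectivity can be seen with the bwlabel function on a binary image; the small test matrix below is an illustrative example, not part of the lab figures.

% Two diagonal pixels: one connected component under 8-connectivity,
% two separate components under 4-connectivity.
BW = [1 0 0;
      0 1 0;
      0 0 0];
num4 = max(max(bwlabel(BW, 4)));   % number of 4-connected components -> 2
num8 = max(max(bwlabel(BW, 8)));   % number of 8-connected components -> 1
fprintf('4-connected components: %d, 8-connected components: %d\n', num4, num8);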

Adjacency: Let V be the set of intensity values used to define adjacency; e.g. V = {1} in a binary image, or V = {100, 101, 102, ..., 120} in a gray-scale image.
We consider three types of adjacency:
4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).

8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).

m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
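A small sketch of how 4- and 8-adjacency of two individual pixels could be checked in MATLAB is given below; the function name are_adjacent and its interface are illustrative assumptions, and it may serve as a starting point for the lab task that follows.

% Illustrative check for 4- or 8-adjacency of pixels p = (px, py) and
% q = (qx, qy) in image I, given the intensity set V (e.g. V = [1]).
function adj = are_adjacent(I, px, py, qx, qy, V, conn)
    % Both pixel values must belong to V.
    if ~ismember(I(px, py), V) || ~ismember(I(qx, qy), V)
        adj = false;
        return;
    end
    dx = abs(px - qx);
    dy = abs(py - qy);
    if conn == 4
        adj = (dx + dy) == 1;        % q is in N4(p)
    else                             % conn == 8
        adj = max(dx, dy) == 1;      % q is in N8(p)
    end
end

For example, are_adjacent(I, 2, 3, 3, 4, [1], 8) returns true if both pixels have value 1 and are diagonal neighbors.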


Lab Task:
a. Consider the two image subsets, S1 and S2, shown in the following figure. For V = {1}, determine whether these two subsets are
(a) 4-adjacent,
(b) 8-adjacent.

b. If the set V = {1, 2}, check whether the following is 4-adjacent or 8-adjacent.

Distance Measures: The distance between two pixels p and q, with coordinates (x, y) and (s, t) respectively, can be computed with the following distance measures.

Euclidean Distance: De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)

City Block Distance: D4(p, q) = |x - s| + |y - t|
Chessboard Distance: D8(p, q) = max(|x - s|, |y - t|)
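As a quick worked example, all three measures can be computed directly from the coordinate differences; the coordinates below are arbitrary illustrative values.

% Distances between p = (x, y) and q = (s, t) under the three measures.
x = 2; y = 3;      % coordinates of p (illustrative)
s = 5; t = 7;      % coordinates of q (illustrative)
De = sqrt((x - s)^2 + (y - t)^2);    % Euclidean distance  -> 5
D4 = abs(x - s) + abs(y - t);        % city block distance -> 7
D8 = max(abs(x - s), abs(y - t));    % chessboard distance -> 4
fprintf('De = %.2f, D4 = %d, D8 = %d\n', De, D4, D8);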
In MATLAB there is a function named bwdist which is used to calculate the distance transform.
Syntax: D = bwdist(BW)
D = bwdist(BW) computes the Euclidean distance transform of the binary image BW. For each pixel in BW, the distance transform assigns a number that is the distance between that pixel and the nearest nonzero pixel of BW. bwdist uses the Euclidean distance metric by default. BW can have any dimension. D is the same size as BW.
Explore the MATLAB help for bwdist function for more understanding.
Lab Task: 1. Read at least 3 built-in images of MATLAB and apply the distance transform function on the images using the Euclidean, city-block, and chessboard distance measures. Comment on the output.
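A minimal sketch of one way to approach this task is shown below; it assumes the built-in demo images 'cameraman.tif', 'coins.png', and 'rice.png' are available, and it relies on bwdist accepting a distance method as its second argument ('euclidean', 'cityblock', or 'chessboard').

% Distance transform of thresholded built-in images under three metrics.
images  = {'cameraman.tif', 'coins.png', 'rice.png'};
methods = {'euclidean', 'cityblock', 'chessboard'};
for i = 1:numel(images)
    I  = imread(images{i});
    BW = im2bw(I, graythresh(I));         % binarize before the distance transform
    figure('Name', images{i});
    for m = 1:numel(methods)
        D = bwdist(BW, methods{m});       % distance transform under this metric
        subplot(1, 3, m);
        imshow(D, []);                    % scale distances for display
        title(methods{m});
    end
end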

