
Introduction to Image Processing and Compression Using Matlab/Octave
Table of Contents

Image Processing and Compression Using Matlab
1.0 Basic image operation
    1.1 Image display
    1.2 Color Image Display
    1.3 Image Array Indexing
    1.4 Converting image array to image file
    1.5 Image Manipulation
        1.5.1 Resize image
    1.6 Image Analysis Using Histogram
        1.6.1 Generating feature vector for image using greyscale histogram
        1.6.2 Comparing Image Descriptor
        1.6.3 Colour Histogram as Image Descriptor
        1.6.4 Two dimensional Colour Histogram as Image Descriptor
    1.7 Image Database Using Cell Array
    1.8 Edge Detection
2. Image Compression
    2.1 Forward 2D-DCT transform
    2.2 Inverse 2D-DCT transform
    2.3 DCT Transform Applied to 8x8 Image Sub-blocks

1.0 Basic image operation
1.1 Image display
Code
cd('C:\MATLABR11\work\lab1')
a = imread('cameraman.tif');   % this is a grayscale image
whos                           % view the image resolution and size

Output

Name      Size        Bytes    Class
a         256x256     65536    uint8

Note
Each element a(i,j) holds the grayscale intensity of the pixel at row i and column j. a(1,1) is the pixel at the top-left corner and a(256,256) is the pixel at the bottom-right corner.
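For example, a quick illustrative check of the two corner pixels (the printed values depend on the image):

a(1,1)       % grayscale value of the top-left pixel
a(256,256)   % grayscale value of the bottom-right pixel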

figure(1), imshow(a); axis on   % display the image in figure 1

minval = min(min(a))   % find the min value of the pixels in array a
maxval = max(max(a))   % find the max value of the pixels in array a

% convert matrix a from uint8 to double so that b(i,j) is in the range [0,1]
b = im2double(a);
% scale b so that it has the same values as image a but in double format
b = b * 255;

Result
minval = 7
maxval = 253

Code: Find image dimension
[nrow ncol] = size(b); % find the number of rows and columns of image array b

1.2: Color Image Display
% 1.2 color image display
addpath('C:\MATLABR11\work\lab1\image');
imColor = imread('children.tif');   % read the color image
figure(3), imshow(imColor)          % display in figure 3

1.3 Image Array Indexing

Im = imread('cameraman.tif');   % this is a grayscale image
figure(1), imshow(Im); axis on
pixval on;   % move the cursor over the image to view the pixel values

The grayscale image read from the file is stored in the two-dimensional array (matrix) variable Im. Each element Im(i,j) is the pixel value at row i and column j. In Matlab, array indices start at 1, so an image with resolution MxN is stored as an MxN array of grayscale values.

Figure: moving the cursor over the displayed image shows, for example, Im(31,22) = 164, i.e. the grey level intensity at pixel (31,22) is 164. For an 8-bit pixel, the grey intensity level ranges from 0 to 255.

Colour Image
An RGB colour image is stored in a three-dimensional array variable imC.

% color image display
imC = imread('hand.jpg');   % read the color image
imshow(imC)                 % display the image
pixel_val = imC(35,80,:);
disp(pixel_val);

ans(:,:,1) = 3     % red value
ans(:,:,2) = 121   % green value
ans(:,:,3) = 191   % blue value

From the image plot, the colour of the pixel at row 35 and column 80, imC(35,80), is represented by the 3-tuple (3,121,191): Red R = 3, Green G = 121 and Blue B = 191.

imA = imread('redbox.bmp');   % read the red box image
imshow(imA)

Pixel values of the red colour plane, imA(:,:,1), a 10x10 array:

>> imA(:,:,1)
ans =

   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
   255   255   255   255   255   255   255   255   255   255
>> imA(:,:,2)
ans =
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0

>> imA(:,:,3)
ans =
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0     0     0

Pixel imA(1,1) has the value (255,0,0). This implies that:

imA(1,1,1) = 255   % red intensity = 255
imA(1,1,2) = 0     % green intensity = 0
imA(1,1,3) = 0     % blue intensity = 0

imC = imread('hand.jpg');   % read the color image
imC_blue = imC(:,:,3);      % extract the blue plane
imshow(imC_blue);

Image of the blue intensity of the color image

A colour image can be converted to a grayscale image:

imG = rgb2gray(imC);   % convert color image to grayscale
imshow(imG);

Exercise Q1
Display the red, green and blue components of the image children.tif. You should display three images, one for each colour component. A possible approach is sketched below.
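A minimal sketch of one way to do this (the figure numbers and variable names are illustrative, not part of the original lab):

imRGB = imread('children.tif');   % read the colour image
imR = imRGB(:,:,1);               % red component
imG = imRGB(:,:,2);               % green component
imB = imRGB(:,:,3);               % blue component
figure(1), imshow(imR);           % display the red component
figure(2), imshow(imG);           % display the green component
figure(3), imshow(imB);           % display the blue component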


1.4 Converting image array to image file
Use imwrite() function
% Convert a 3D array to a bmp image
% Create a red color square
A1(:,:,1) = ones(10,10) * 255 ; % set the red component to max value
A1(:,:,2) = ones(10,10) * 0 ;
A1(:,:,3) = ones(10,10) * 0 ;
% imwrite(A,filename,fmt) writes the image A to the file specified by
% filename in the format specified by fmt.
imwrite(A1,'redbox.bmp','bmp');

1.5 Image Manipulation
After storing the image in an array, the pixel value can be manipulated or changed.
The following example shows how a binary black and white image is transformed. Each black pixel is converted to white and each white pixel is converted to black. In other words, the black (pixel value = 0) and white (pixel value = 1) pixels are inverted.
The logical NOT operator ~ can be used: if A = 1, then ~A = 0.
% invert the pixel values of the binary image
imBW = imread('bwman.tif');
figure(1), imshow(imBW);
imBW2 = ~imBW;
figure(2), imshow(imBW2);

% the for-loop version
[nrow ncol] = size(imBW);
for i = 1:nrow
    for j = 1:ncol
        imBW3(i,j) = ~imBW(i,j);   % invert the pixel value
    end
end
figure(3), imshow(imBW3);

Output is a binary image that has been inverted
Figure 1


1.5.1 Resize image
When an image is resized to a smaller image, a downsampling operation is used. When it is resized to a larger image, an interpolation operation is applied.

imC = imread('hand.jpg');
imD = rgb2gray(imC);
[nrow ncol] = size(imD)
disp(' Reduce image by half');
nrow2 = round(nrow/2);
ncol2 = round(ncol/2);
imD2 = imresize(imD, [nrow2, ncol2]);
figure(1), imshow(imD);
figure(2), imshow(imD2);

Exercise Q2
Downsample the image cameraman.tif by a factor of four to get a smaller image. Then resize the smaller image by upsampling (interpolation) to get back the original resolution of 256x256.
Comment on the visual quality of the image by comparing the original image and the image after undergoing downsampling/interpolation. Write code to compare the difference between the two images using the mean square error E(a^, a) given by

E(a^, a) = (1/(M*N)) * sum over i=1..M, j=1..N of ( a(i,j) - a^(i,j) )^2

where the original MxN image is represented by the matrix a and the approximated image after downsampling/interpolation is a^. A sketch of one possible solution follows.
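A minimal sketch of one way to carry out this exercise (assuming the downsampled image is 64x64, i.e. the original reduced by a factor of four; variable names are illustrative):

a = imread('cameraman.tif');            % original 256x256 image
a_small = imresize(a, [64 64]);         % downsample by a factor of four
a_hat = imresize(a_small, [256 256]);   % upsample back to the original resolution
figure(1), imshow(a);
figure(2), imshow(a_hat);

% mean square error between the original image a and the approximation a_hat
[M, N] = size(a);
d = double(a) - double(a_hat);
E = sum(sum(d.^2)) / (M*N)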


1.6 Image Analysis Using Histogram
The distribution of the pixel values in an image can be represented by a histogram. For a greyscale image, the histogram shows the distribution of the grey intensities of all the pixels in the image. From the histogram below we can see that the majority of the grey values lie between 90 and 150.
I = imread('pout.tif');
imshow(I)
figure, imhist(I)

The histogram consists of a set of bins on the x axis and the count of values that fall within each bin on the y axis.
Figure: the greyscale image and its histogram (bin counts from 0 to about 1600 over grey levels 0 to 250).

Definition of Histogram
Given N data samples f1, f2, ..., fi, ..., fN, where each sample is a real scalar value, the samples are grouped into m intervals (bins). The m intervals are defined by the points b1, b2, ..., b(m+1). The dynamic range of the data is usually divided uniformly so that every bin has the same width bw = b(i+1) - b(i). Each bin is identified by its bin index number and its [lower limit, upper limit].
The following figure shows the division of the intervals for 5 bins.

Figure: the data range divided into 5 bins bounded by the points b1, b2, ..., b6; Bin 1 spans [b1, b2] and Bin 5 spans [b5, b6].

The histogram is H = [h1, h2, ..., hj, ..., hm], where hj records the number of samples that fall within the interval [bj, b(j+1)].

The histogram H can be transformed into a probability distribution by dividing each bin count hj by the total number of samples N.
The algorithm for the histogram function is very simple:

Set all the bin counts to 0, H = [0, 0, ..., 0]
for i = 1:N
    check which bin the sample f(i) falls into, i.e. find j such that bj <= f(i) < b(j+1),
    and increment the count hj by 1
end
1.7 Image Database Using Cell Array

>> database(1).imageName
ans = C:\MATLABR11\work\cbir\images2\110.jpg
>> im = database(1).imageName
im = C:\MATLABR11\work\cbir\images2\110.jpg
>> imshow(im)
>> surf(database(1).Hist)   % plot the 2D histogram

Figure: image 110.jpg and the surface plot of its 2D histogram over the saturation and value channels.

The following mat file contains the database that has been prepared using the code demo_createDatabase_all.m. To load the database into the Matlab workspace:

>> cd('C:\MATLABR11\work\cbir')
>> load database_cbir.mat

View the first entry in the database:

>> database(1)
ans =
    imageName: 'C:\MATLABR11\work\cbir\images2\110.jpg'
         Hist: ...
        label: ...
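For reference, a minimal sketch of how such a struct-array database might be built (this is not the actual demo_createDatabase_all.m; the folder path, the descriptor and the labels are illustrative assumptions, and the images are assumed to be RGB):

imageDir = 'C:\MATLABR11\work\cbir\images2';     % assumed image folder
files = dir(fullfile(imageDir, '*.jpg'));
for k = 1:length(files)
    fname = fullfile(imageDir, files(k).name);
    im = imread(fname);
    database(k).imageName = fname;               % full path of the image
    database(k).Hist = imhist(rgb2gray(im));     % placeholder descriptor (the lab uses a
                                                 % 2D saturation/value histogram instead)
    database(k).label = k;                       % illustrative label
end
save('database_cbir.mat', 'database');           % save the database for later use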

1.8 Edge Detection

close all
cd('C:\MATLABR11\work\lab1')
I = imread('coins.png');
BW = edge(I, 'sobel');
imshow(I);
figure, imshow(BW);

Edge detection result using the Sobel edge operator.
A simple example image with sharp edges and no background clutter: coins.png and its edge map (white pixels denote edge pixels).

An image with some blurred edges: face_bush.jpg and its edge map. With suitable code the eyes and the mouth could be detected.
An image with large background clutter and non-uniform texture: child1.jpg and its edge map (white pixels denote edge pixels).

2. Image Compression
In order to exploit the spatial redundancies of the pixel values, the image is transformed to the frequency domain. In JPEG image compression, the two-dimensional DCT transform is used. In the experiment you will discover that some DCT coefficients are more important than others because they have a larger effect on the reconstructed image.
Figure: a 10x10 image I is converted by the forward 2D-DCT transform into a 10x10 matrix of DCT coefficients J(1,1), J(1,2), ..., J(10,10); the inverse 2D-DCT transform reconstructs the image I' from these coefficients.

J(1,1) represents the DC coefficient (lowest frequency) of the image, i.e. the average pixel value. For a DCT coefficient J(m,n), a larger value of m corresponds to a larger vertical frequency component, whereas a larger value of n corresponds to a larger horizontal frequency component. The main aim of image compression is to reduce the image size as much as possible while preserving the quality of the reconstructed image. The reconstructed image I' is considered good if it appears similar to the original image I. There may be some slight distortion of the pixel values in I', so I' is not exactly identical to I. However, since the distortion is not detected by the human eye, this is acceptable.

2.1 Forward 2D- DCT transform
The forward 2D-DCT transform represents the image in the frequency domain, and this information is stored in a matrix of DCT coefficients. In JPEG, the image is commonly divided into 8x8 sub-image blocks and the forward 2D-DCT transform is applied to each 8x8 block.
The following code creates a set of simple images that vary in spatial frequency. Observe the DCT coefficients for the images with different vertical and horizontal frequencies. A peak value at the bottom right corner of the DCT coefficient matrix indicates that the image is filled with fast-changing pixel values in both the horizontal and vertical directions.
% The following code creates images with different spatial frequencies
A1 = ones(10,10);
A2 = ones(5,10);
A3 = ones(10,5);
A4 = ones(2,10);
A5 = ones(10,2);

M1 = A1;                             % dc value only
M2 = [A2 ; A2*0];                    % low vertical freq image
M3 = [A3   A3*0];                    % low horizontal freq image
M4 = [A4 ; A4*0 ; A4 ; A4*0 ; A4];   % high vertical freq image
M5 = [A5   A5*0   A5   A5*0   A5];   % high horizontal freq image
M6 = M4 & M5;                        % image with both vertical and horizontal freq

imwrite(M1,'M1.jpg','jpg');
imwrite(M2,'M2.jpg','jpg');
imwrite(M3,'M3.jpg','jpg');
imwrite(M4,'M4.jpg','jpg');
imwrite(M5,'M5.jpg','jpg');
imwrite(M6,'M6.jpg','jpg');

Im1 = imread('M1.jpg');
Im2 = imread('M2.jpg');
Im3 = imread('M3.jpg');
Im4 = imread('M4.jpg');
Im5 = imread('M5.jpg');
Im6 = imread('M6.jpg');

subplot(6,1,1), imshow(Im1);
subplot(6,1,2), imshow(Im2);
subplot(6,1,3), imshow(Im3);
subplot(6,1,4), imshow(Im4);
subplot(6,1,5), imshow(Im5);
subplot(6,1,6), imshow(Im6);

% Apply the DCT transform to the images and observe the DCT coefficients
J1 = dct2(Im1);
J2 = dct2(Im2);
J3 = dct2(Im3);
J4 = dct2(Im4);
J5 = dct2(Im5);
J6 = dct2(Im6);

figure(2);
subplot(6,1,1), imshow(abs(J1),[]), colormap(jet), colorbar
subplot(6,1,2), imshow(abs(J2),[]), colormap(jet), colorbar
subplot(6,1,3), imshow(abs(J3),[]), colormap(jet), colorbar
subplot(6,1,4), imshow(abs(J4),[]), colormap(jet), colorbar
subplot(6,1,5), imshow(abs(J5),[]), colormap(jet), colorbar
subplot(6,1,6), imshow(abs(J6),[]), colormap(jet), colorbar

Result
Figure: the 10x10 images and their 10x10 DCT coefficient matrices, for the images with different vertical and horizontal frequencies: the constant (DC only) image, the image with low vertical frequency, the image with low horizontal frequency, the image with higher vertical frequency, and the image with both high vertical and horizontal spatial frequency.
Observe the distribution of the DCT coefficients stored in the variables J1, J2 and J6.
J1 =
   1.0e+003 *
   The only significant coefficient is J1(1,1) = 2.5500; all other entries are
   effectively zero, since the constant image M1 has only a DC component.

J2 =
   1.0e+003 *
   The significant coefficients lie in the first column:
   1.2750  1.1473  -0.0044  -0.3941  -0.0032  0.2490  0.0010  -0.2045  -0.0032  0.1872
   All other columns are effectively zero, i.e. the energy is concentrated in the
   vertical frequency direction.

J6 =
   1.0e+003 *
   Significant coefficients are spread over both the rows and the columns of the
   matrix (the first column, for example, is
   1.3238  -0.0002  0.0847  -0.0003  0.1882  -0.0006  -0.1373  -0.0007  -0.0284  0.0004),
   reflecting the high vertical and horizontal spatial frequency content of M6.


2.2 Inverse 2D- DCT transform
The following code sets a DCT coefficient to zero if its magnitude is less than 10. The modified DCT coefficients are then used to reconstruct the image. Compare the original image and the reconstructed image.

% Apply the inverse DCT
figure(3);
J6(abs(J6) < 10) = 0;    % set small DCT coefficients to zero
I6_rec = idct2(J6);      % reconstruct the image from the modified coefficients
imshow(I6_rec, []);
