Wavelets and Edge Detection
CS698 Final Project

Submitted To: Professor Richard Mann
Submitted By: Steve Hanov
Course: CS698
Date: April 10, 2006

INTRODUCTION
Wavelets have had a relatively short and troubled history. They seem to be forever confined to footnotes in textbooks on Fourier theory, and it seems that there is little that can be done with wavelets that cannot be done with traditional Fourier analysis. Stephane Mallat was not the father of wavelet theory, but he is certainly an evangelist. His textbook on the subject, A Wavelet Tour of Signal Processing [1], contains proofs about the theory of wavelets and a summary of what is known about them, with applications to signal processing. One of his many papers, Characterization of Signals from Multiscale Edges [2], is frequently cited as a link between wavelets and edge detection. Mallat's method not only finds edges, but classifies them into different types as well. Mallat goes on to describe a method of recovering complete images using only their edges, but we do not implement it in this project. In this project, we study this paper and implement Mallat's method of multiscale edge detection and analysis. We first present a short background on wavelet theory. Then we describe the different types of edges that exist in images, and how they can be characterized using a Lipschitz constant. Next, we describe the algorithm for the wavelet transform from the Mallat paper. Finally, we show the results of applying the algorithm to a test image and a real image.

WAVELET ANALYSIS

THEORY

It is best to describe wavelets by showing how they differ from Fourier methods. A signal in the time domain is described by a function f(t), where t is usually a moment in time. When we apply the Fourier transform to the signal, we obtain a function F(ω) that takes a frequency as input and outputs a complex number describing the strength of that frequency in the original signal. The real part is the strength of the cosine at that frequency, and the imaginary part is the strength of the sine. One way to obtain the Fourier transform of a signal is to repeatedly correlate sine and cosine waves with the signal. Where the correlation is high, the Fourier coefficients are high; where the signal or the wave is close to zero, the coefficients are low.

Fourier analysis has a big problem, however. The sine and cosine functions are defined from −∞ to +∞, so the effect of each frequency is analyzed as if it were spread over the entire signal. For most signals this is not the case. Consider music, which varies continuously in pitch: Fourier analysis of the entire song tells you which frequencies exist, but not where they occur. The short-time Fourier transform (STFT) is often used when the frequencies of the signal vary greatly with time [3]. In the JPEG image encoding standard, for example, the image is first broken up into small blocks, and the transform is applied not to the entire image but to each of these blocks. The disadvantage of this technique can be seen at high compression ratios, when the outlines of the blocks become clearly visible artifacts. A second disadvantage is the resolution of the analysis. When larger windows are used, lower frequencies can be detected, but their position in time is less certain. With a smaller window, the position can be determined with greater accuracy, but lower frequencies will not be detected.

The wavelet transform helps solve this problem. Applied to a function f(t), it provides a set of functions W_s f(t). Each function describes the strength of a wavelet scaled by a factor s at time t. The wavelet extends for only a short period, so its effects are limited to the area immediately surrounding t. The wavelet transform therefore gives information about the strengths of the frequencies of a signal at time t. In the first pages of his treatise [1], Mallat defines a wavelet as a function of zero average,

∫_{−∞}^{+∞} ψ(t) dt = 0,

which is dilated with a scale parameter s, and translated by u:

ψ_{u,s}(t) = (1/√s) ψ((t − u)/s)

Unlike the sine and cosine functions, wavelets quickly approach zero as t goes to ±∞. In [2], Mallat notes that the derivative of a smoothing function is a wavelet with good properties. Such a wavelet is shown in Figure 1.
Figure 1: A smoothing function, and its corresponding wavelet.
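To make this concrete, here is a small sketch (not code from the report) that builds a Gaussian as the smoothing function and takes its derivative as the wavelet, in the spirit of Figure 1; the Gaussian width sigma is an arbitrary choice.

```python
import numpy as np

def smoothing_function(t, sigma=1.0):
    """Gaussian smoothing function theta(t)."""
    return np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def wavelet(t, sigma=1.0):
    """Wavelet psi(t) = d/dt theta(t): the derivative of the smoothing function."""
    return -t / sigma**2 * smoothing_function(t, sigma)

t = np.linspace(-6, 6, 601)
theta = smoothing_function(t)
psi = wavelet(t)

# A wavelet must have zero average; check it numerically.
print("integral of psi:", np.trapz(psi, t))   # ~0
```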

By correlating the signal with this function at all possible translations and scales, we obtain the continuous wavelet transform. The transformation also increases the dimension of the function by one. Since we have both a scaling and position parameter, a 1-D signal will have a 2D wavelet transform. As an illustration, in Figure 2 we show the wavelet transform of a single scan line of an image, calculated using the algorithm in [2] (See Appendix A). The frequencies decrease from top to bottom, and pixel position increases from left to right. The edges in the signal result in funnel-shaped patterns in the wavelet transform.

  
Figure 2: The 512th scanline of the famous Lena image, and its wavelet transform. Pixel position increases from left to right, and frequency increases from bottom to top. Only nine scales were used, but they are stretched to simulate a continuous transform, which is more illustrative.
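For readers who want to experiment, here is a minimal sketch of an undecimated dyadic transform of a 1-D scan line, built by convolving the signal with dilated derivative-of-Gaussian wavelets. It is only a stand-in for the filter-bank algorithm of [2] referenced above (see Appendix A); the wavelet, the normalization, and the six-level depth (the report uses nine scales) are choices made for this sketch so the explicit kernels stay shorter than a 512-sample line.

```python
import numpy as np

def dog_wavelet(scale, half_width=6.0):
    """Derivative-of-Gaussian wavelet, sampled on a grid that grows with the scale."""
    t = np.arange(-half_width * scale, half_width * scale + 1)
    g = np.exp(-t**2 / (2.0 * scale**2))
    psi = -t / scale**2 * g
    return psi / np.abs(psi).sum()   # rough normalization so responses are comparable

def dyadic_wavelet_transform(signal, levels=6):
    """Correlate the signal with the wavelet dilated to scales 1, 2, 4, ..., 2**(levels-1)."""
    rows = []
    for j in range(levels):
        psi = dog_wavelet(2 ** j)
        # mode="same" keeps one coefficient per pixel position (no downsampling).
        rows.append(np.convolve(signal, psi, mode="same"))
    return np.array(rows)            # shape: (levels, len(signal))

# Synthetic 512-pixel "scan line" with one step edge at position 256.
line = np.concatenate([np.full(256, 50.0), np.full(256, 200.0)])
W = dyadic_wavelet_transform(line)
print(W.shape)                       # (6, 512)
```

A step edge produces coefficients of the same sign at every scale around position 256, which is exactly the funnel pattern visible in Figure 2.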

Like the Fourier transform, the wavelet transform is invertible. However, it is easier to throw away information based on position. In the Fourier domain, if you were to try to eliminate noise by simply throwing away all of the information in a certain frequency band, you would get back a smoothed signal with rounded corners, because every frequency contributes to structures in all parts of the signal. With the wavelet transform, however, it is possible to selectively throw out high frequencies in areas where they do not contribute to larger structures. Indeed, this is the idea behind wavelet compression. Here is the scan line from the Lena image with the highest-frequency wavelet coefficients removed:

[Plot: reconstructed scan line, intensity 0–250 against pixel position 0–600]
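The removal just shown can be imitated with an ordinary discrete wavelet transform. The sketch below assumes the PyWavelets package (pywt) and the db2 wavelet, neither of which is used in the report (which relies on the redundant transform of [2]); it decomposes the line, zeroes the finest detail band(s), and reconstructs.

```python
import numpy as np
import pywt  # PyWavelets; an assumption of this sketch, not a dependency of the report

def remove_finest_bands(signal, bands_to_zero=1, wavelet="db2"):
    """Zero the `bands_to_zero` highest-frequency detail bands, then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet)          # [cA_n, cD_n, ..., cD2, cD1]
    for i in range(1, bands_to_zero + 1):
        coeffs[-i] = np.zeros_like(coeffs[-i])      # cD1 is the finest band
    return pywt.waverec(coeffs, wavelet)

# A noisy 512-pixel line with a step edge, standing in for the Lena scan line.
line = np.concatenate([np.full(256, 50.0), np.full(256, 200.0)])
line += np.random.normal(0.0, 5.0, line.size)

smoothed_one = remove_finest_bands(line, bands_to_zero=1)    # highest band removed
smoothed_three = remove_finest_bands(line, bands_to_zero=3)  # three highest bands removed
```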

The signal keeps the same structure as the original, but is smoother. Here is the same signal with the three highest dyadic¹ frequency bands removed:

[Plot: reconstructed scan line, intensity 0–250 against pixel position 0–600]

The signal is smoother, but the edges are rounder. So far, this frequency removal is equivalent to smoothing the signal with a Gaussian. The true power of the wavelet transform is revealed, however, when we selectively remove wavelet coefficients from the first three dyadic frequency bands only in positions where they are weak (in this case, smaller in magnitude than 20):

[Plot: reconstructed scan line, intensity 0–250 against pixel position 0–600]

Here, the signal retains much of its original character. Most edges remain sharp. This simple algorithm for noise removal could be improved further if it did not change the weak coefficients in areas where they contribute to the larger structure. To do this, one would need to consider the coefficients across all scales, and determine the positions of the edges of the signal. In his paper, Mallat presents a way to do just that.
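Before moving on, here is a sketch of the simple thresholding experiment described above, again using PyWavelets as a stand-in for the redundant transform of [2]; the band count and the threshold of 20 follow the text, while the db2 wavelet is an arbitrary choice. Weak coefficients in the three finest bands are zeroed; strong ones, which line up with edges, are kept.

```python
import numpy as np
import pywt  # assumption of this sketch; the report uses the transform from [2]

def threshold_fine_bands(signal, bands=3, threshold=20.0, wavelet="db2"):
    """Zero only the weak coefficients (|c| < threshold) in the `bands` finest bands."""
    coeffs = pywt.wavedec(signal, wavelet)
    for i in range(1, bands + 1):
        c = coeffs[-i]
        coeffs[-i] = np.where(np.abs(c) < threshold, 0.0, c)   # hard threshold
    return pywt.waverec(coeffs, wavelet)

line = np.concatenate([np.full(256, 50.0), np.full(256, 200.0)])
line += np.random.normal(0.0, 5.0, line.size)
denoised = threshold_fine_bands(line)   # flat regions smooth out, the edge stays sharp
```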

WAVELET TRANSFORM TYPES

There are numerous types of wavelet transforms. The first is the continuous wavelet transform (CWT). Despite its name, the CWT can be calculated on discrete data. All possible scaling factors are used, starting at 1 and increasing to the number of samples in the signal. However, the CWT is computationally expensive, and for most applications a dyadic method is used instead.

In the dyadic wavelet transform, we use only scales that are powers of two. With the careful choice of an appropriate wavelet, this covers the entire frequency domain. At the scale s=1, the image is smoothed by convolving it with a smoothing function. At scale s=2, the smoothing function is stretched, and the image is convolved with it again. The process is repeated for s=4, s=8, and so on, until the smoothing function is as large as the image. At each level, the wavelet transform contains information for every position t in the image. This is the method used by Mallat. Most applications today, however, use an even more efficient method: since the image is smoothed at each step by a filter, it retains only half of the frequency information and needs only half as many samples, so the number of samples is reduced at each stage as well. As a result, the wavelet transform has the same size as the original signal. Mallat avoids this optimization because he needs the redundant information to recover the image from only its modulus maxima (edges).

CHARACTERIZATION OF EDGES

When the wavelet transform is computed with such a wavelet (the derivative of a smoothing function), it is equivalent to Canny edge detection [4]. The derivative of a Gaussian is convolved with the image, so that local maxima and minima of the transform correspond to edges. Note Figure 2, in which large drops in intensity are characterized by black funnels, and jumps result in white funnels. It is clear that by examining the wavelet transform we can extract a lot of information about the edges: whether an edge is a gradual change or a leap, a giant cliff or a momentary spike, can be seen from the wavelet representation alone. Edges are characterized mathematically by their Lipschitz regularity. We say that a function f is uniformly Lipschitz α over the interval (a, b) if, and only if, there is a constant K such that for every x, x₀ in the interval:

|f(x) − f(x₀)| ≤ K |x − x₀|^α

¹ Dyadic: based on successive powers of two.

Intuitively, the constant K bounds how steeply the function can change over the interval [5]. Mallat shows that Lipschitz regularity is related to the wavelet transform: if a function is uniformly Lipschitz α over an interval, then its dyadic wavelet transform there satisfies

|W_{2^j} f(x)| ≤ K (2^j)^α
The conclusions are summarized in the following table.

α constraint    Meaning    Impact on wavelet transform
0 < α           …          Amplitude decreases with scale.
…               …          Amplitude remains the same across scales.
…               …          Amplitude decreases with scale.
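To illustrate how the table is used, the sketch below measures how the wavelet-transform amplitude near an edge decays across dyadic scales and fits the slope of log2 of that amplitude against j, which by the inequality above gives a rough estimate of α. It reuses the derivative-of-Gaussian wavelet and undecimated transform from the earlier sketches and is only an idealized stand-in for the modulus-maxima procedure of [2]; every function and parameter here is an assumption of the sketch.

```python
import numpy as np

def dog_wavelet(scale, half_width=6.0):
    """Derivative-of-Gaussian wavelet sampled on a grid that grows with the scale."""
    t = np.arange(-half_width * scale, half_width * scale + 1)
    g = np.exp(-t**2 / (2.0 * scale**2))
    psi = -t / scale**2 * g
    return psi / np.abs(psi).sum()

def estimate_alpha(signal, position, levels=5):
    """Estimate Lipschitz alpha at `position` from amplitude decay across scales 2**j."""
    amplitudes = []
    for j in range(levels):
        s = 2 ** j
        W = np.convolve(signal, dog_wavelet(s), mode="same")
        # Take the largest |W| in a small neighbourhood: the modulus maximum near the edge.
        lo, hi = max(0, position - s), min(len(signal), position + s + 1)
        amplitudes.append(np.abs(W[lo:hi]).max())
    # |W_{2^j} f| <= K (2^j)^alpha  =>  log2|W| ~ alpha * j + log2(K)
    slope, _ = np.polyfit(np.arange(levels), np.log2(amplitudes), 1)
    return slope

# A step edge should give alpha close to 0 (amplitude roughly constant across scales).
step = np.concatenate([np.full(256, 50.0), np.full(256, 200.0)])
print(estimate_alpha(step, 256))
```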
