S. Bhavani et al. / (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 05, 2010, 1429-1434

A Survey On Coding Algorithms In Medical Image Compression
S. Bhavani
Department of ECE, Sri Shakthi Institute of Engineering & Technology, Coimbatore, India

Dr. K. Thanushkodi
Director, Akshaya College of Engineering & Technology, Coimbatore, India

Abstract— In medical imaging, lossy compression schemes are avoided because of the possible loss of useful clinical information and because operations such as enhancement may further degrade lossily compressed images. Hence there is a need for efficient lossless schemes for medical image data. Several lossless schemes based on linear prediction and interpolation have been proposed. Context-based approaches have recently gained popularity, since they can enhance the performance of these schemes by exploiting the correlation within a frame. Since the conception of mesh-based compression in the early 1990s, many algorithms have been proposed, and the research literature on this topic has grown so rapidly that it is hard to keep track of. This paper gives a brief description of the various coding algorithms and advancements in this field.

Keywords— Image compression, prediction, medical image, context-based modeling, object-based coding, wavelet transform.

I. INTRODUCTION

Compression methods are important in many medical applications to ensure fast interactivity when browsing large sets of images (e.g., volumetric data sets and image databases), for searching context-dependent images, and for the quantitative analysis of measured data. Medical data are increasingly represented in digital form, with imaging techniques such as magnetic resonance (MR), computerized tomography (CT) and positron emission tomography (PET) widely available. The limitations in transmission bandwidth and storage space on one side, and the growing size of image datasets on the other, have necessitated efficient compression methods and tools for their implementation. Lossless compression includes run-length coding, dictionary coding, transform coding and entropy coding. Entropy coding includes Huffman coding, a simple entropy code commonly used as the final stage of compression; arithmetic coding; Golomb coding, a simple entropy code for infinite input data with a geometric distribution; and universal coding, an entropy code for infinite input data with an arbitrary distribution. Lossy compression includes the discrete cosine transform, fractal compression, wavelet compression, vector quantization and linear predictive coding.

Lossless image compression schemes often consist of two distinct and independent components: modeling and coding. The modeling part can be formulated as one in which an image is observed pixel by pixel in some predefined order. In state-of-the-art lossless image compression schemes, the probability assignment is generally divided into i) a prediction step, in which a deterministic value is guessed for the next pixel based on a subset of the available past sequence; ii) the determination of a context as a function of the past subsequence; and iii) a probabilistic model for the prediction residue, conditioned on the context. The optimization of these steps inspired the idea of universal modeling [1]. In that scheme, the prediction step is accomplished with an adaptively optimized, context-dependent linear predictor, the statistical modeling is performed with an optimized number of parameters (variable-size quantized contexts), and the modeled prediction residuals are arithmetic encoded to attain the ideal code length [2]. In [3], some of the optimizations performed in [1] are avoided with no deterioration in compression ratio. A minimal sketch of this three-step structure is given at the end of this introduction.

Compared with previous survey papers, this work attempts to achieve comprehensive data coverage together with an analysis and comparison of coding performance and complexity. Coding efficiency is compared between different schemes to help practising engineers select a scheme based on application requirements. The compression methods can be classified into four types: statistical compression, which codes the image based on the gray levels of the pixels in the whole image (e.g., binary coding, Huffman coding, shift codes); spatial compression, which codes an image based on the spatial relationship between the pixels in the whole image (e.g., run-length coding); quantizing compression, where compression is obtained by reducing the resolution or the number of gray levels available; and fractal compression, where the image is coded as a set of parameters to fractal-generating functions. The coding algorithms for the compression of medical images and the associated error metrics are described below, followed by a summary comparing tests and results. The concluding remarks are followed by the acknowledgement and references.
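As a concrete illustration of the prediction/context/residual decomposition described above, the following sketch applies a simple fixed median-edge-detector predictor to a grayscale image and collects the residuals together with a crude gradient-sign context. It is a toy model under assumed inputs, not the coder of [1]-[3], and the entropy-coding stage is omitted.

```python
import numpy as np

def med_predict(a, b, c):
    """Median edge detector: predict a pixel from its left (a), upper (b)
    and upper-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def predict_residuals(img):
    """Prediction residuals plus a crude sign-of-gradient context for each
    pixel of a grayscale image (toy sketch of the three-step decomposition)."""
    img = img.astype(np.int32)
    h, w = img.shape
    residuals, contexts = [], []
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0
            b = int(img[y - 1, x]) if y > 0 else 0
            c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0
            residuals.append(int(img[y, x]) - med_predict(a, b, c))
            # context: signs of the two local gradients, a stand-in for the
            # variable-size quantized contexts of universal modeling
            contexts.append((int(np.sign(b - c)), int(np.sign(c - a))))
    return np.asarray(residuals), contexts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
    res, ctx = predict_residuals(toy)
    print("mean |residual|:", float(np.abs(res).mean()),
          "distinct contexts:", len(set(ctx)))
```

In an actual coder, the residuals collected under each context would drive an adaptive arithmetic or Golomb coder, which is the stage omitted here.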

II. CODING ALGORITHMS

A. Need for Coding

A common characteristic of most images is that neighboring pixels are correlated and therefore contain redundant information. The foremost task, then, is to find a less correlated representation of the image.


Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source (image or video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System. In general, three types of redundancy can be identified:
- Spatial redundancy, or correlation between neighboring pixel values.
- Spectral redundancy, or correlation between different color planes or spectral bands.
- Temporal redundancy, or correlation between adjacent frames in a sequence of images (in video applications).

The compression techniques are classified as lossy/lossless compression and predictive/transform compression. a) Lossless vs. lossy compression: In lossless compression schemes, the reconstructed image after compression is numerically identical to the original image; however, lossless compression can only achieve a modest amount of compression. An image reconstructed after lossy compression contains degradation relative to the original, often because the compression scheme completely discards redundant information. Lossy schemes, however, are capable of achieving much higher compression, and under normal viewing conditions no visible loss is perceived (visually lossless). b) Predictive vs. transform coding: In predictive coding, information already sent or available is used to predict future values, and the difference is coded. Since this is done in the image or spatial domain, it is relatively simple to implement and is readily adapted to local image characteristics. Differential Pulse Code Modulation (DPCM) is one particular example of predictive coding. Transform coding, on the other hand, first transforms the image from its spatial-domain representation to a different representation using some well-known transform and then codes the transformed values (coefficients). This method provides greater data compression than predictive methods, although at the expense of greater computation.
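To make the transform-coding idea concrete, the sketch below applies an 8x8 block DCT to an image and keeps only the largest coefficients of each block before inverting. The block size, the keep count and the synthetic test image are illustrative assumptions, not parameters of any scheme discussed in this survey.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    mat[0, :] *= 1 / np.sqrt(2)
    return mat * np.sqrt(2 / n)

def block_transform_code(img, keep=10):
    """Toy transform coder: 8x8 DCT per block, keep the `keep` largest
    coefficients of each block, then inverse-transform to reconstruct."""
    d = dct_matrix(8)
    h, w = (s - s % 8 for s in img.shape)
    img = img[:h, :w].astype(np.float64)
    out = np.zeros_like(img)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            coeff = d @ img[y:y+8, x:x+8] @ d.T          # forward 2-D DCT
            thr = np.sort(np.abs(coeff), axis=None)[-keep]
            coeff[np.abs(coeff) < thr] = 0.0             # crude "quantization"
            out[y:y+8, x:x+8] = d.T @ coeff @ d          # inverse 2-D DCT
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    test = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
    rec = block_transform_code(test, keep=16)
    print("mean abs reconstruction error:", np.abs(test - rec).mean())
```

The energy compaction of the transform is what allows most small coefficients to be discarded or coarsely quantized with little visible loss, which is the basis of the JPEG-style coding discussed in the next section.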

III. FRACTAL CODING

JPEG coding is a commonly used standard method of compressing images. However, in its decoded images quantization noise is sometimes visible in high-frequency regions, such as the edges of objects. Fractal image compression can reproduce such image elements at a high compression rate. Fractal image compression methods exploit the self-similarity among image elements at various scales to implement compression through the formation of a partitioned iterated function system [28]. Fractal compression is a lossy image compression method that achieves high levels of compression; it relies on the fact that, in certain images, parts of the image resemble other parts of the same image.

Many algorithms have been proposed to compress 3D meshes efficiently since the early 1990s. In this survey we examine 3D mesh compression technologies developed over the last decade, with the main focus on triangular mesh compression. Single-rate compression is a typical mesh compression approach which encodes connectivity data and geometry data separately. Most early work focused on connectivity coding, with the coding order of the geometry data determined by the underlying connectivity coding. However, since geometry data demand more bits than topology data, several methods have recently been proposed for efficient compression of geometry data without reference to topology data. The existing single-rate connectivity compression algorithms can be classified into six classes: the indexed face set, the triangle strip, the spanning tree, the layered decomposition, the valence-driven approach, and the triangle conquest. The state-of-the-art connectivity coding schemes require only a few bits per vertex, and their performance is regarded as very close to optimal. In contrast, geometry coding received much less attention in the past; since geometry data dominate the total compressed mesh data, more focus has recently shifted to geometry coding. All single-rate mesh compression schemes encode connectivity data losslessly, since connectivity is a discrete mesh property, whereas geometry data are generally encoded in a lossy manner. To exploit the high correlation between the positions of adjacent vertices, most single-rate geometry compression schemes follow a three-step procedure: pre-quantization of vertex positions, prediction of the quantized positions, and entropy coding of the prediction residuals. The valence (or degree) of a vertex is the number of edges incident on that vertex. It can be shown that the sum of valences is twice the number of edges; thus, in a typical triangular mesh, the average vertex valence is 6. When reporting compression performance, some papers employ the measure of bits per triangle (bpt) while others use bits per vertex (bpv). For consistency, we adopt the bpv measure exclusively and convert the bpt metric to the bpv metric by assuming that a mesh has twice as many triangles as vertices (see the sketch below).

Most 3D mesh compression algorithms focus on triangular meshes. To handle polygonal meshes, they triangulate polygons before compression. However, there are several disadvantages in this approach. First, the triangulation process imposes an extra cost in computation and efficiency. Second, the original connectivity information may be lost. Third, attributes associated with vertices or faces may require duplicated encoding. To address these problems, several algorithms have been proposed to encode polygonal meshes directly, without pre-triangulation.

The field of volume visualization has received much attention and made substantial progress recently. Its main applications include medical diagnostic data representation and physical phenomenon modeling. Tetrahedral meshes are popularly used to represent volume data, since they are suitable for irregularly sampled data and facilitate multiresolution analysis and visibility sorting. A tetrahedral mesh is typically represented by two tables: the vertex table, which records the position and the attributes (such as temperature or pressure) of each vertex, and the tetrahedron table, which stores a quadruple of four vertex indices for each tetrahedron.

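The counting relations quoted above (the sum of valences equals twice the number of edges, and a large closed triangular mesh has roughly twice as many triangles as vertices, so bpv is roughly 2 x bpt) can be checked on a toy mesh. The cube below is an illustrative example, not data from any surveyed paper.

```python
import numpy as np

def mesh_statistics(triangles, num_vertices):
    """Average vertex valence and edge count for a triangular mesh given as
    an array of vertex-index triples (toy illustration of the counting rules)."""
    edges = set()
    valence = np.zeros(num_vertices, dtype=int)
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    for u, v in edges:
        valence[u] += 1
        valence[v] += 1
    assert valence.sum() == 2 * len(edges)   # sum of valences = 2 * |E|
    return valence.mean(), len(edges)

def bpt_to_bpv(bits_per_triangle):
    """Convert bits/triangle to bits/vertex assuming T is about 2V."""
    return 2.0 * bits_per_triangle

if __name__ == "__main__":
    # 12-triangle surface of a cube: 8 vertices, 18 edges
    cube = np.array([[0, 1, 2], [0, 2, 3], [4, 6, 5], [4, 7, 6],
                     [0, 4, 5], [0, 5, 1], [1, 5, 6], [1, 6, 2],
                     [2, 6, 7], [2, 7, 3], [3, 7, 4], [3, 4, 0]])
    avg_val, n_edges = mesh_statistics(cube, 8)
    print("average valence:", avg_val, "edges:", n_edges,
          "bpv for 3 bpt:", bpt_to_bpv(3))
```

For this small closed mesh the average valence is 4.5 rather than 6, consistent with 2E/V = 6 - 12/V for a genus-zero surface; the value approaches 6 as the mesh grows.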
A tetrahedral mesh often requires an enormous amount of storage even at a moderate resolution. This huge storage requirement puts a great burden on storage, communication, and rendering systems, so efficient compression schemes are necessary; general polygonal mesh compression schemes often fail to yield satisfying performance on such data.

A context-based, adaptive, lossless image codec (CALIC) that obtains higher lossless compression of continuous-tone images than other techniques has been reported in the literature [18]. Its high coding efficiency is accomplished with relatively low time and space complexity. CALIC puts heavy emphasis on image data modeling. A unique feature of CALIC is the use of a large number of modeling contexts to condition a non-linear predictor and make it adaptive to varying source statistics; the non-linear predictor adapts via an error-feedback mechanism. In this adaptation process, CALIC only estimates the expectation of prediction errors conditioned on a large number of contexts, rather than estimating a large number of conditional error probabilities.

IV. REVERSIBLE CODING

Static, semi-adaptive, and adaptive methods have been proposed for reversible coding, according to the organization of the source model. Magnetic resonance (MR) images have different statistical characteristics in the foreground and the background, and separation is thus a promising path for reversible MR image compression, as addressed in [25]. In object-based coding, the different objects present in a scene are assigned priorities in the encoding process based on their importance in the framework of the considered application [4]. Prior knowledge about the image contents makes such approaches particularly suitable for medical images, and object-based algorithms are also well suited to being combined with modeling techniques. Medical images usually consist of a region representing the part of the body under investigation (e.g., the heart in a CT or MRI chest scan) on an often noisy background of no diagnostic interest. It is therefore natural to process such data in an object-based framework, assigning high priority to objects of interest, to be retrieved losslessly, and low priority to irrelevant objects. Even though some authors have addressed object-based coding for medical images [5]-[7], the approach still deserves further investigation. A fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for bit-rate allocation is addressed in [4], where the data are first decorrelated via a 3-D discrete wavelet transform; the implementation via the lifting-steps scheme allows integer-to-integer mapping, enabling lossless coding and facilitating the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bit stream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality.
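The object-based idea above, lossless treatment of the diagnostically relevant region and coarse treatment of the background, can be sketched as follows. The intensity threshold used to separate foreground from background and the uniform quantization of the background are simplifying assumptions for illustration, not the segmentation or bit-allocation strategy of [4]-[7].

```python
import numpy as np

def split_by_region(img, threshold=30):
    """Crude foreground/background separation of an MR-like slice by
    intensity threshold (illustrative stand-in for a real segmentation)."""
    return img > threshold            # True where diagnostically relevant

def region_prioritized_code(img, mask, bg_step=16):
    """Keep the region of interest exactly (lossless) and coarsely quantize
    the background (lossy, low priority). Returns the two object streams."""
    roi_values = img[mask]                          # stored losslessly
    background = (img[~mask] // bg_step) * bg_step  # heavy quantization
    return roi_values, background

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    slice_ = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    mask = split_by_region(slice_)
    roi, bg = region_prioritized_code(slice_, mask, bg_step=32)
    print("ROI pixels:", roi.size, "background pixels:", bg.size)
```

In a real codec each stream would be compressed with its own entropy coder, and the bit stream would be segmented per object so that the region of interest can be decoded independently and losslessly.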

V. WAVELET CODING

Wavelet coding is a form of data compression well suited to image compression, the goal being to store image data in as little space as possible in a file. Wavelet compression can be either lossless or lossy. Wavelet-based methods are well suited to representing transients, such as percussion sounds in audio or high-frequency components in two-dimensional images (for example, an image of stars in a night sky). This means that the transient elements of a data signal can be represented with a smaller amount of information than would be the case if some other transform, such as the more widespread discrete cosine transform, had been used. First a wavelet transform is applied. This produces as many coefficients as there are pixels in the image (i.e., there is no compression yet, since it is only a transform). These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients; this principle is called transform coding. The coefficients are then quantized, and the quantized values are entropy coded and/or run-length coded. A minimal one-level transform-and-quantize sketch is given below.

JPEG 2000 is a wavelet-based coding scheme. In encoding, the image and its components are decomposed into rectangular tiles and a wavelet transform is applied to each tile. After quantization, sub-bands of coefficients are collected into rectangular arrays of code blocks. A certain ROI can be encoded with higher image quality, and markers are added to the bit stream for error resilience. Research on JPEG coding [15] has shown that JPEG-LS is simple and easy to implement; it consumes less memory and is faster than JPEG 2000, although JPEG 2000 supports progressive transmission. Given the index of the image of interest along the Z axis, only the relevant portion of the bit stream is decoded, at the desired quality. Selective access to the data can be improved by splitting the image into regions corresponding to objects.

A scheme based on the three-dimensional (3-D) discrete cosine transform (DCT) has been proposed for volumetric data coding [21]. Such techniques fail to provide lossless coding coupled with quality and resolution scalability, which is a significant drawback for medical applications. Hence new compression methods evolved, exploiting quad-tree and block-based coding concepts, layered zero-coding principles, and context-based arithmetic coding. Additionally, a new 3-D DCT-based coding scheme was designed and used for benchmarking. The proposed wavelet-based coding algorithms produce embedded data streams that can be decoded up to the lossless level and support the desired set of functionality constraints. Moreover, objective and subjective quality evaluation on various medical volumetric datasets shows that these algorithms provide competitive lossy and lossless compression results when compared with the state of the art.

A wavelet-based coding system featuring object-based 3D encoding with 2D decoding capabilities was proposed in [8]. In this scheme the improvement in coding efficiency provided by 3D algorithms is obtained at a lower computational cost, with each object encoded independently to generate a self-contained segment of the bit stream. The implementation of the DWT via the lifting-steps scheme in its non-linear integer version, together with the embedding of the encoded information, allows each object of each image to be reconstructed progressively, up to lossless quality. Border artifacts are avoided by encoding some extra coefficients for each object.
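The transform-then-quantize pipeline described above can be illustrated with a single-level 2-D Haar transform written directly in NumPy. The Haar filter, the single decomposition level and the uniform quantizer are simplifying assumptions, not the filter bank or quantizer of JPEG 2000 or of the schemes in [8] and [21].

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (LL, LH, HL, HH sub-bands)."""
    x = img.astype(np.float64)
    # transform rows
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # transform columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

def quantize(band, step=8.0):
    """Uniform quantization of a sub-band; small detail coefficients map to zero."""
    return np.round(band / step) * step

if __name__ == "__main__":
    # smooth synthetic ramp image, so the detail sub-bands are nearly zero
    xx, yy = np.meshgrid(np.arange(64), np.arange(64))
    img = 2.0 * (xx + yy)
    ll, lh, hl, hh = haar2d(img)
    q = [quantize(b) for b in (lh, hl, hh)]
    zeros = sum(int((b == 0).sum()) for b in q)
    print("coefficients produced:", img.size,
          "detail coefficients quantized to zero:", zeros)
```

The transform produces exactly as many coefficients as pixels; the compression comes only afterwards, when the nearly zero detail coefficients are quantized away and entropy coded.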


A new image compression algorithm based on independent embedded block coding with optimized truncation of the embedded bit streams (EBCOT) was proposed in [20]. The algorithm exhibits state-of-the-art compression performance while producing a bit stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. It also lends itself to explicit optimization with respect to MSE as well as to more realistic psychovisual metrics capable of modeling the spatially varying visual masking phenomenon.

Table 1. MSE and PSNR for 3D SPIHT (Cameraman) with different wavelet filters
Wavelet filter   MSE     PSNR (dB)
Bior4.4          18.07   35.56
Sym4             19.48   35.24
Haar             20.34   35.05
Db4              20.39   35.04
Coif4            20.74   34.96
Dmey             21.45   34.82
Rbio4.4          23.58   34.41

Table 2. Compression and PSNR for EZW (Cameraman)
Bits per pixel (bpp)   Compression ratio   PSNR (dB)
0.30                   1.139               24.43
0.13                   1.141               22.82

Table 3. Compression and PSNR for EZW (Lena 256x256)
Bits per pixel (bpp)   Compression ratio   PSNR (dB)
0.31                   0.9838              26.21
0.12                   0.9838              23.53
0.05                   0.9838              21.01

VI. INTERFRAME CODING

As far as MRI is concerned, although a significant amount of redundancy exists between successive frames of MRI data, the structure of the cross-dependence is more complicated, and MRI data contain a large quantity of noise that is uncorrelated from frame to frame. Until now, attempts to use interframe redundancy for coding MR images have been unsuccessful. The main reason for this appears to be twofold: unsuitable interframe estimation models and the thermal noise inherent in magnetic resonance imaging. In the interframe coding method for MR images of [24], the interframe model uses a continuous affine mapping based on (and optimized by) deforming triangles. The inherent noise of MRI is dealt with by using a median filter within the estimation loop. The residue frames are quantized with a zero-tree wavelet coder, which includes arithmetic entropy coding. This method of quantization allows progressive transmission, which, aside from avoiding buffer-control problems, is very attractive in medical imaging applications.

VII. CONTEXT BASED CODING

Content-based means that the search analyzes the actual contents of the image; the term 'content' in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. The existing schemes for 3-D magnetic resonance (MR) images, such as the block-matching method and uniform mesh-based schemes, are inadequate for modeling the motion field of an MR sequence, because the deformation within a mesh element may not be uniform; hence MR image coding using a content-based mesh and context [22] was proposed. It also includes a simple scheme to overcome the aperture problem at edges, where an accurate estimation of motion vectors is not possible. By using context-based modeling, motion compensation yields a better estimate of the next frame and hence a lower entropy of the residue. Two-dimensional (2-D) mesh-based motion compensation preserves neighboring relations (through the connectivity of the mesh) and allows warping transformations between pairs of frames, which effectively eliminates the blocking artifacts common in motion compensation by block matching. However, available 2-D mesh models, whether uniform or non-uniform, enforce connectivity everywhere within a frame, which is clearly not suitable across occlusion boundaries [19]. In the occlusion-adaptive, forward-tracking mesh model, the connectivity of mesh elements (patches) across covered and uncovered region boundaries is broken. This is achieved by allowing no node points within the background to be covered (BTBC) and by refining the mesh structure within the model-failure (MF) regions at each frame. The proposed content-based mesh structure enables a better rendition of the motion (compared with a uniform or hierarchical mesh), and tracking is necessary to avoid the transmission of all node locations at each frame.
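The claim that better motion compensation lowers the entropy of the residue can be checked with the simplest baseline mentioned above, block matching. The exhaustive search range, the block size and the synthetic frames are illustrative assumptions, not the content-based mesh method of [22].

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block-matching motion compensation: for each block of `curr`,
    find the best-matching block in `prev` within +/- `search` pixels and build
    the motion-compensated prediction."""
    h, w = curr.shape
    pred = np.zeros_like(curr)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = curr[y:y+block, x:x+block]
            best, best_err = None, np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev[yy:yy+block, xx:xx+block]
                        err = np.abs(cand - target).sum()
                        if err < best_err:
                            best, best_err = cand, err
            pred[y:y+block, x:x+block] = best
    return pred

def entropy(values):
    """Empirical zeroth-order entropy in bits/sample of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    prev = rng.integers(0, 256, size=(32, 32)).astype(np.int32)
    curr = np.roll(prev, shift=(2, 1), axis=(0, 1))   # shifted copy of the frame
    pred = block_match(prev, curr)
    print("entropy of raw frame:", round(entropy(curr), 2),
          "entropy of residue:", round(entropy(curr - pred), 2))
```

A content-based mesh replaces the rigid per-block translation with warping of adaptively placed patches, which is what tightens the prediction further over deforming anatomy.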

A. Error Metrics

Two of the error metrics used to compare image compression techniques are the Mean Square Error (MSE) and the Peak Signal to Noise Ratio (PSNR). The MSE is the cumulative squared error between the compressed and the original image, whereas the PSNR is a measure of the peak error. Let E be the cumulative squared error between the original and the reconstructed image (the sum over all pixels of the squared difference). Then

MSE = E / (size of image)              (1)
PSNR = 20 log10 (255 / √MSE)           (2)

A lower MSE means less error and, given the inverse relation between MSE and PSNR, translates into a higher PSNR. Logically, a higher PSNR is good because it means that the ratio of signal to noise is higher; here the 'signal' is the original image and the 'noise' is the error in reconstruction. A compression scheme with a lower MSE (and a higher PSNR) can therefore be recognized as the better one. Wavelet-based coding is more robust under transmission and decoding errors and also facilitates progressive transmission of images. In addition, wavelet-based schemes are better matched to the characteristics of the Human Visual System.
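A direct implementation of equations (1) and (2) for 8-bit images is given below; the random original and its noisy copy are placeholders for an original and a reconstructed image.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two images, Eq. (1)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float((diff ** 2).sum() / diff.size)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images, Eq. (2)."""
    m = mse(original, reconstructed)
    return float('inf') if m == 0 else 20.0 * np.log10(peak / np.sqrt(m))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    noisy = np.clip(original + rng.normal(0, 5, original.shape), 0, 255).astype(np.uint8)
    print("MSE:", round(mse(original, noisy), 2),
          "PSNR (dB):", round(psnr(original, noisy), 2))
```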

Because of their inherent multiresolution nature [29], wavelet coding schemes are especially suitable for applications where scalability and tolerable degradation are important. Tables 1-3 summarize the test results for the different coding techniques.

CONCLUSION

In this paper we have surveyed various compression methods and coding techniques, classifying the major algorithms, describing the main ideas behind each category, and comparing their strengths and weaknesses. Roos et al. [10] modeled deformation as "motion" and employed block matching (BMA). This resulted in reduced performance, as such schemes assume deformation due to translation only, whereas the deformation of MR sequences is more complex than mere displacement; the scheme should instead be based on a spatial transformation [19], as proposed by Tekalp. As far as mesh-based compression is concerned, research on single-rate coding seems mature, except for further improvement of geometry coding. Progressive coding was long thought to be inferior to single-rate coding in terms of coding gain; however, high-performance progressive codecs have emerged and often outperform some of the state-of-the-art single-rate codecs. In other words, a progressive mesh representation seems a natural choice that demands no extra burden in the coding process, and there is still room to improve progressive coding to provide better performance at a lower computational cost. Another promising research area is animated-mesh coding and hybrid coding, which was overlooked in the past but is receiving more attention recently. For 3D medical data formed by image sequences, a large amount of storage space is required. Existing schemes for 3D magnetic resonance (MR) images, such as the block-matching method and uniform mesh-based schemes, are inadequate for modeling the motion field of an MR sequence, as the deformation within a mesh element may not be uniform. Hence a combination of adaptive meshes [9], as proposed by Srikanth, and object-based 3D encoding with 2D decoding [8] can be used, and a hybrid scheme can be devised in which the coding supports progressive transmission to achieve effective compression of MR image sequences. Each of these schemes finds use in different applications owing to its unique characteristics. Though a number of coding schemes are available, the need for improved performance and wide commercial usage demands that newer and better techniques be developed.

ACKNOWLEDGMENT

We thank the individuals and institutions who have contributed to and motivated this literature survey. This study would not have been possible if compression researchers did not routinely place their implementation techniques and papers on the Internet for public access.
REFERENCES
[1] M. J. Weinberger, J. Rissanen, and R. B. Arps, "Applications of universal context modeling to lossless compression of grayscale images," IEEE Trans. Image Processing, April 1996.
[2] J. Rissanen, "Universal coding, information, prediction, and estimation," IEEE Trans. Inform. Theory, vol. IT-30, pp. 629-636, July 1984.
[3] X. Wu, N. Memon, and K. Sayood, "A context-based, adaptive, lossless/nearly-lossless coding scheme for continuous tone images," ISO/IEC JTC 1.29.12, 1995.
[4] G. Menegaz and J. P. Thiran, "Lossy to lossless object-based coding of 3-D MRI data," IEEE Trans. Image Process., vol. 11, no. 9, pp. 639-647, Sep. 2002.
[5] V. Vlahakis and R. I. Kitney, "ROI approach to wavelet-based hybrid compression of MR images," in Sixth International Conference on Image Processing and its Applications, 1997, vol. 2, pp. 833-837.
[6] G. Minami, Z. Xiong, A. Wang, P. A. Chou, and S. Mehrotra, "3-D wavelet coding of video with arbitrary regions of support," in Proc. International Conference on Image Processing.
[7] A. Czihó, G. Cazuguel, B. Solaiman, and C. Roux, "Medical image compression using region-of-interest quantization," in Proc. International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), 1998, vol. 20-3, pp. 1277-1280.
[8] G. Menegaz and L. Grewe, "3D/2D object-based coding of head MRI data," in Proc. Int. Conf. Image Processing (ICIP), vol. 1, 2004, pp. 181-184.
[9] R. Srikanth and A. G. Ramakrishnan, "MR image coding using content-based mesh and context," presented at Int. Symp. Signal Processing and Applications, Paris, France, July 1-4.
[10] P. Roos and M. A. Viergever, "Reversible 3-D correlation of medical images," IEEE Trans. Med. Imag., vol. 12, no. 3, pp. 413-420, Sep. 1993.
[11] R. Srikanth and A. G. Ramakrishnan, "Context-based interframe coding of MR images."
[12] M. J. Weinberger, G. Seroussi, and G. Sapiro, "LOCO-I: a low complexity, context-based, lossless image compression algorithm," HP Labs, Palo Alto, CA 94304.
[13] H. Zhang and J. Fritts, "EBCOT coprocessing architecture for JPEG2000."
[14] ISO/IEC 10918-1, Digital compression and coding of continuous-tone still images.
[15] ISO/IEC 14495-1, Lossless and near-lossless coding of continuous-tone still images.
[16] D. A. Clunie, "Lossless compression of grayscale medical images: effectiveness of traditional and state-of-the-art approaches."
[17] S. Wong, L. Zaremba, and D. Gooden, "Applying wavelet transforms with arithmetic coding to radiological image compression," Sep./Oct. 1995.
[18] X. Wu and N. Memon, "Context-based, adaptive, lossless image coding," April 1997.
[19] Y. Altunbasak and A. M. Tekalp, "Occlusion-adaptive, content-based mesh design and forward tracking," IEEE Trans. Image Process., vol. 6, no. 9, pp. 1270-1280, Sep. 1997.
[20] D. Taubman, "High performance scalable image compression with EBCOT," IEEE Trans. Image Process., vol. 9, no. 5, pp. 1158-1170, May 1997.
[21] P. Schelkens, A. Munteanu, J. Barbarien, M. Galca, X. Giro-Nieto, and J. Cornelis, "Wavelet coding of volumetric image datasets," IEEE Trans. Med. Imag., vol. 22, no. 3, pp. 441-458, March 2003.
[22] J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp. 68-73.
[23] I. S. Jacobs and C. P. Bean, "Fine particles, thin films and exchange anisotropy," in Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271-350.
[24] K. Elissa, "Title of paper if known," unpublished.
[25] R. Nicole, "Title of paper with only first word capitalized," J. Name Stand. Abbrev., in press.
[26] Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, "Electron spectroscopy studies on magneto-optical media and plastic substrate interface," IEEE Transl. J. Magn. Japan, vol. 2, pp. 740-741, August 1987 [Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982].
[27] M. Young, The Technical Writer's Handbook. Mill Valley, CA: University Science, 1989.

AUTHORS PROFILE

S. Bhavani, born in Coimbatore in 1968, received the B.E. degree in Electronics and Communication Engineering from V.L.B. Janaki Ammal College of Engineering and Technology, Coimbatore, Tamil Nadu, in 1990 and the M.E. in Applied Electronics from Maharaja Engineering College, Coimbatore, Tamil Nadu, in 2006. Since 1992 she has worked in various disciplines in the Park group of institutions, Coimbatore, and she is currently an Assistant Professor at Sri Shakthi Institute of Engineering and Technology, Coimbatore. She has also been a part-time research scholar in the Department of EEE at Anna University, Coimbatore, since 1997. She is a life member of ISTE and a National Merit Scholarship holder.

Dr. K. Thanushkodi, born in Theni District, Tamil Nadu, India, in 1948, received the B.E. in Electrical and Electronics Engineering from Madras University, Chennai, the M.Sc. (Engg) from Madras University, Chennai, and the Ph.D. in Electrical and Electronics Engineering from Bharathiar University, Coimbatore, in 1972, 1976 and 1991, respectively. His research interests lie in the areas of computer modeling and simulation, computer networking and power systems. He has published 26 technical papers in national and international journals.

