VIDEO FORMATS (MPEG-4)

A SEMINAR REPORT

TABLE OF CONTENTS

ABSTRACT

1 INTRODUCTION
  1.1 ABOUT MPEG-4

2 THE LAYER STRUCTURE FOR THE MPEG-4 TERMINAL

3 OVERALL SYSTEM ARCHITECTURE

4 CLIENT-SERVER MODEL
  4.1 THE MPEG-4 SERVER
  4.2 THE MPEG-4 CLIENT

5 APPLICATIONS OF MPEG-4

6 MPEG-4 ADDRESSES THE NEED FOR

7 REQUIREMENTS FOR THE MPEG-4 VIDEO STANDARD

8 CONCLUSION

BIBLIOGRAPHY

APPENDIX A: POWERPOINT SLIDES

ABSTRACT

The Multimedia Technology Research Center (MTrec) is one of the leading research centers in the world engaged in MPEG-4 research. MPEG-4 is targeted mainly at interactive multimedia applications and became an international standard in 1998. MPEG-4 makes it possible to construct content such as a movie, song, or animation out of multimedia objects. It is a global multimedia standard, delivering professional-quality audio and video streams over a wide range of bandwidths, from cell phone to broadband and beyond. MPEG-4 interactive client-server applications are expected to play an important role in online multimedia services.

1. Introduction

- The Moving Picture Experts Group (MPEG) is a working group under ISO/IEC in charge of developing international standards for the compression, decompression, processing and coded representation of moving pictures, audio and their combination.

- In August 1993 the MPEG group released the so-called MPEG-1 standard for "Coding of moving pictures and associated audio at up to about 1.5 Mbit/s". It was mainly targeted at CD-ROM applications. [1]

- In 1990 MPEG started the so-called MPEG-2 standardization phase. The MPEG-2 standard addresses substantially higher quality for audio and video, with video bit rates between 2 Mbit/s and 30 Mbit/s, primarily focusing on the requirements of digital TV and HDTV applications.

- Anticipating the rapid convergence of the telecommunications, computer and TV/film industries, the MPEG group officially initiated a new MPEG-4 standardization phase in 1994, with the mandate to standardize algorithms for audio-visual coding in multimedia applications, allowing for interactivity, high compression and/or universal accessibility and portability of audio and video content.

- Bit rates targeted for the video standard are between 5 and 64 kbit/s for mobile applications and up to 2 Mbit/s for TV/film applications.

1.1 About MPEG-4

Most existing multimedia services consist of a single audio or natural 2D video stream. MPEG-4, an ISO/IEC standard, provides a broad framework for the joint description, compression, storage and transmission of natural and synthetic audio-visual data. It defines improved compression algorithms for audio and video signals and an efficient object-based representation of audio-visual scenes. Three main features distinguish MPEG-4 from other technologies: its object-based nature, interactivity, and a high degree of compression. MPEG-4 differs from MPEG-2 in a number of ways:

1. It is not designed to be just a video or an audio specification. It is an entire multimedia protocol, with standards for how to stream video, how to synchronize multimedia, and how to manage different data types.

2. It does not treat a multimedia scene as a single entity. Instead, it breaks the picture down further: a sequence can be segmented into objects, and the audio/video objects are then sent in independent streams.

2 The Layer Structure for the MPEG-4 Terminal

In MPEG-4, audio-visual objects are encoded separately into their own Elementary Streams (ESs). The Scene Description (SD), also referred to as the Binary Format for Scenes (BIFS), defines the spatio-temporal features of these objects in the final scene presented to the end user. Object Descriptors (ODs) are used to associate scene description components with the actual elementary streams that contain the corresponding coded media data. ODs carry information on the hierarchical relationships, locations and properties of ESs. The Command Descriptor Framework (CDF) provides a means to associate commands with media objects in the SD.

The MPEG-4 standard defines a three-layer structure for an MPEG-4 terminal: [2]
1. The Compression Layer
2. The Synchronization Layer
3. The Delivery Layer

1. The Compression Layer: The Compression Layer processes the individual audio-visual media streams and organizes them into Access Units (AUs), the smallest elements that can be attributed individual timestamps. The compression layer can be made to react to the characteristics of a particular delivery layer, such as the path MTU or loss characteristics.

2. The Synchronization Layer: The Sync Layer (SL) primarily provides synchronization between streams. AUs are encapsulated here in SL packets; if an AU is larger than an SL packet, it is fragmented across multiple SL packets. The SL produces an SL-packetized stream, i.e. a sequence of SL packets, whose headers contain the timing, sequencing and other information necessary to provide synchronization at the remote end (a small packetization sketch appears after the DMIF primitives below). The packetized streams are then sent to the Delivery Layer.

3. The Delivery Layer: In the MPEG-4 standard, a delivery framework referred to as the Delivery Multimedia Integration Framework (DMIF) is specified at the interface between the MPEG-4 synchronization layer and the network layer. DMIF provides an abstraction between the core MPEG-4 system components and the retrieval methods.

Two levels of primitives are defined in DMIF:
1. One is for communication between the application and the delivery layer, handling all data and control flows.
2. The other handles all message flows in the control plane between DMIF peers.
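To make the AU-to-SL mapping concrete, the following is a minimal, hypothetical sketch: the `SLPacket` fields and the `packetize_au` helper are illustrative only and do not reproduce the syntax defined by the standard. It shows an Access Unit larger than the maximum SL payload being fragmented into a sequence of SL packets whose headers carry sequencing and timing information.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SLPacket:
    """Hypothetical SL packet: a few header fields plus a payload fragment."""
    sequence_number: int
    decoding_timestamp: Optional[int]     # carried on the first fragment only here
    composition_timestamp: Optional[int]
    is_start_of_au: bool
    is_end_of_au: bool
    payload: bytes

def packetize_au(au: bytes, dts: int, cts: int,
                 max_payload: int, start_seq: int = 0) -> List[SLPacket]:
    """Fragment one Access Unit across as many SL packets as needed."""
    fragments = [au[i:i + max_payload] for i in range(0, len(au), max_payload)] or [b""]
    packets = []
    for idx, frag in enumerate(fragments):
        packets.append(SLPacket(
            sequence_number=start_seq + idx,
            decoding_timestamp=dts if idx == 0 else None,
            composition_timestamp=cts if idx == 0 else None,
            is_start_of_au=(idx == 0),
            is_end_of_au=(idx == len(fragments) - 1),
            payload=frag,
        ))
    return packets

# Example: a 3000-byte AU split into 1024-byte SL payloads -> 3 SL packets.
sl_stream = packetize_au(au=bytes(3000), dts=9000, cts=9000, max_payload=1024)
print([(p.sequence_number, p.is_end_of_au, len(p.payload)) for p in sl_stream])
```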

3 Overall System Architecture
The system architecture is shown in Figure 1. It consists of:
1. An MPEG-4 server, which stores encoded multimedia objects and produces MPEG-4 content streams.
2. An MPEG-4 client, which serves as the platform for the composition of an MPEG-4 presentation as requested by the end user.
3. An IP network, which transports all the data between the server and the client.

The essence of MPEG-4 lies in its object-oriented structure. Each object forms an independent entity that may or may not be linked to other objects, spatially and temporally. The SD, the ODs, the media objects and the CDs are transmitted to the client in separate streams. Because of this, the end user at the client side gets tremendous flexibility to interact with the multimedia presentation and manipulate the different media objects. End users can change the spatio-temporal relationships among media objects, turn media objects on or off, or even specify different perceptual quality requirements for different media objects, depending upon the command descriptors associated with each object or group of objects. This results in a more complicated session management and control architecture. The design therefore targets a flexible session management scheme with efficient and adaptive encapsulation of data for QoS provisioning. User interactivity consists of three levels, corresponding to the type of control that is desired (a dispatch sketch follows the list):
1. Presentation-level interactivity, in which a user makes changes to the scene by controlling an individual object or group of objects. It also includes presentation creation.
2. Session-level interactivity, in which a user controls the playback process of the presentation.
3. Local-level interactivity, in which a user makes changes that can be handled locally, e.g., changing the position of an object on the screen, adjusting the volume, etc.
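As a minimal sketch of how a client might route these three levels (the enum, method names and component interfaces below are hypothetical, taken neither from the standard nor from the report), presentation-level events are relayed to the server-side Data Controller, session-level events go to the Session Controller, and local-level events are applied directly on the client:

```python
from enum import Enum, auto

class InteractivityLevel(Enum):
    PRESENTATION = auto()  # changes to the scene/objects, relayed to the server DC
    SESSION = auto()       # playback control (VCR-style), handled by the SC
    LOCAL = auto()         # purely client-side changes, applied by the compositor

def dispatch_user_event(level: InteractivityLevel, event: dict,
                        data_controller, session_controller, compositor) -> None:
    """Route a user event to the component responsible for its interactivity level."""
    if level is InteractivityLevel.PRESENTATION:
        data_controller.relay_to_server(event)            # e.g. add or remove an object
    elif level is InteractivityLevel.SESSION:
        session_controller.send_control_command(event)    # e.g. pause, resume, stop
    else:
        compositor.apply_local_change(event)              # e.g. move object, set volume
```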

[Figure 1: System architecture. The MPEG-4 server and client applications exchange control and data flows through their delivery layers over an IP network.]
The server maintains a database, or list, of available MPEG-4 content and provides WWW access to it. An end user at a remote client retrieves information about the media objects that he or she is interested in and composes a presentation based upon what is available and desired. The system operation, after the end user has completed the composition of the presentation, is summarized as follows (a message-flow sketch is given after the list):

1. The client requests a service by submitting the description of the presentation to the Data Controller (DC) on the server side.

2. The DC on the server side controls the Encoder/Producer module to generate the corresponding SD, ODs, CDs and other media streams, based upon the presentation description information submitted by the end user at the client side. The DC then triggers the Session Controller (SC) on the server side to initiate a session.

3. The SC on the server side is responsible for session initiation, control and termination. It passes the stream information obtained from the DC to the QoS Controller (QC), which manages, in conjunction with the Packer, the creation of the corresponding transport channels with the appropriate QoS provisions.

4. The Messenger Module (MM) on the server side, which handles the communication of control and signaling data, then signals to the client the initiation of the session and the network resource allocation. The encapsulation formats and other information generated by the Packer when "packing" the SL-packetized streams are also signaled to the client to enable it to unpack the data.

5. The actual stream delivery commences after the client indicates that it is ready to receive, and streams flow from the server to the client. After the decoding and composition procedures, the MPEG-4 presentation authored by the end user is rendered on his or her display.
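The following toy sketch illustrates the setup portion of this exchange (steps 1 to 4). The class and field names (`SessionRequest`, `SessionGrant`, `ServerDataController`) and the "SL-over-RTP" encapsulation label are invented for illustration; the report does not define a concrete signaling format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SessionRequest:
    """Presentation description submitted by the client (step 1)."""
    client_id: str
    requested_objects: List[str]                 # e.g. ["video:obj1", "audio:obj1"]
    qos_hints: Dict[str, float] = field(default_factory=dict)

@dataclass
class SessionGrant:
    """Information the Messenger signals back to the client (step 4)."""
    session_id: int
    channels: Dict[str, int]                     # stream name -> transport channel id
    encapsulation: str                           # how the Packer "packed" the SL streams

class ServerDataController:
    """Toy Data Controller driving steps 2-4 of the setup sequence."""
    def __init__(self) -> None:
        self._next_session_id = 1

    def handle_request(self, request: SessionRequest) -> SessionGrant:
        # Step 2: generate the SD, OD, CD and media streams for the request (omitted),
        # then trigger the Session Controller, which coordinates with the QoS Controller.
        session_id = self._next_session_id
        self._next_session_id += 1
        channels = {name: 5000 + i for i, name in enumerate(request.requested_objects)}
        # Step 4: the Messenger would signal this grant (and unpacking info) to the client.
        return SessionGrant(session_id, channels, encapsulation="SL-over-RTP")

grant = ServerDataController().handle_request(
    SessionRequest(client_id="client-42",
                   requested_objects=["scene:bifs", "video:obj1", "audio:obj1"]))
print(grant.session_id, grant.channels)
```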

4 Client–Server Model
4.1 The MPEG-4 Server
Upon receiving a new service request from a client, the MPEG-4 server starts a thread for the client and sets up a session with it. The server maintains a list of sessions established with clients and a list of the associated transport channels and their QoS characteristics. Figure 2 shows the components of the MPEG-4 server. The Encoder/Producer compresses raw video sources in real time or reads out MPEG-4 content stored in MP4 files. The elementary streams produced by the Encoder/Producer are packetized by the SL-Packetizer. The SL-Packetizer adds SL packet headers to the AUs in the elementary streams to achieve intra-object stream synchronization. The headers contain information such as decoding and composition timestamps, clock references, padding indication, etc. The whole process is scheduled and controlled by the DC. The DC is responsible for several functions:

1. It responds to control messages that it receives from the client-side DC. These messages include the description of the presentation composed by the user at the client side and the presentation-level control commands issued by the remote client DC as a result of user interactions.

2. It communicates with the SC to initiate a session. It also sends the SC session update information as it receives user interactivity commands and makes the appropriate SD and OD changes.

3. It controls the Encoder/Producer and the SL-Packetizer to generate and packetize the content requested by the client.

4. It schedules audio-visual objects under resource constraints. With reference to the System Decoder Model, the AUs must arrive at the client terminal before their decoding time. Efficient scheduling must be applied to meet this timing requirement and also satisfy the delay tolerances and delivery priorities of the different objects (a deadline-driven scheduling sketch is given after Figure 2).

[Figure 2: Structure of the MPEG-4 server. Raw video sources and local MP4 files feed the Encoder/Producer and SL-Packetizer, which pass data to the Packer; the Data Controller, Session Controller, QoS Controller and Messenger handle the control flow.]
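One plausible way to realize responsibility 4 is an earliest-deadline-first policy biased by per-object priority. The sketch below is purely illustrative and is not the scheduling algorithm of the report's server; all names and the 50 ms priority bias are assumptions.

```python
import heapq
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(order=True)
class ScheduledAU:
    sort_key: float                                   # effective deadline; heapq pops smallest
    stream_id: str = field(compare=False)
    decoding_deadline: float = field(compare=False)   # seconds on the client clock
    size_bytes: int = field(compare=False)

class AUScheduler:
    """Toy earliest-deadline-first scheduler with a per-stream priority bias."""
    def __init__(self) -> None:
        self._queue: List[ScheduledAU] = []

    def submit(self, stream_id: str, deadline: float,
               size_bytes: int, priority: int = 0) -> None:
        # A higher priority advances the effective deadline slightly (50 ms per level).
        effective_deadline = deadline - 0.050 * priority
        heapq.heappush(self._queue,
                       ScheduledAU(effective_deadline, stream_id, deadline, size_bytes))

    def next_to_send(self) -> Optional[ScheduledAU]:
        return heapq.heappop(self._queue) if self._queue else None

scheduler = AUScheduler()
scheduler.submit("audio:obj1", deadline=1.00, size_bytes=400, priority=2)
scheduler.submit("video:obj1", deadline=0.95, size_bytes=8000, priority=0)
au = scheduler.next_to_send()
print(au.stream_id, au.decoding_deadline)   # audio wins: 1.00 - 0.10 < 0.95
```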

The SC is responsible for several functions:
1. When triggered by the DC for session initiation, it coordinates with the QC to set up and maintain the numerous transport channels associated with the SL-packetized streams.
2. It maintains session state information and updates it whenever it receives changes from the DC resulting from user interactivity.
3. It responds to control messages sent to it by the client-side SC. These messages include the VCR-type commands that the user can use to control the session.

4.2 The MPEG-4 Client
The architectural design of the MPEG-4 client is based upon the MPEG-4 System Decoder Model (SDM), which is defined to achieve media synchronization, buffer management and timing when reconstructing the compressed media data. Figure 3 illustrates the components of the MPEG-4 client. The SL Manager is responsible for binding the received ESs to decoding buffers. The SL-Depacketizer extracts the ESs received from the Unpacker and passes them to the associated decoding buffers. The corresponding decoders then decode the data in the decoding buffers and produce Composition Units (CUs), which are placed into composition memories to be processed by the compositor. The User Event Handler module handles user interactivity: it filters the user interactivity commands and passes the messages along to the DC and the SC for processing. The DC at the client side has the following responsibilities (a pipeline sketch follows this list):

1. It controls the decoding and composition process. It collects all the information necessary for the decoding process, e.g., the size of the decoding buffers, which is specified in the decoder configuration descriptors and signaled to the client via the OD, and the appropriate decoding and composition times, which are indicated in the SL packet header.

2. It maintains the flow of control and data information, controls the creation of buffers and associates them with the corresponding decoders.

3. It relays user presentation-level interactivity to the server-side DC and processes both session-level and local-level interactivity to manage the data flows on the client terminal.
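A minimal sketch of such a per-stream pipeline, assuming a hypothetical `decode` callable standing in for a real media decoder (none of these class names come from the MPEG-4 specification), might look like this: AUs wait in a decoding buffer until their decoding time, are decoded into composition units, and are released to the compositor at their composition time.

```python
from collections import deque
from dataclasses import dataclass
from typing import Callable, Deque, List

@dataclass
class AccessUnit:
    decoding_time: float       # taken from the SL packet header
    composition_time: float
    data: bytes

@dataclass
class CompositionUnit:
    composition_time: float
    frame: bytes

class StreamPipeline:
    """Toy per-ES pipeline: decoding buffer -> decoder -> composition buffer."""
    def __init__(self, decode: Callable[[bytes], bytes]) -> None:
        self.decoding_buffer: Deque[AccessUnit] = deque()        # filled by the SL-Depacketizer
        self.composition_buffer: Deque[CompositionUnit] = deque()
        self._decode = decode

    def receive(self, au: AccessUnit) -> None:
        self.decoding_buffer.append(au)

    def tick(self, clock: float) -> None:
        # Decode every AU whose decoding time has been reached.
        while self.decoding_buffer and self.decoding_buffer[0].decoding_time <= clock:
            au = self.decoding_buffer.popleft()
            self.composition_buffer.append(
                CompositionUnit(au.composition_time, self._decode(au.data)))

    def ready_for_compositor(self, clock: float) -> List[CompositionUnit]:
        ready = []
        while (self.composition_buffer
               and self.composition_buffer[0].composition_time <= clock):
            ready.append(self.composition_buffer.popleft())
        return ready

# Example with an identity "decoder" standing in for a real media decoder.
pipeline = StreamPipeline(decode=lambda payload: payload)
pipeline.receive(AccessUnit(decoding_time=0.0, composition_time=0.04, data=b"\x00" * 16))
pipeline.tick(clock=0.0)
print(pipeline.ready_for_compositor(clock=0.04))
```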

[Figure 3: Structure of the MPEG-4 client. The Unpacker and SL-Depacketizer feed the BIFS, OD and media-object decoding buffers and decoders; composition buffers and the compositor assemble the scene, coordinated by the SL Manager, Data Controller, Session Controller, User Event Handler and Messenger.]
The SC at the client side communicates with the SC at the server side, exchanging session status information and session control data. The User Event Handler triggers the SC when session-level interactivity is detected. The SC then translates the user action into the appropriate session control command.
5 APPLICATIONS OF MPEG-4
- MPEG-4 makes it possible to construct content such as a movie, song, or animation out of multimedia objects. Today that is done in Hollywood studios using specialized equipment at a cost of hundreds of thousands of dollars.
- A further key difference is that MPEG-4 can handle slower data rates. Unlike the older approach, MPEG-4 can handle data rates ranging from as low as 5 kbit/s up to 4 Mbit/s. That means it is possible to create data channels carrying video and audio over standard dial-up Internet connections.
- The object orientation of MPEG-4 makes it easier to implement things like interactive television.
- Another possible use is in mobile applications, such as cell phones and pagers. Thanks to its ability to gracefully handle low bandwidths, MPEG-4 technology may be especially suited to the coming generation of Web-enabled phones. MPEG-4 needs only 128 kbit/s of bandwidth, half that demanded by MPEG-1, to provide CD-quality audio.

6 MPEG-4 ADDRESSES THE NEED FOR
- Universal accessibility and robustness in error-prone environments: Multimedia audio-visual data need to be transmitted and accessed in heterogeneous network environments, possibly under severe error conditions (e.g. mobile channels). Although the MPEG-4 standard is network (physical-layer) independent in nature, the algorithms and tools for coding audio-visual data need to be designed with awareness of network peculiarities. [3]

- Highly interactive functionality: Future multimedia applications will call for extended interactive functionalities to serve the user's needs. In particular, flexible, highly interactive access to and manipulation of audio-visual data will be of prime importance. It is envisioned that, in addition to conventional playback of audio and video sequences, the user will need to access the "content" of audio-visual data in order to present, manipulate and store the data in a highly flexible way.

- Coding of natural and synthetic data: Next-generation graphics processors will enable multimedia terminals to present pixel-based audio and video data together with synthetic audio/speech and video in a highly flexible way. MPEG-4 will assist the efficient and flexible coding and representation of both natural (pixel-based) and synthetic data.

- Compression efficiency: For the storage and transmission of audio-visual data, a high coding efficiency, meaning good quality of the reconstructed data, is required. Improved coding efficiency, in particular at very low bit rates below 64 kbit/s, continues to be an important functionality to be supported by the MPEG-4 video standard.

7 REQUIREMENTS FOR THE MPEG-4 VIDEO STANDARD

| Functionality | MPEG-4 Video Requirements |
| Content-Based Interactivity | |
| Content-Based Manipulation and Bitstream Editing | Support for content-based manipulation and bitstream editing without the need for transcoding. |
| Hybrid Natural and Synthetic Data Coding | Support for combining synthetic scenes or objects with natural scenes or objects; the ability to composite synthetic data with ordinary video, allowing for interactivity. |
| Improved Temporal Random Access | Provisions for efficient methods to randomly access, within a limited time and with fine resolution, parts of a video sequence, e.g. video frames or arbitrarily shaped image content. This includes 'conventional' random access at very low bit rates. |
| Compression | |
| Improved Coding Efficiency | MPEG-4 Video shall provide subjectively better visual quality at comparable bit rates compared to existing or emerging standards. |
| Coding of Multiple Concurrent Data Streams | Provisions to code multiple views of a scene efficiently. For stereoscopic video applications, MPEG-4 shall allow the ability to exploit redundancy in multiple viewing points of the same scene, permitting joint coding solutions that allow compatibility with normal video as well as solutions without compatibility constraints. |
| Universal Access | |
| Robustness in Error-Prone Environments | Provisions for error-robustness capabilities to allow access to applications over a variety of wireless and wired networks and storage media. Sufficient error robustness shall be provided for low-bit-rate applications under severe error conditions (e.g. long error bursts). |
| Content-Based Scalability | MPEG-4 shall provide the ability to achieve scalability with fine granularity in content, quality (e.g. spatial and temporal resolution) and complexity. In MPEG-4, these scalabilities are especially intended to result in content-based scaling of visual information. |
8 CONCLUSION

This report has described a transport infrastructure for interactive multimedia presentations, which enables end users to choose available MPEG-4 media content to compose their own presentations, control the delivery of such media data, and interact with the server to modify the presentation in real time. The initial design and implementation of such a transport infrastructure for an IP-based network will support a client-server system that enables end users to:
1. author their own MPEG-4 presentations,
2. control the delivery of the presentations, and
3. interact with the system to make changes to the presentations in real time.
It is foreseen that MPEG-4 will be an important component of multimedia applications on IP-based networks in the future.

BIBLIOGRAPHY

1. Thomas Sikora, "The MPEG-4 Video Standard Verification Model", Heinrich-Hertz-Institute (HHI) for Communication Technology, Berlin, Germany. http://wwwam.hhi.de/mpeg-video/papers/sikora/final.htm
2. Haining Liu, Xiaoping Wei and Magda El Zarki, "A Transport Infrastructure Supporting Real Time Interactive MPEG-4 Client-Server Applications over IP Networks", Department of Information and Computer Science, University of California, Irvine.
3. T. Sikora and L. Chiariglione, "MPEG-4 Video and its Potential for Future Multimedia Services", Heinrich-Hertz-Institute (HHI), Einsteinufer 37, D-10587 Berlin, Germany. http://wwwam.hhi.de/mpeg-video/papers/sikora/iscas.htm
4. Hank Hogan, "Lights, Camera... The Latest in Multimedia Technology".
5. Stefano Battista (bsoft), Franco Casalino (Ernst and Young Consultants) and Claudio Lande (CSELT), "MPEG-4: A Multimedia Standard for the Third Millennium, Part 2".
6. Thomas Sikora, "MPEG Video Webpage", Heinrich-Hertz-Institute (HHI) for Communication Technology, Berlin, Germany.


