
3D Rendering in the Cloud

Martin D. Carroll, Ilija Hadžić, and William A. Katsak

Many modern applications and window systems perform three-dimensional (3D) rendering. For a cloud system to support such applications, that 3D rendering must be performed in the cloud, because the end-user equipment cannot be relied upon to contain the necessary rendering hardware. All systems that perform 3D rendering in the cloud are faced with two fundamental and related problems: 1) How to enable an arbitrary number of users to produce rendered pixel streams, and 2) how to transfer those pixel streams out of the server’s frame buffers and into one or more encoders, for transmission to the user. We have implemented a new form of display virtualization that solves both of these problems in a low-level and transparent manner. Using our display virtualization (which we call the virtual cathode ray tube controller (VCRTC)), the cloud system can support an arbitrary number of pixel streams (bounded only by memory and bandwidth resources), and it can dynamically associate those streams with encoders. VCRTCs are completely transparent to the applications: No application needs to be modified, recompiled, or even relinked to use VCRTCs. Because they are low-level and transparent, VCRTCs are also a general mechanism with utility beyond cloud systems. © 2012 Alcatel-Lucent.

Introduction
Three-dimensional rendering is the process of transforming a model of a three-dimensional (3D) scene into a two-dimensional array of pixels. Pixels are typically displayed on a monitor but can also be stored for further rendering operations. Applications such as computer games and scientific visualization software continuously perform 3D rendering as part of their operation. Also, some windowing systems, such as Microsoft’s Desktop Window Manager [13] and the open source Compiz [4], use 3D rendering for desktop compositing. If a scene is animated, the process repeats for each frame, and if an application is interactive (for example, a game), the process also has real-time requirements. Rendering a single frame involves a series of mathematical operations and typically imposes significant load on the system. Modern personal computers (PCs) often include a graphics processing unit (GPU) whose primary purpose is to relieve the central processing unit (CPU) from computing-intensive tasks related to graphics.

Bell Labs Technical Journal 17(2), 55–66 (2012) © 2012 Alcatel-Lucent. Published by Wiley Periodicals, Inc. Published online in Wiley Online Library (wileyonlinelibrary.com) • DOI: 10.1002/bltj.21544

Panel 1. Abbreviations, Acronyms, and Terms

2D—Two-dimensional
3D—Three-dimensional
AMD—Advanced Micro Devices
API—Application programming interface
CPU—Central processing unit
CRT—Cathode ray tube
CRTC—Cathode ray tube controller
CUDA—Compute Unified Device Architecture
DDX—Device dependent part of X
DMA—Direct memory access
DPMS—Display power management subsystem
DRI—Direct rendering infrastructure
DRM—Direct rendering manager
DVI—Digital video interface
GEM—Graphics-execution manager
GLX—OpenGL extension for X
GPU—Graphics processing unit
GViM—GPU-accelerated virtual machines
I/O—Input-output
KMS—Kernel mode settings
OpenGL—Open Graphics Library
PC—Personal computer
PCIe—Peripheral component interconnect express
PCON—Pixel consumer
RDP—Remote Desktop Protocol
RGB—Red, green, blue
TTM—Translation table manager
UMS—User mode settings
USB—Universal serial bus
VCRTC—Virtual CRTC
VCRTCM—VCRTC manager
VGA—Video graphics array
VM—Virtual machine
VMM—VM monitor
VNC—Virtual network computing

(For an introduction to the internal architecture and functioning of GPUs, see [6] and [11].) The GPU's secondary function is to provide an interface to a display device. A typical GPU provides significant computing resources (that is, floating-point units and memory) but also consumes significant power. In the traditional PC model, the GPU must be dimensioned to meet the needs of the most graphics-intensive applications that the user intends to run. In the consumer market these are typically games. The GPU is underutilized when these applications are not running. As the complexity of applications increases, it becomes necessary for end users to periodically upgrade their GPUs. The same trend exists in the PC industry as a whole. Hence, the motivations for moving general-purpose computing resources into the cloud (namely, sharing and scaling) apply to GPUs as well. In this paper we examine the technical challenges of virtualizing the GPU's display circuitry, which is a necessary step towards a scalable system that can efficiently perform 3D rendering in the cloud and stream rendered pixels to the users at the edge. We also share our experiences and lessons learned from implementing a proof-of-concept prototype.

The rest of the paper is organized as follows. First we explain the general principles of how to move 3D rendering into the cloud. Next we provide background material on local 3D rendering, and we discuss the fundamental problem of how to access the contents of a frame buffer produced by a GPU. Then we present our novel mechanism for transparent frame buffer access, provide an overview of related work, and present our conclusions. We also include suggestions for future work.

Into the Cloud
A logical way to move 3D rendering into the cloud is to split the GPU’s rendering and display functions, and to implement the former on a server farm that connects, over a network, to the display function at the user’s location. At first glance this might look like a traditional remote-display problem, for which many technologies exist (Virtual Network Computing (VNC), Remote Desktop Protocol (RDP), and X Windows, to name a few). However, a closer examination reveals that we are dealing with a harder problem. First, few existing remote-display technologies enable the remotely-accessed applications to fully use


the hardware capabilities of a GPU. Second, different applications, written for different operating systems, rely on different application programming interfaces (APIs) and different abstract models of the rendering pipeline. As a specific example, consider two widely used APIs for rendering: Open Graphics Library (OpenGL*) and DirectX* [10]. It does not matter to the end user which API an application was written for, as long as the cloud service can deliver the application with satisfactory performance. Supporting a heterogeneous environment that is presented to a user as one unified system is a major technical challenge. Third, sharing a single GPU among multiple users (who require mutual isolation) is, to our knowledge, an unsolved problem, as is the converse problem of dynamically “bonding” a selected set of GPUs into a single graphics resource that is temporarily assigned to a single user. Finally, a large-scale, multi-user system (as opposed to a single-user connection to one remote PC) involves dealing with extreme volumes of data, so efficient encoding and streaming technology is essential. Figure 1 is a simplistic view of a computing system in which the processor C and the GPU G have been moved into the cloud. The display D is connected to the cloud over a network N. The connections in the figure represent the logical flow of data rather than physical interfaces. The GPU stores rendered pixels in its memory (also known as the frame buffer) for access by the encoder E. To support efficient pixel transport, encoding must involve some kind of data compression. For example, a display with a resolution of 1600x1200, 32 bits per pixel, and streamed at 30 frames per second would consume over 2.7 Gb/s of network bandwidth. The encoding algorithm employed should have high compression efficiency and low latency. Latency is particularly important because it directly contributes to the end user’s sensory roundtrip time—that is, the time from which the user performs some action (for example, a key stroke) until the time at which the effect of that action becomes visible on the display. The total latency (which includes the network and all devices in the path) should be less than approximately 150 ms [1].

Streamer S is responsible for packetizing the encoded pixels and rate shaping. Rate shaping is important as it generally makes the network design easier and the system behavior more predictable. Using well-shaped packet streams results in lower jitter, less packet loss, and a better overall user experience. It is worth noting that functions analogous to the encoder and the streamer exist in the traditional local rendering model. There, the frame buffer is accessed by a circuit inside the GPU called the cathode ray tube controller (CRTC) [25]. (Note that the name is a legacy and does not imply that the screen must be based on CRT technology.) The CRTC constantly “scans out” the contents of the frame buffer and delivers the resulting pixel stream to the GPU’s internal encoder. The encoder in turn generates a signal that depends on the particular display technology—for example, an analog RGB signal in the case of a video graphics array (VGA) monitor or a digital stream in the case of a digital video interface (DVI) monitor. Finally, the video signal is delivered to the display device over a standard display connector. When the GPU is moved into the cloud, the encoder E and streamer S in Figure 1 respectively assume the roles of the GPU’s internal encoder and display connector. The user’s device must have a corresponding decoder, which decodes the data stream and regenerates

[Figure 1 (diagram not reproduced). A simplistic view of rendering and display separation. Legend: C—Application; D—Display; E—Encoder; E^-1—Decoder; g—Graphics display functionality; G—Graphics processing unit; N—Network; S—Streamer.]


the original pixels. This is shown as block E^-1 in the figure. The device can also provide some small amount of graphics functionality, shown as block g in the figure. At a minimum, the user's device must implement the frame buffer, video encoder, and display connector.

The Linux Graphics Stack
Because our work requires inspecting or modifying various internal aspects of the server's graphics subsystem, we are implementing it on Linux* rather than a closed, proprietary operating system. With the exception of the source code, information and documentation about the design of the Linux graphics subsystem can be surprisingly difficult to find, and what does exist is often outdated and incomplete. Hence in this section we briefly describe the design of that subsystem. Figure 2 is an oversimplified diagram of the graphics stack found in a typical Linux configuration. At the top is the application (or a window system, which is itself an application) that is performing 3D rendering. The application uses some collection of

graphics APIs, which eventually call into the Mesa 3D Graphics Library, which is an open source software implementation of OpenGL. Mesa relays all rendering operations to the X server, Xorg, which in turn relays them to the graphics-device-dependent portion of X (DDX). (The user space portion of the GPU driver is split between Mesa and DDX.) Alternatively, rendering operations can bypass Xorg and DDX by using the direct rendering infrastructure (DRI). In the latter case, Mesa is responsible for translating OpenGL calls to device-specific rendering commands and compiling the so-called “shader” code. In either case, device-specific rendering commands wind up at the user space portion of the direct rendering manager (DRM), which forwards them to the corresponding kernel module. The DRM kernel module in turn forwards the received commands to the GPU kernel module, which handles their delivery to the GPU. In the process, the GPU kernel module translates all abstract references to buffer objects (referred to by the rendering commands) to actual

[Figure 2 (diagram not reproduced). Linux graphics stack. The application (App) calls into Mesa, Xorg, DDX, and DRI in user space; libdrm provides the entry point into the kernel's DRM/GEM and TTM components and the GPU driver, which drives the GPU. Legend: DDX—Device dependent part of X; DRM—Direct rendering manager; GEM—Graphics execution manager; GPU—Graphics processing unit; libdrm—DRM library; TTM—Translation table manager.]


addresses in GPU memory, which is managed by the kernel. The GPU kernel module is also responsible for managing the CRTCs, detecting available display modes, setting up the display hardware for the requested display mode, managing the GPU's power consumption, and synchronizing the rendering operations with the pixel scan-out. The last feature is essential for providing smooth visual effects without unpleasant frame tearing. (An informed reader may object that the GPU kernel module is not used to set up display hardware that has a user-mode settings (UMS) driver. However, UMS is now considered obsolete, and kernel mode settings (KMS) drivers are the only ones that will be supported in the future.)

Graphics memory management is handled by the translation table manager (TTM) module and the graphics execution manager (GEM) components of the DRM module [2]. If an application or kernel module requires a buffer for any purpose (for example, frame buffer, texture buffer, mouse cursor buffer, or shader code), it must create a GEM object and refer to that object using methods and attributes assigned to that object. The GEM object is mapped to actual hardware resources by the kernel module. This mapping is normally hidden from the application, which typically uses OpenGL and X Windows objects; Mesa and DDX map all references to OpenGL and X Windows objects into GEM objects. The functions of TTM and GEM partially overlap, and some GPUs—for example, those from AMD—require both TTM and GEM, while others—for example, those from Intel—rely on GEM only. Later in the paper we will provide a figure illustrating how we augmented the Linux graphics stack to implement 3D rendering in the cloud.
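To make the GEM buffer-management path just described concrete, the following minimal sketch (ours, not the paper's) shows how a user space program can ask the DRM subsystem for a simple "dumb" GEM buffer and map it into its address space. In practice Mesa and the DDX perform this kind of allocation on the application's behalf, usually through driver-specific paths rather than the dumb-buffer ioctls; the device node name and the omitted error handling are simplifications.

```c
/*
 * Minimal sketch: allocate a "dumb" GEM buffer through the DRM ioctl
 * interface and map it for CPU access.  Compile against libdrm, e.g.:
 *   cc demo.c $(pkg-config --cflags --libs libdrm)
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <xf86drm.h>      /* drmIoctl(), DRM_IOCTL_* and struct definitions */

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);     /* DRM device node (assumed) */
    if (fd < 0)
        return 1;

    /* Ask the kernel to create a linear buffer object (a GEM object). */
    struct drm_mode_create_dumb create = {
        .width = 1600, .height = 1200, .bpp = 32,
    };
    drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

    /* Obtain an mmap offset for the GEM handle and map the buffer. */
    struct drm_mode_map_dumb map = { .handle = create.handle };
    drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);

    void *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, map.offset);

    printf("GEM handle %u, pitch %u bytes, size %llu bytes, mapped at %p\n",
           create.handle, create.pitch,
           (unsigned long long)create.size, pixels);
    return 0;
}
```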

Frame-Buffer Access

The link between the GPU G and the encoder E is a critical bottleneck in the system, primarily because it carries a large volume of uncompressed data. An obvious but largely impractical implementation is to connect the encoder to the display connector of the GPU. The main problem with this approach is that it is limited to the type and number of display connectors available on the particular GPU. The other, also impractical, extreme is to tightly integrate the GPU with the encoder. Although this option allows the fullest degree of freedom in optimizing the G-to-E connection, it is in practice available only to the GPU vendor. A reasonable compromise is to access the frame buffer through the same input-output (I/O) interface used by the CPU C. In modern GPUs this interface is peripheral component interconnect express (PCIe) [3], which is a high bandwidth interface defined by an open standard. GPUs typically connect to the system using a 16-lane interface with each lane running at 4 Gb/s. The primary reason for such a fast interface is to minimize the texture-loading time during the rendering process. GPUs rarely run sustained transfers at the full interface rate, and when they do it is usually in the downstream—that is, the C-to-G—direction. Hence, upstream bandwidth is available for extracting the frame-buffer contents. Further, because the PCIe architecture is switched, it is possible to implement the encoder in hardware as a peer device on another port of the PCIe switching fabric.

Our system supports two modes of frame-buffer access. The first is pull mode, in which the encoder device issues direct memory access (DMA) read requests from the GPU; the second is push mode, in which the GPU uses DMA to write the frame-buffer contents to the encoder. A combination of pull and push modes, in which the GPU writes to system memory and the encoder device pulls from that memory, is also supported.

Display Virtualization

In contrast to local rendering, a GPU in the cloud is typically shared among multiple unrelated rendering contexts. One of the limiting factors in sharing a GPU in a scalable and application-transparent manner is its display subsystem. The number of displays that a given GPU can support is limited by the number of CRTCs and display connectors. If, for example, one application uses all the available displays, then all other applications must perform their rendering off screen. While it is possible to access off-screen frame buffers using the mechanisms described earlier (in "Frame-Buffer Access"), the applications themselves must often be (re)programmed to use the off-screen model. This requirement makes it difficult to port to the cloud those applications that assume access to a physical display. Such applications include full-screen games and windowing systems. Alternatively, it is possible to intercept API calls into the graphics library and redirect drawable contexts to off-screen buffers from which pixels can be streamed [23]. This approach, however, suffers from two drawbacks: First, because it requires relinking, it can be made transparent only for dynamically linked applications. And second, it is typically tied to a specific framework (for example, applications based on OpenGL and X Windows). Ideally, a cloud-based rendering system should work with existing software, without requiring any modifications or recompiles. Hence, we have set as a goal the requirement that our system provide full application transparency.

To support such transparency, it is necessary that the system provide an arbitrary number of virtual displays that applications can use in the same manner as physical displays. The number of virtual displays should be limited only by the amount of available memory and PCIe bandwidth. In our system we have implemented a new abstraction called a virtual CRTC (VCRTC). A VCRTC has all the properties of a physical CRTC, except that whereas a physical CRTC connects a given pixel stream to the GPU's internal encoder and therefore to a local display, a VCRTC connects a given pixel stream to an encoder E in the system of Figure 1, and therefore to a remote display. To the rest of the system, a VCRTC is no different from a (physical) CRTC associated with a local display. A VCRTC is accessed through the same software abstraction and provides the same capabilities as a CRTC, including support for page flipping, vertical-blanking-interval synchronization (VBLANK), mouse-cursor overlay, the display-power-management subsystem (DPMS), and sub-windowing of the frame buffer. Applications and window managers cannot tell the difference between a CRTC and a VCRTC. Indeed, we can bring up an entire X Windows system running the GNOME* or KDE* window managers (to name two) without having to modify or even relink any program or library. (Of course, there is always a possibility that our system may expose bugs in existing software,
but we do not consider that a limitation of our approach. Indeed, we have found and fixed several bugs in Xorg and DDX, which had apparently been tested with only a small number of active CRTCs.) For maximum flexibility, the management system can, at any time, re-associate a given VCRTC with a different encoder or a different remote display, or de-associate a VCRTC from any encoder or remote display, without any effect on the running application. Redirecting the pixel stream is completely transparent to the application.

The implementation of this dynamic VCRTC-to-encoder association requires the involvement of the device drivers for both the GPU and the encoder. A naive implementation could potentially result in a device driver that is specific for a given GPU-encoder pair. Such a design would be impractical from a development and maintenance perspective. We have therefore factored the code into three separate modules:

1. The GPU driver. This driver is extended from the existing (non-VCRTC-aware) driver for a given GPU to support the VCRTC abstraction.
2. The encoder driver. This driver deals with the encoder hardware.
3. The VCRTC manager (VCRTCM). This kernel module tracks the VCRTC-to-encoder associations and provides data and control connections between the GPU and the encoder.

The relationship between VCRTCM and the Linux graphics stack is shown in Figure 3. The GPU driver accesses the encoder through an abstract interface that the former exports to the VCRTCM; the GPU driver never has to deal with the details of the encoder implementation. For example, a GPU driver can (via VCRTCM) inform the associated encoder that rendering activity has occurred, and the encoder can use this information to decide whether it needs to get the new data from the GPU. Similarly, the encoder driver can feed information back to the GPU driver through an abstract interface that the former exports to the VCRTCM. One important use of the return path is the communication of VBLANK events.

The GPU driver is responsible for creating VCRTCs. It does that by allocating control data structures for these CRTCs and calling DRM functions to

[Figure 3 (diagram not reproduced). Our augmented Linux graphics stack. The stack of Figure 2 is extended so that the GPU driver connects, through the VCRTCM kernel module, to an encoder driver and its encoder hardware in addition to the GPU. Legend: CRTC—Cathode ray tube controller; DDX—Device dependent part of X; DRM—Direct rendering manager; GEM—Graphics execution manager; GPU—Graphics processing unit; libdrm—DRM library; TTM—Translation table manager; VCRTC—Virtual CRTC; VCRTCM—VCRTC manager.]

register these CRTCs with the system. The number of VCRTCs for each GPU instance is defined by a module parameter that may be set by the system administrator. The data structures passed to the DRM module contain pointers to functions that control the CRTCs. For physical CRTCs, these functions directly manipulate the GPU's CRTC hardware. For virtual CRTCs, these functions instead call into the VCRTCM module, which finds the attached encoder device (if one exists) and then calls the corresponding implementation function in that encoder's driver. By virtue of normal call-chain semantics, the encoder's implementation functions execute in the context of the GPU driver. The encoder driver is responsible for registering itself with the VCRTCM and providing pointers to all the implementation functions for the devices it controls. Here is an example of the typical call sequence for VCRTCs. Suppose that the GPU driver calls a particular DRM function f whose purpose is to provide to

the CRTC a pointer to the frame buffer that that CRTC should scan out. If that CRTC is virtual, then f calls a function f′ in the VCRTCM. The function f′ first finds the encoder driver that is currently associated with that VCRTC; f′ then calls the corresponding implementation function f″ in that encoder driver, passing f″ the pointer to the frame buffer. The pace of frame buffer access for VCRTCs is determined by the encoder and depends on the desired frame rate. On each pacing event, the encoder driver sends a VBLANK event to the GPU; the encoder driver can also grab the frame buffer from the GPU. To get to the GPU, the encoder driver uses VCRTCM functions, which find the associated GPU instance and redirect the call. This redirection is achieved via a set of callbacks that the GPU driver previously registered with the VCRTCM module. The association of the CRTC with the encoder, and the setting of the frame rate, are performed from user space and can be dynamically controlled by the

system administrator or system management software. The GPU driver provides a set of system calls that programs in user space can call to control VCRTCs, their association with encoders, and the desired frame rate. The bulk of the system implementation is in the VCRTCM module and the encoder drivers, which are new kernel-loadable modules. Modifications to the GPU driver are minimal and include creation of VCRTCs and their registration with the VCRTCM. Such a code organization makes it easy to keep our modules (the VCRTCM module and the encoder drivers) synchronized with upgrades to the open source GPU driver (which we obtain from the publicly available Linux kernel distribution) and the kernel itself.
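The following self-contained toy program (ours, not the paper's code) models the f, f′, f″ call chain and the dynamic association just described using plain function pointers. Every identifier in it (encoder_ops, vcrtcm_set_framebuffer, and so on) is invented for illustration, since the paper does not publish the actual VCRTCM interfaces.

```c
/*
 * Toy model of the f -> f' -> f'' call chain.  All names are hypothetical;
 * the real VCRTCM interfaces are not given in the paper.
 */
#include <stdio.h>
#include <stddef.h>

/* f'': the encoder driver's implementation, executed in the GPU driver's context. */
struct encoder_ops {
    void (*set_framebuffer)(void *encoder_priv, const void *fb);
};

/* One VCRTC-to-encoder association tracked by the (toy) VCRTC manager. */
struct vcrtc {
    const struct encoder_ops *enc_ops;   /* NULL if no encoder is attached */
    void *enc_priv;
};

/* f': the VCRTCM looks up the attached encoder and redirects the call. */
static void vcrtcm_set_framebuffer(struct vcrtc *vc, const void *fb)
{
    if (vc->enc_ops && vc->enc_ops->set_framebuffer)
        vc->enc_ops->set_framebuffer(vc->enc_priv, fb);
    /* With no encoder attached, the pixel stream simply has no consumer. */
}

/* f: the DRM-facing entry point that the GPU driver registers for a virtual CRTC. */
static void drm_crtc_set_framebuffer(struct vcrtc *vc, const void *fb)
{
    vcrtcm_set_framebuffer(vc, fb);      /* virtual CRTC: go through the VCRTCM */
}

/* A toy encoder (PCON) driver. */
static void my_encoder_set_framebuffer(void *priv, const void *fb)
{
    printf("encoder %s: will scan out frame buffer at %p\n",
           (const char *)priv, fb);
}

static const struct encoder_ops my_encoder_ops = {
    .set_framebuffer = my_encoder_set_framebuffer,
};

int main(void)
{
    static char pixels[64];                         /* stand-in frame buffer            */
    struct vcrtc vc = { &my_encoder_ops, "enc0" };  /* association made from user space */

    drm_crtc_set_framebuffer(&vc, pixels);          /* f -> f' -> f''            */
    vc.enc_ops = NULL;                              /* de-associate the encoder  */
    drm_crtc_set_framebuffer(&vc, pixels);          /* now a harmless no-op      */
    return 0;
}
```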

Pixel-Stream Contents

A physical CRTC produces a pixel stream by scanning out the contents of a specified frame buffer (or a sub-window thereof). In the case of a virtual CRTC, the "scan out" is not performed by the VCRTC itself, but instead via the push or pull operations described earlier. For both physical CRTCs and virtual CRTCs, the pixel stream contains whatever the applications, window system, and graphics subsystem conspire to place into the frame buffer.

Let us call a frame buffer on-screen if all or part of its content is being scanned out (by either a CRTC or a VCRTC). In the typical desktop configuration of Linux, all frame buffers, both on-screen and off-screen, are owned by the applications or the X server. The content of each on-screen buffer is determined by a window manager, which constructs the on-screen buffer by compositing the content of one or more off-screen buffers. The content of each off-screen buffer is determined by the application that is writing into it. If the user is running an application in full-screen mode, then that application, rather than the window manager, determines the content of the on-screen buffers.

Hence, if we augment a typical desktop configuration of Linux with VCRTCs, we can remotely display any given "screen" produced by the window manager (or full-screen application) to any given remote display, simply by using the VCRTCM to create the desired association. Because VCRTCs are transparent, no system component—not the application, not the window manager, not the X server—is aware that this remoting is occurring, and not a single piece of software needs to be rewritten, recompiled, or relinked. Because VCRTCs are low-level and transparent, they can be used with almost any configuration of upper-layer software. For example, instead of using an X server and window manager, one could run applications on top of the Linux frame buffer device and have those applications remotely display—again without rewriting, recompiling, or relinking a single piece of code. The only requirement is that all involved kernel drivers be based on DRM, a de facto standard in Linux. This requirement excludes certain proprietary drivers supplied by GPU vendors.

Screen Scraping
The act of reading the content of a fully rendered frame buffer and extracting information from it is referred to as screen scraping. Many remote display technologies (for example, x11vnc [20]) perform screen scraping in user space, an approach that often presents problems. Consider a user space process that is scraping a frame buffer. The easiest way to ensure that that process reads only fully rendered frames is for it to somehow detect the buffer swaps requested by applications. When an application issues a buffer-swap request, the actual swap will occur sometime later— specifically, when the next VBLANK event occurs. For physical CRTCs, VBLANK events are generated autonomously by the GPU’s display hardware, independently of application behavior. Hence it is difficult or impossible to ensure that the scraping process reads the frame buffer at precisely the right moments [24]. With VCRTCM, this problem goes away because the VCRTCM system controls screen scraping as well as VBLANK timing. For virtual CRTCs, scraping begins when the encoder hardware (or in certain cases its driver) issues the appropriate DMA (or software) read operation; scraping ends when the read operation completes, at which time the encoder hardware (or driver) issues the VBLANK event. This process ensures that scraping always reads a fully rendered buffer. The Linux graphics subsystem arranges for the buffer-swap requests issued by rendering applications


to be synchronized to the VBLANK events generated by the GPU’s hardware. Because the Linux graphics subsystem cannot tell the difference between a VBLANK event that is generated by a physical CRTC and one that is generated by a virtual CRTC, applications that are rendering to virtual CRTCs are also properly synchronized to the (virtual) VBLANK events. With VCRTCM, the user space screen scraping process is eliminated, and there is no need to detect the buffer swaps of any application. By emulating the display hardware, as opposed to being a process that “steals” the display content from the hardware, the VCRTCM architecture provides an efficient and elegant solution for screen scraping.
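As a concrete illustration of this pacing, the following small, self-contained toy program (ours, with invented names) lets the pixel consumer drive the frame clock: it reads the front buffer first and only then raises the VBLANK event on which the pending buffer swap is applied, so a read can never observe a partially swapped frame.

```c
/*
 * Toy illustration of consumer-driven pacing: the "scrape" happens first,
 * and the (virtual) VBLANK that applies the pending buffer swap happens
 * only after the read completes.  All names are hypothetical.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define FB_SIZE 64

static char front_buffer[FB_SIZE] = "initial frame"; /* what the VCRTC "scans out" */
static char pending_buffer[FB_SIZE];                 /* queued by a buffer swap    */
static int  swap_pending;

/* Called by the graphics stack when an application requests a buffer swap. */
static void queue_buffer_swap(const char *new_frame)
{
    strncpy(pending_buffer, new_frame, FB_SIZE - 1);
    pending_buffer[FB_SIZE - 1] = '\0';
    swap_pending = 1;
}

/* The swap is applied only on VBLANK, so the consumer never sees a torn frame. */
static void vblank_handler(void)
{
    if (swap_pending) {
        memcpy(front_buffer, pending_buffer, FB_SIZE);
        swap_pending = 0;
    }
}

int main(void)
{
    const useconds_t frame_period_us = 1000000 / 30;  /* ~30 frames per second */

    queue_buffer_swap("frame 0");                     /* application renders frame 0 */
    for (int frame = 1; frame <= 3; frame++) {
        usleep(frame_period_us);                      /* pacing event from the consumer    */
        printf("consumer reads: \"%s\"\n", front_buffer);  /* the "scrape" (DMA read)     */
        vblank_handler();                             /* VBLANK raised only after the read */

        char next[FB_SIZE];
        snprintf(next, sizeof(next), "frame %d", frame);
        queue_buffer_swap(next);                      /* application renders the next frame */
    }
    return 0;
}
```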

Other Types of Targets
We stated earlier that a VCRTC connects a given pixel stream to an encoder. VCRTCs (and the VCRTCM) are actually more flexible than that. The pixel stream associated with a VCRTC can be sent to any software or hardware target in the same host. The only requirement is as follows: If the target is software, then that software must implement the target side of the VCRTCM API; if the target is hardware, then its driver must implement the target side of the VCRTCM API, and the hardware must implement the necessary push or pull functionality. We use the term pixel consumer (PCON) to refer to any VCRTC software or hardware target. The encoder driver and hardware shown in Figure 3 can be replaced by any PCON.

To demonstrate the flexibility of our system, we used the VCRTCM infrastructure to provide 3D acceleration for universal serial bus (USB) DisplayLink* [5] devices. DisplayLink is a display technology that makes it possible to connect a monitor or other display device to a PC's USB port, and by doing so, to increase the effective number of display connectors on a PC. One form of DisplayLink device is a USB-to-DVI adapter. Linux support for DisplayLink is currently limited to instantiating a frame buffer in system memory and using the USB-to-DVI adapter to display the frame buffer's content on the monitor. Hardware acceleration of 3D rendering is impossible using current publicly available drivers, and X Windows

support requires use of a “dumb” frame buffer DDX, which is limited in performance and functionality. We wrote a PCON for DisplayLink devices and made it available to the VCRTCM. Using a GPU driver that we augmented to support VCRTCM, we defined a VCRTC and attached it to the DisplayLink device using VCRTCM. We were thereby able to run X Windows and any graphical applications on either the GPU’s regular DVI connector or the new connector provided by the DisplayLink device. The GPU performed hardware-accelerated rendering for all displays, and VCRTCM and the PCON were responsible for relaying rendered pixels from the GPU to the USB port. Implementing the new PCON required approximately one person-month, and no other existing Linux software required modification. We currently support any GPU from the AMD Radeon* R600 series. Although DisplayLink is unrelated to cloud applications, this exercise demonstrated the power of low-level and transparent abstraction. With the VCRTCM infrastructure in place, it can be used to rapidly implement new features for the graphics subsystem. Besides streaming and compression (for cloud applications) and DisplayLink (for PC applications), we are currently writing and experimenting with several other PCONs, but their features and usage are outside the scope of this paper.
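The DisplayLink PCON itself is an in-kernel component, so the sketch below is only a conceptual, user space analogue (ours) of the relay it performs: it pushes pixels into a DisplayLink frame buffer through the standard Linux fbdev interface exposed by the publicly available udlfb driver. The device node name and the solid-color "frame" are assumptions made purely for illustration.

```c
/*
 * Conceptual analogue only: relay pixels into a DisplayLink frame buffer
 * via the standard fbdev interface (node name assumed).
 */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("/dev/fb1", O_RDWR);        /* DisplayLink frame buffer (assumed) */
    if (fd < 0)
        return 1;

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);      /* resolution, bits per pixel */
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);      /* line length, buffer size   */

    unsigned char *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);

    /* "Relay" one frame: a solid fill stands in for pixels that would have
     * been rendered by the GPU and delivered through the VCRTC. */
    memset(fb, 0x80, (size_t)var.yres * fix.line_length);

    printf("%ux%u @ %u bpp, line length %u bytes\n",
           var.xres, var.yres, var.bits_per_pixel, fix.line_length);
    return 0;
}
```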

Relationship to Virtual Machines
VCRTCs virtualize the CRTCs contained within a GPU; VCRTCs do not virtualize the GPU itself for use by applications running in virtual machines (VMs). For an application running in a VM to be able to use a physical GPU of the underlying host machine, two approaches are possible:

1. The VM monitor (VMM) "presents" a virtualized GPU to the VM, which the VMM implements in whole or in part using the host's physical GPUs.
2. The VM's graphics stack is modified ("paravirtualized") to "remote" its public API or one of its internal APIs to the underlying host for execution there.

At least one VM system [7] implements a combined strategy involving both GPU virtualization and API remoting. VCRTCs perform neither GPU virtualization nor API remoting.


Conversely, neither GPU virtualization nor API remoting addresses the problem of how to efficiently and transparently stream pixels out of a frame buffer and into an encoder or other target. Hence, if VMs are used in the implementation of a cloud server, then the implementer of that server must still find a way to perform pixel streaming—a task for which VCRTCs are ideally suited. It is not necessary to use VMs to implement a cloud server. If the intent of the cloud server is to present complete computer systems to the end users, then VMs are indeed a convenient approach. If, however, the intent of the cloud server is to present individual applications, then the heavyweight machinery of VMs is overkill, and a more lightweight approach involving some form of application compartmentalization is better suited. Hence, for systems that serve applications, VCRTCs suffice and neither GPU virtualization nor API remoting is needed.

Related Work
At the beginning of this paper we stated that few existing remote display technologies enable remotely-accessed applications to fully use the hardware capabilities of a GPU. One remote display system that does enable applications running on the server to use the server's GPU is RemoteFX* [16]. RemoteFX, however, imposes a number of significant restrictions that limit its applicability: RemoteFX requires that both the server and the client run a considerable amount of Microsoft software, that applications run in VMs (specifically, Hyper-V*), and that the server hardware meet many requirements (for example, all graphics cards on the server must be identical) that are not collectively conducive to building a scalable and flexible system [14, 17]. VCRTCs, on the other hand, impose few requirements: Because VCRTCs are low-level and transparent, they can be used with any software (Microsoft, Linux, or otherwise), with applications running inside or outside of VMs, and on a wide and flexible range of hardware. The only restriction imposed by VCRTCs is that any of the server's GPUs that are intended to be used by VCRTCs (and not all need be used by VCRTCs) must be of a type for which

a VCRTC driver has been written. Unlike RemoteFX, VCRTCs do not require that the server's GPUs be identical.

Several systems exist that enable real-time interactive games running on a server and using the server's GPUs to be streamed to clients. Three of the more widely known gaming-on-demand systems are OnLive* [18], OTOY* [19], and Gaikai* [8]. These systems all have graphics cards, encoders, and streamers of some form. But because these systems are proprietary, it is not feasible to determine exactly how they transfer pixels from the graphics card to the encoder, or from the encoder to the streamer. However, given our knowledge of the current state of the art for open source video drivers, we conjecture that these systems do not employ a transparent technology like that of VCRTC, although we cannot discount the possibility that equivalent proprietary technology has been independently developed.

Stegmaier et al. [22, 23] have demonstrated a system for efficient remoting of 3D rendered content in the X Windows environment. Rather than using the traditional X Windows approach of encapsulating all OpenGL calls within the X Windows protocol and forwarding those calls to a remote terminal, they instead intercept all library calls related to the creation and management of drawable contexts—that is, all OpenGL extension for X (GLX) library calls—and redirect all 3D rendering to a GPU on the server, rendering into an off-screen buffer. A user-space process then extracts the rendered pixels and transmits them using a custom protocol that includes compression. The remote terminal runs a two-dimensional (2D) X server that combines windows and 2D graphics generated locally using the standard X Windows protocol. The 3D rendered content is received using the custom protocol. This approach works well for applications based on X Windows and GLX, but it does not meet the full transparency requirement that we set as a goal for ourselves. VirtualGL [24] is a publicly available open source system that is conceptually based on Stegmaier's work. Because VirtualGL performs user-space screen scraping, it suffers from the VBLANK problem described earlier.

Most of the recent work on virtualizing GPU-related resources involves virtualizing the GPU for use


by applications running in VMs [7, 9, 12, 15, 21]. Dowty and Sugerman [7] provide a useful taxonomy of such work and describe the particular GPU-virtualization architecture used in VMware Fusion*. As explained earlier (in “Relationship to Virtual Machines”), our system is fundamentally different from, and complementary to, those systems. The question mentioned earlier of whether to serve complete desktops or individual applications is outside the scope of this paper.

Conclusions

VCRTCs offer a low-level and transparent mechanism that solves two of the fundamental problems that all cloud systems face. Further, by providing transparent access to rendered pixel streams, VCRTCs enable a wide range of applications that, without VCRTCs, are more difficult or effectively impossible to write.

Much work remains to be done on 3D rendering in the cloud. GPU performance and PCIe switch performance in support of VCRTCs require further study. The design and implementation of the encoder and streamer modules are under active investigation. The management algorithms required in order to choose the best GPU-encoder-streamer connections still have to be designed, and the architecture for a large-scale server optimized for rendering in the cloud is a major open research topic.

Acknowledgements

Thanks to Wim Acke, Mike Coss, Ron Sharp, and Peter Vetter for reviewing the paper and providing useful feedback. Chris Woithe provided invaluable technical input and also implemented several PCONs. Although these drivers were not discussed in this paper, the lessons learned from their implementation helped us shape the architecture and features of the VCRTCM system.

*Trademarks

AMD is a registered trademark of Advanced Micro Devices, Inc. CUDA is a registered trademark of NVIDIA Corporation. DirectX, Hyper-V, and RemoteFX are registered trademarks of Microsoft Corporation. DisplayLink is a registered trademark of DisplayLink Corp. Gaikai is a registered trademark of Gaikai, Inc. GNOME is a registered trademark of GNOME Foundation. KDE is a registered trademark of KDE e.V. Linux is a trademark of Linus Torvalds. OnLive is a registered trademark of OnLive, Inc. OpenGL is a registered trademark of Silicon Graphics International Corp. OTOY is a registered trademark of Julian M. Urbach. Radeon is a registered trademark of ATI Technologies Inc. VMware Fusion is a registered trademark of VMware, Inc.

References

[1] G. Armitage, "An Experimental Estimation of Latency Sensitivity in Multiplayer Quake 3," Proc. 11th IEEE Internat. Conf. on Networks (ICON '03) (Sydney, Aus., 2003), pp. 137–141.
[2] J. Barnes, "Linux DRM Developer's Guide," Intel Corporation, 2009.
[3] R. Budruk, D. Anderson, and T. Shanley, PCI Express System Architecture, Addison-Wesley, Boston, 2003.
[4] Compiz, http://www.compiz.org.
[5] DisplayLink, http://www.displaylink.com.
[6] M. Doggett, "Radeon HD 2900," Proc. Graphics Hardware Conf. (GH '07) (San Diego, CA, 2007).
[7] M. Dowty and J. Sugerman, "GPU Virtualization on VMware's Hosted I/O Architecture," ACM SIGOPS Operating Syst. Rev., 43:3 (2009), 73–82.
[8] Gaikai, http://www.gaikai.com.
[9] J. G. Hansen, "Blink: Advanced Display Multiplexing for Virtualized Applications," Proc. 17th Internat. Workshop on Network and Operating Syst. Support for Digital Audio and Video (NOSSDAV '07) (Urbana-Champaign, IL, 2007).
[10] C. Hecker, "An Open Letter to Microsoft: Do the Right Thing for the 3D Game Industry," Game Developer, Apr.–May (1997), 14–21.
[11] E. Kilgariff and R. Fernando, "The GeForce 6 Series GPU Architecture," GPU Gems 2: Programming Techniques for High-Performance Graphics and General-Purpose Computation (M. Pharr, ed.), Addison-Wesley, Upper Saddle River, NJ, 2005, Chapter 30.
[12] H. A. Lagar-Cavilla, N. Tolia, M. Satyanarayanan, and E. de Lara, "VMM-Independent Graphics Acceleration," Proc. 3rd Internat. Conf. on Virtual Execution Environments (VEE '07) (San Diego, CA, 2007), pp. 33–43.


[13] Microsoft, "Desktop Window Manager," http://msdn.microsoft.com/en-us/library/aa969540(VS.85).aspx.
[14] Microsoft, "Hardware Considerations for RemoteFX," http://technet.microsoft.com/en-us/library/ff817602(v=WS.10).aspx.
[15] Microsoft, "Hyper-V Server 2008 R2," http://www.microsoft.com/hyper-v-server/en/us/default.aspx.
[16] Microsoft, "Microsoft RemoteFX," http://technet.microsoft.com/en-us/library/ff817578%28WS.10%29.aspx.
[17] Microsoft, "Requirements and Limits for Virtual Machines and Hyper-V in Windows Server 2008 R2," http://technet.microsoft.com/en-us/library/ee405267(v=WS.10).aspx.
[18] OnLive, http://www.onlive.com.
[19] OTOY, http://www.otoy.com.
[20] K. Runge, "x11vnc: A VNC Server for Real X Displays," http://www.karlrunge.com/x11vnc/.
[21] C. Smowton, "Secure 3D Graphics for Virtual Machines," Proc. 2nd Eur. Workshop on Syst. Security (EUROSEC '09) (Nuremberg, Ger., 2009), pp. 36–43.
[22] S. Stegmaier, J. Diepstraten, M. Weiler, and T. Ertl, "Widening the Remote Visualization Bottleneck," Proc. 3rd IEEE Internat. Symp. on Image and Signal Processing and Anal. (ISPA '03) (Rome, Ita., 2003), vol. 1, pp. 174–179.
[23] S. Stegmaier, M. Magallón, and T. Ertl, "A Generic Solution for Hardware-Accelerated Remote Visualization," Proc. Symp. on Data Visualisation (VisSym '02) (Barcelona, Spn., 2002), pp. 87–94.
[24] The VirtualGL Project, "VirtualGL," http://www.virtualgl.org/About/Background.
[25] X.Org Foundation, "How VideoCards Work," http://www.x.org/wiki/Development/Documentation/HowVideoCardsWork#CRTCs.

(Manuscript approved October 2011)

MARTIN D. CARROLL is a distinguished member of technical staff at Bell Labs in Murray Hill, New Jersey. His work experience spans a variety of topics including the design and implementation of C++ programming language tools, environments, and libraries; bandwidth-scheduling algorithms for cable modem and passive optical networks; storage area networks; field programmable gate array (FPGA) programming; and cloud computing. He received a B.S. degree in computer science from Fairleigh Dickinson University in Teaneck, New Jersey, and a Ph.D. in computer science from Rutgers University in New Brunswick, New Jersey. Dr. Carroll has published a number of technical papers and is coauthor of a book (Designing and Coding Reusable C++). He currently holds five patents.

ILIJA HADŽIĆ is a distinguished member of technical staff at Bell Labs in Murray Hill, New Jersey. His current research interests are in the areas of data, optical, and converged broadband access networks, all with an emphasis on hardware and system-level software. He has been the lead implementer of many hardware and software systems that are now part of commercially available Alcatel-Lucent products. He received a B.S. degree from the University of Novi Sad, Serbia, and M.S. and Ph.D. degrees in electrical engineering from the University of Pennsylvania. Dr. Hadžić has published numerous technical papers in leading journals and conferences, and he currently holds six patents. In recognition of his achievements, he was named a Bell Labs Fellow in 2010.

WILLIAM A. KATSAK is a Ph.D. candidate in the computer science department at Rutgers University in New Brunswick, New Jersey. His current research topics include energy management and the use of green-energy sources in data centers and cloud computing environments, with a focus on real system implementations. He is also interested in many aspects of operating systems design, as well as systems infrastructure for 3D rendering. He received a B.S. degree in computer science from Bloomsburg University, Bloomsburg, Pennsylvania. Mr. Katsak has completed three consecutive summer internships at Bell Labs and currently works as a teaching assistant at Rutgers University. ◆



Similar Documents

Free Essay

Asa Asa

...Last February 29, we had a chance to visit Hospicio de San Jose and made some community service. At first, I really thought that it would just be the same feeling I felt before when I was still a Rotaractor in the district of Manila but it is indeed different, the experience is priceless. The whole experience was so heart-warming and from that experience I’ve learned a lot. First, you can feel the Christmas spirit anytime of the year even if it is not yet on season. Having a community service there with some of the kids is like celebrating Christmas emphasizing the joy of loving and giving. The Christmas is in my heart; the nursing of those little kids, the playing with them, and the giving of gifts to them are just some of the affirmation of the true spirit of Christmas I felt while doing the service. Second, all that we have is indeed a great gift from our Loving Father. Through this activity, I’ve come to realized how lucky we are to have our loving parents with us in our lives. The kids made me more love my parents and appreciate every single thing they’re doing for us, their children. Third, these kids are not sinister instead they are so lucky. They are lucky for they have been saved from experiencing a more hurtful one and are now being taken cared of those people who are all so warm and caring. They are lucky for they have bunch of sisters and brothers who will heed them every time they needed them the most and unlike their parents, will never leave their side. Fourth...

Words: 375 - Pages: 2

Premium Essay

Asas

...* Lecture Notes CJ 110 Weeks 13 & 14 4/22 - 5/3 Video: The New Asylums * Arraignment * The first step in the criminal proceeding * Defendant appears before the judge to be advised of the charges and enter a plea * Bail * The amount of money or conditions set by the court to ensure that the defendant will appear for further criminal proceedings * Look at many factors: * Uncertainty, risk, overcrowding, ties to the community * Release on Recognizance (ROR) * Cash * Bail Bond * Personal property * Plea Bargaining * Defense and Prosecution form an agreement for some form of leniency (Ex. plead guilty for lesser charge, fewer charges, reduced sentence, etc.) * More than 90% of all cases * Advantages: * Avoid an expensive trial * Case is handled quickly * Less work for attorneys * Witnesses/jury not forced to appear for trial * Disadvantages? * Defendant is entitled to jury by peers and to face his/her accuser * May plead guilty out of fear (when innocent) * Overcharging by prosecutors as incentive * Not potentially getting full range of punishment/sentence for the crime * Pretrial Detention in Jail * Offenders are detained if considered a flight risk in order to assure appearance in court * Detained if considered dangerous * Preventive...

Words: 461 - Pages: 2

Premium Essay

Asas

...The Pharmaceutical industry in India is the world's third-largest in terms of volume and stands 14th in terms of value. The Indian pharmaceutical industry has become the third largest producer in the world and is poised to grow into an industry of $ 20 billion in 2015 from the current turnover of $ 12 billion The government started to encourage the growth of drug manufacturing by Indian companies in the early 1960s, and with the Patents Act in 1970.[5] However, economic liberalization in 90s by the former Prime Minister P.V. Narasimha Rao and the thenFinance Minister, Dr. Manmohan Singh enabled the industry to become what it is today. The lack of patent protection made the Indian market undesirable to the multinational companies that had dominated the market, and while they streamed out. Indian companies carved a niche in both the Indian and world markets with their expertise in reverse-engineering new processes for manufacturing drugs at low costs. Although some of the larger companies have taken baby steps towards drug innovation, the industry as a whole has been following this business model until the present. The number of purely Indian pharma companies is fairly low. Indian pharma industry is mainly operated as well as controlled by dominant foreign companies having subsidiaries in India due to availability of cheap labour in India at lowest cost. In 2002, over 20,000 registered drug manufacturers in India sold $9 billion worth of formulations and bulk drugs. 85% of...

Words: 2235 - Pages: 9

Free Essay

Asas

...Managerial Mathematics(QQM 1023) Tutorial 2 – Introduction to Function 1. Which of the following equations define y as a function of x? a) y = 3x + 1 b) y = 2x2 c) y = 5 d) y = 2x e) x = 3 f) y2 = x g) y = x3 h) y = [pic] i) y = x j) y = [pic] 2. Determine types of function for the following equations: a) f(x) = 2 b) g(x) = [pic] c) f(x) = 4 – x d) f(x) = 2x e) g(x) = x2 + 3x f) h(x) = 2x g) h(x) = ex h) g(x) = x2 i) h(x) = [pic] j) h(x) = [pic] 3. Find the values for each function based on the inputs given: a) f(x) = 3x – 5, f(-1) and f(0) b) g(x) = x2 – 3x, g(4) and g(-2) + g(0) c) f(t) = [pic], f(2) and f(0) d) g(t) = 2t , g(3) and g(0) – g(1) e) h(x) = [pic] , h(-1) and h(3) f) h(x) =[pic] , h(-3) and h(1) + h(0) g) g(x) = [pic] , g(1) and g(-1) h) f(x) = [pic] , f(1) and f(9) – f(-1) i) g(x) = [pic] , g(2) and g(0) + g(-2) j) f(x) = 5 , f(0) and f(-1) – f(5) 4. Find the domain of each function: a) f(x) = [pic] b) h(x) = [pic] c) g(x) = [pic] d) g(x) = [pic] e) h(x) = [pic] f) f(x) = [pic] g) f(x) =[pic] h) h(x) = [pic] i) f(t) = 4t2 – 5 j) f(x) = 4 k) g(x) = [pic] l) f(x) =[pic] m) f(x) = [pic] n) h(x) = [pic] 5. Based on the graphs, determine the domain and range of the given functions. a) y = x...

Words: 561 - Pages: 3

Free Essay

Asas

...Application For Naturalization USCIS Form N-400 Department of Homeland Security U.S. Citizenship and Immigration Services For USCIS Use Only Date Stamp Receipt OMB No. 1615-0052 Expires 09/30/2015 Action Block Remarks Type or print all your answers in black ink. Type or print "N/A" if an item is not applicable or the answer is "none" unless otherwise indicated. Failure to answer all of the questions may delay USCIS processing your Form N-400. NOTE: You must complete Parts 1. - 14. Part 1. Information About Your Eligibility (Check only one box or your Form N-400 may be delayed) Enter Your 9 Digit A-Number: ► A- You are at least 18 years old and 1. Have been a Permanent Resident of the United States for at least 5 years. 2. Have been a Permanent Resident of the United States for at least 3 years. In addition, you have been married to and living with the same U.S. citizen spouse for the last 3 years, and your spouse has been a U.S. citizen for the last 3 years at the time of filing your Form N-400. 3. Are a Permanent Resident of the United States, and you are the spouse of a U.S. citizen, and your U.S. citizen spouse is regularly engaged in specified employment abroad. (Section 319(b) of the Immigration and Nationality Act) 4. Are applying on the basis of qualifying military service. 5. Other (explain): Part 2. Information About You (Person applying for naturalization) 1. Your Current Legal Name (do...

Words: 2108 - Pages: 9

Premium Essay

Asas

...Computer Organization and Architecture CHAPTER 01: Basic Concepts of Architecture and Assembly Language CONTENTS: CHAPTER 1.1: Basic Concepts of Computer Architecture Computer Organization and Architecture CHAPTER 1.2: Basic Hardware Components of a Computer System John Vee MI P. Martinez, CSIT Instructor College of Information and Computing Sciences KING’S COLLEGE OF THE PHILIPPINES CHAPTER 1.3: Assembly Language CHAPTER 1.4: Programmer's View of a Computer System Computer Organization and Architecture CHAPTER 1.1: Basic Concepts of Computer Architecture Instructor: John Vee MI P. Martinez CHAPTER 1.1: Basic Concepts of Computer Architecture ASSIGNMENT #01: LEARNING OUTCOME #01: Next Learning Outcome: After engaging in each topic, students should have: 1) Differentiate Computer Organization and Computer Architecture?  ¼ Yellow Paper, to be submitted next meeting. LO-01: Distinguished the difference between Computer Architecture and Computer Organization, and discussed the different types of architecture. 1) 2) Computer Organization and Architecture Instructor: John Vee MI P. Martinez Computer Architecture vs. Computer Organization Types of Architecture Computer Organization and Architecture Instructor: John Vee MI P. Martinez LO 1.1 – Computer Architecture vs. Computer Organization LO 1.1 – Computer Architecture vs. Computer Organization COMPUTER ARCHITECTURE: COMPUTER ARCHITECTURE: ...

Words: 4567 - Pages: 19

Premium Essay

Asas

...BANGLADESH STOCK MARKET GROWING? KEY INDICATORS BASED ASSESSMENT M Khokan Bepari Assistant Professor Department of Cooperation and Marketing Faculty of Agricultural Economics & Rural Sociology Bangladesh Agricultural University, Mymensingh-2202, Bangladesh Phone: +88 01716601759 khokan552@yahoo.com Dr. Abu Mollik Senior Lecturer and Finance Discipline Leader School of Commerce and Marketing Division of Business and Informatics Central Queensland University abumollik@yahoo.com.au Abstract This paper focuses on the growth of Bangladesh stock market over time. The market trends in terms of market capitalization, market liquidity, market concentration, number of listings, volatility in the market index and foreign portfolio investment were considered. The study finds that key indicators are significantly correlated. Stock market growth index is constructed considering market capitalization ratio; turn over ratio, value traded to GDP ratio and volatility in market index. The findings of the study suggest that although Bangladesh stock market is growing over time, the growth has not yet assumed any stable and obvious trend. We conclude that Bangladesh stock market is still at an early stage of its growth path with a small market size relative to GDP and is characterized by poor liquidity and high market concentration. Introduction Demirguc-Kunt and Levine (1996), Singh (1997) and Levine and Zervos (1998) find that stock market growth plays an...

Words: 3720 - Pages: 15
