CHAPTER – 1
1.1 Introduction to Graphic Processing
Graphics processing, occasionally called visual processing, is the field of computer science in which graphics are processed and the corresponding surfaces are created on display devices. The field has come a long way, from black-and-white pixel displays to color displays and now to full-HD displays. A great deal of engineering goes into getting graphics onto the screen, involving work across hardware platforms, supporting drivers and operating systems.
Graphics processing is further classified into many types, which are covered in later chapters, but the core component of computing graphics is floating-point calculation.
General-purpose microprocessors have built-in floating-point units that are used for mathematical calculations. But for generating an entire graphical surface at, say, 1920x1080 resolution, which is the most popular nowadays, the floating-point units inside general-purpose microprocessors are not sufficient. We therefore use a coprocessor to do these floating-point computations. These coprocessors are very popular nowadays and are called GPUs (Graphics Processing Units). They now come as a standard configuration with every mid-range and high-end computer.
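As a rough back-of-the-envelope check on this claim, the following Python sketch counts the pixels in one full-HD frame and the floating-point throughput needed per second. The per-pixel cost of 100 floating-point operations is an illustrative assumption, not a measured figure.

```python
# Rough arithmetic behind the claim that a CPU's FPU alone struggles
# with full-HD rendering.

WIDTH, HEIGHT = 1920, 1080          # full-HD resolution
FPS = 60                            # a common refresh target
FLOPS_PER_PIXEL = 100               # assumed shading cost per pixel

pixels_per_frame = WIDTH * HEIGHT
flops_per_second = pixels_per_frame * FPS * FLOPS_PER_PIXEL

print(pixels_per_frame)        # 2,073,600 pixels in every frame
print(flops_per_second / 1e9)  # ~12.4 GFLOP/s just for shading
```

Even under these modest assumptions, the shading workload alone runs to billions of operations per second, which is why a dedicated coprocessor is used.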
There is a wide range of applications for graphics. One of the most popular is graphic games, a billion-dollar industry worldwide. This document discusses such graphics and their graphics processing units, along with their applications, the latest trends in technology, various display configurations and compatibility.
CHAPTER – 2
2.1 Graphics
Graphics (from Greek graphikos) are visual presentations on some surface, such as a wall, canvas, screen, paper, or stone, made to brand, inform, illustrate, or entertain. The word graphics is derived from the word graph. A graph has x and y axes; in the same way, something created in the digital world is seen on a digital screen, and this screen also has x and y axes.
So the output on any digital device is termed graphics. In other words, an image that is generated by a computer is called a graphic.
Examples are photographs, drawings, line art, graphs, diagrams, typography, numbers, symbols, geometric designs, maps, engineering drawings, and other images.
Graphics often combine text, illustration, and color. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flier, poster, web site, or book without any other element. Clarity or effective communication may be the objective, association with other cultural elements may be sought, or merely, the creation of a distinctive style.
Graphics can be functional or artistic. The latter can be a recorded version, such as a photograph, or an interpretation by a scientist to highlight essential features, or by an artist, in which case the distinction from imaginary graphics may become blurred. Graphics come in many types and have a wide range of applications, from small pocket video games to smartphones. Every electronic gadget nowadays comes with computer-generated graphics.
There are two types of computer graphics: raster graphics, where each pixel is separately defined (as in a digital photograph), and vector graphics, where mathematical formulas are used to draw lines and shapes, which are then interpreted at the viewer's end to produce the graphic. Using vectors results in infinitely sharp graphics and often smaller files but, when complex, vectors take time to render and may produce larger files than a raster equivalent.
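To make the raster/vector distinction concrete, the sketch below converts a vector description of a line (just its two endpoints) into raster pixels using Bresenham's classic line algorithm. The function name and coordinates are illustrative choices, not taken from any particular graphics library.

```python
# A vector graphic stores only the line's endpoints; a raster display
# must turn them into pixels. Bresenham's algorithm does this with
# integer arithmetic only.

def rasterize_line(x0, y0, x1, y1):
    """Return the list of pixel coordinates approximating the line."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

# The same vector description ("a line from (0,0) to (6,3)") only becomes
# pixels at rasterization time, which is why vectors scale without blurring.
print(rasterize_line(0, 0, 6, 3))
```

Scaling the endpoints up before rasterizing yields a correspondingly finer set of pixels, whereas scaling an already-rasterized image can only enlarge its existing pixels.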
In 1950, the first computer-driven display was attached to MIT's Whirlwind I computer to generate simple pictures. This was followed by MIT's TX-0 and TX-2; interactive computing on these machines increased interest in computer graphics during the late 1950s.
In 1962, Ivan Sutherland invented Sketchpad, an innovative program that influenced alternative forms of interaction with computers.
In the mid-1960s, large computer graphics research projects were begun at MIT, General Motors, Bell Labs, and Lockheed Corporation. Douglas T. Ross of MIT developed an advanced compiler language for graphics programming. S. A. Coons, also at MIT, and J. C. Ferguson at Boeing began work on sculptured surfaces. GM developed their DAC-1 system, and other companies, such as Douglas, Lockheed, and McDonnell, also made significant developments. In 1968, ray tracing was first described by Arthur Appel of the IBM Research Center, Yorktown Heights, N.Y.
During the late 1970s, personal computers became more powerful, capable of drawing both basic and complex shapes and designs. In the 1980s, artists and graphic designers began to see the personal computer, particularly the Commodore Amiga and the Macintosh, as a serious design tool, one that could save time and draw more accurately than other methods. 3D computer graphics became possible in the late 1980s with the powerful SGI computers, which were later used to create some of the first fully computer-generated short films at Pixar. The Macintosh remains one of the most popular tools for computer graphics in graphic design studios and businesses.
Modern computer systems, dating from the 1980s and onwards, often use a graphical user interface (GUI) to present data and information with symbols, icons and pictures, rather than text. Graphics are one of the five key elements of multimedia technology.
3D graphics became more popular in the 1990s in gaming, multimedia and animation. In 1995, Toy Story, the first full-length computer-generated animated film, was released in cinemas. In 1996, Quake, one of the first fully 3D games, was released. Since then, computer graphics have become more accurate and detailed, due to more advanced computers and better 3D modeling software applications, such as Maya, 3D Studio Max, and Cinema 4D.
Another use of computer graphics is screensavers, originally intended to prevent the layout of much-used GUIs from 'burning into' the computer screen. They have since evolved into true pieces of art. Modern screens are not susceptible to such burn-in artifacts.
2.2 Types of Graphics
2.2.1 Computer Generated Imagery (CGI)
This is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, commercials, and simulators. The visual scenes may be dynamic or static, and may be 2D, though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television. They can also be used by a home user and edited together on programs such as Windows Movie Maker or iMovie.
The term computer animation refers to dynamic CGI rendered as a movie. The term virtual world refers to agent-based, interactive environments.
Computer graphics software is used to make computer-generated imagery for movies, etc. Recent availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers. This has brought about an internet subculture with its own set of global celebrities, clichés, and technical vocabulary.
2.2.2 3D Computer Graphics
3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time.
3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.
3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. Thanks to 3D printing, 3D models are also not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
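The step from 3D geometric data to a 2D image can be illustrated with a minimal perspective-projection sketch. The pinhole-camera model and the focal-length value here are simplifying assumptions for illustration; real renderers use full 4x4 matrix pipelines.

```python
# Projecting a 3D Cartesian point onto a 2D image plane with a simple
# pinhole/perspective model: divide x and y by depth z.

def project(point3d, focal_length=1.0):
    """Perspective-project (x, y, z) onto the z = focal_length image plane."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# Doubling a point's distance halves its projected coordinates, which is
# the foreshortening that makes a rendered image "look 3D".
print(project((2.0, 1.0, 4.0)))   # (0.5, 0.25)
print(project((2.0, 1.0, 8.0)))   # (0.25, 0.125)
```

Applying this to every vertex of a model, then rasterizing the resulting 2D triangles, is the essence of the 3D rendering process described above.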
2.2.3 Vector Graphics
Vector graphics refers to the use of geometrical primitives such as points, lines, and curves (i.e. shapes based on mathematical equations) instead of resolution-dependent bitmap graphics to represent images in computer graphics. In video games this type of projection is somewhat rare, but it has become more common in recent years in browser-based gaming with the advent of Flash, since Flash supports vector graphics natively. An earlier example for the personal computer is Starglider (1986).
A variety of computer graphic techniques have been used to display video game content throughout the history of video games. The predominance of individual techniques has evolved over time, primarily due to hardware advances and restrictions such as the processing power of central or graphics processing units.
Some of the earliest video games were text games or text-based games that used text characters instead of bitmapped or vector graphics. Examples include MUDs (Multi-User Dungeons), where players could read or view depictions of rooms, objects, other players, and actions performed in the virtual world, and roguelikes, a sub-genre of role-playing video games featuring many monsters, items, and environmental effects, as well as an emphasis on randomization, replayability and permanent death. Some of the earliest text games were developed for computer systems which had no video display at all.
Text games are typically easier to write and require less processing power than graphical games, and thus were more common from 1970 to 1990. However, terminal emulators are still in use today, and people continue to play MUDs and explore interactive fiction. Many beginning programmers still create these types of games to familiarize themselves with programming languages, and contests are held even today on who can finish programming a game within a short time period.
Vector game can also refer to a video game that uses a vector graphics display, capable of projecting images with an electron beam that draws lines instead of pixels, much like a laser show. Many early arcade games used such displays, as they were capable of displaying more detailed images than raster displays on the hardware available at that time. Many vector-based arcade games used full-color overlays to complement the otherwise monochrome vector images. Other uses of these overlays were very detailed drawings of the static gaming environment, while the moving objects were drawn by the vector beam. Games of this type were produced mainly by Atari, Cinematronics, and Sega.
Examples of vector games include Armor Attack, Eliminator, Lunar Lander, Space Fury, Space Wars, Star Trek, Tac/Scan, Tempest and Zektor. The Vectrex home console also used a vector display. After 1985, the use of vector graphics declined substantially due to improvements in sprite technology; rasterized 3D filled-polygon graphics were returning to the arcades and were so popular in the late 1980s that vector graphics could no longer compete.
2.2.4 Fixed 3D Graphics
Fixed 3D refers to a three-dimensional representation of the game world where foreground objects (i.e. game characters) are typically rendered in real time against a static background. The principal advantage of this technique is its ability to display a high level of detail on minimal hardware. The main disadvantage is that the player's frame of reference remains fixed at all times, preventing players from examining or moving about the environment from multiple viewpoints.
Backgrounds in fixed 3D games tend to be pre-rendered two-dimensional images, but are sometimes rendered in real time (e.g. Blade Runner). The developers of SimCity 4 took advantage of fixed perspective by not texturing the reverse sides of objects (and thereby speeding up rendering) which players could not see anyway. Fixed 3D is also sometimes used to "fake" areas which are inaccessible to players. The Legend of Zelda:
Ocarina of Time, for instance, is nearly completely 3D, but uses fixed 3D to represent many of the building interiors as well as one entire town. (This technique was later dropped in favor of full-3D in the game's successor, The Legend of Zelda: Majora's Mask.) A similar technique, the skybox, is used in many 3D games to represent distant background objects that are not worth rendering in real time.
Used heavily in the survival horror genre, fixed 3D was first seen in Infogrames'
Alone in the Dark series in the early 1990s. It was later revived and brought up-to-date by
Capcom in the Resident Evil series. Gameplay-wise there is little difference between fixed
3D games and their 2D precursors. Players' ability to navigate within a scene still tends to be limited, and interaction with the game world remains mostly "point-and-click".
Further examples include the PlayStation-era titles in the Final Fantasy series
(Square); the role-playing games Parasite Eve and Parasite Eve II (Square); the actionadventure games Ecstatica and Ecstatica 2 (Andrew Spencer/Psygnosis), as well as Little
Big Adventure (Adeline Software International); the graphic adventure Grim Fandango
(LucasArts); and 3D Movie Maker (Microsoft Kids).
Pre-rendered backgrounds are also found in some isometric video games, such as the role-playing game The Temple of Elemental Evil (Troika Games) and the Baldur's Gate series (BioWare); though in these cases the form of graphical projection used is not different.
CHAPTER – 3
3.1 Graphic Cards
Like a motherboard, a graphics card is a printed circuit board that houses a processor and RAM. It also has a basic input/output system (BIOS) chip, which stores the card's settings and performs diagnostics on the memory, input and output at startup. A graphics card's processor, called a graphics processing unit (GPU), is similar to a computer's CPU.
A GPU, however, is designed specifically for performing the complex mathematical and geometric calculations that are necessary for graphics rendering. Some of the fastest GPUs have more transistors than the average CPU. A GPU produces a lot of heat, so it is usually located under a heat sink or a fan.
In addition to its processing power, a GPU uses special programming to help it analyze and use data. ATI and NVidia produce the vast majority of GPUs on the market, and both companies have developed their own enhancements for GPU performance. To improve image quality, the processors use full-scene anti-aliasing (FSAA), which smooths the edges of 3-D objects, and anisotropic filtering (AF), which makes images look crisper. Each company has also developed specific techniques to help the GPU apply colors, shading, textures and patterns.
As the GPU creates images, it needs somewhere to hold information and completed pictures. It uses the card's RAM for this purpose, storing data about each pixel, its color and its location on the screen. Part of the RAM can also act as a frame buffer, meaning that it holds completed images until it is time to display them. Typically, video RAM operates at very high speeds and is dual-ported, meaning that the system can read from it and write to it at the same time.
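A quick sketch of how much of the card's RAM a single frame buffer occupies, per the per-pixel storage just described. The 32-bit color format (8 bits each for red, green, blue and alpha) is a common but here assumed choice; real cards also hold depth buffers, textures and other data.

```python
# Frame buffer size = width x height x bytes per pixel.

def framebuffer_bytes(width, height, bits_per_pixel=32):
    return width * height * bits_per_pixel // 8

fhd = framebuffer_bytes(1920, 1080)   # one full-HD frame
print(fhd)                  # 8,294,400 bytes
print(fhd / (1024 * 1024))  # ~7.9 MB; double buffering doubles this
```

Compared with the multiple gigabytes of RAM on modern cards, the frame buffer itself is small; most video memory goes to textures and geometry.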
The RAM connects directly to the digital-to-analog converter, called the DAC. This converter, also called the RAMDAC, translates the image into an analog signal that the
monitor can use. Some cards have multiple RAMDACs, which can improve performance and support more than one monitor. The RAMDAC sends the final picture to the monitor through a cable.
Graphics cards connect to the computer through the motherboard. The motherboard supplies power to the card and lets it communicate with the CPU. Newer graphics cards often require more power than the motherboard can provide, so they also have a direct connection to the computer's power supply. Connections to the motherboard are usually through Peripheral Component Interconnect (PCI), Accelerated Graphics Port (AGP) or PCI Express (PCIe). PCI Express is the newest of the three and provides the fastest transfer rates between the graphics card and the motherboard. PCIe also supports the use of two graphics cards in the same computer.
Most graphics cards have two monitor connections. Often, one is a DVI connector, which supports LCD screens, and the other is a VGA connector, which supports CRT screens. Some graphics cards have two DVI connectors instead. But that doesn't rule out using a CRT screen; CRT screens can connect to DVI ports through an adapter. At one time, Apple made monitors that used the proprietary Apple Display Connector (ADC).
Although these monitors are still in use, new Apple monitors use a DVI connection.
Most people use only one of their two monitor connections. People who need to use two monitors can purchase a graphics card with dual head capability, which splits the display between the two screens. A computer with two dual head, PCIe-enabled video cards could theoretically support four monitors.
In addition to connections for the motherboard and monitor, some graphics cards have connections for TV display: TV-out, S-video, ViVo (video in/video out), FireWire, USB, etc. Some cards also incorporate TV tuners.
Graphics cards have come a long way since IBM introduced the first one in 1981. Called the Monochrome Display Adapter (MDA), the card provided text-only displays of green or white text on a black screen. Now, the minimum standard for new video cards is Video Graphics Array (VGA), which allows 256 colors. With high-performance standards like Quantum Extended Graphics Array (QXGA), video cards can display millions of colors at resolutions of up to 2048 x 1536 pixels.
3.2 Graphic Processing Unit (GPU)
A graphics processing unit (GPU), also occasionally called a visual processing unit
(VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display.
GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel. In a personal computer, a GPU can be present on a video card, or it can be on the motherboard or in certain CPUs on the CPU die.
The term GPU was popularized by NVidia in 1999, who marketed the GeForce 256 as "the world's first 'GPU', or Graphics Processing Unit, a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that are capable of processing a minimum of 10 million polygons per second". Competitor ATI
Technologies coined the term visual processing unit or VPU with the release of the Radeon
9700 in 2002.
With the advent of the OpenGL API and similar functionality in DirectX, GPUs added programmable shading to their capabilities. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the
screen. NVidia was first to produce a chip capable of programmable shading, the GeForce
3 (code named NV20). By October 2002, with the introduction of the ATI Radeon 9700
(also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating point math, and in general were quickly becoming as flexible as CPUs, and orders of magnitude faster for image-array operations.
Pixel shading is often used for effects like bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.
With the introduction of the GeForce 8 series and its then-new generic stream processing units, GPUs became more generalized computing devices. Today, parallel GPUs have begun making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or GPGPU (General-Purpose computing on GPUs), has found its way into fields as diverse as machine learning, oil exploration, scientific image processing, linear algebra, statistics, 3D reconstruction and even stock options pricing.
NVidia’s CUDA platform was the earliest widely adopted programming model for GPU computing.
Modern GPUs use most of their transistors to do calculations related to 3D computer graphics. They were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders, which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces. Because most of these computations involve matrix and vector operations, engineers and scientists have increasingly studied the use of GPUs for non-graphical calculations. An example of GPUs being used non-graphically is the mining of Bitcoins, where the GPU is used to compute hash functions.
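For Bitcoin specifically, the hash function in question is double SHA-256 applied to a block header while a nonce is varied until the digest falls below a difficulty target. The toy Python sketch below shows the idea on a CPU; the header bytes and difficulty here are made up, and real mining runs this search massively in parallel on GPUs or ASICs.

```python
# A toy version of the proof-of-work search: hash (header + nonce) with
# double SHA-256 until the digest starts with enough zero bytes.

import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, leading_zero_bytes: int) -> int:
    """Find a nonce whose double-SHA-256 digest starts with enough zero bytes."""
    nonce = 0
    target = b"\x00" * leading_zero_bytes
    while True:
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine(b"toy-block-header", leading_zero_bytes=1)
print(nonce)
```

Because every nonce can be tried independently, the search is embarrassingly parallel, which is exactly the kind of workload the GPU's many stream processors excel at.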
3.3 DirectX and OpenGL
Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially game programming and video, on
Microsoft platforms. Originally, the names of these APIs all began with Direct, such as
Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth. The name
DirectX was coined as a shorthand term for all of these APIs (the X standing in for the particular API names) and soon became the name of the collection. When Microsoft later set out to develop a gaming console, the X was used as the basis of the name Xbox to indicate that the console was based on DirectX technology. The X initial has been carried forward in the naming of APIs designed for the Xbox, such as XInput and the Cross-platform Audio Creation Tool (XACT), while the DirectX pattern has been continued for Windows APIs such as Direct2D and DirectWrite.
Direct3D (the 3D graphics API within DirectX) is widely used in the development of video games for Microsoft Windows, Microsoft Xbox, Microsoft Xbox 360 and some
Sega Dreamcast games. Direct3D is also used by other software applications for visualization and graphics tasks such as CAD/CAM engineering. As Direct3D is the most widely publicized component of DirectX, it is common to see the names "DirectX" and
"Direct3D" used interchangeably.
The DirectX software development kit (SDK) consists of runtime libraries in redistributable binary form, along with accompanying documentation and headers for use in coding. Originally, the runtimes were only installed by games or explicitly by the user.
Windows 95 did not launch with DirectX, but DirectX was included with Windows 95
OEM Service Release 2. Windows 98 and Windows NT 4.0 both shipped with DirectX, as has every version of Windows released since. The SDK is available as a free download.
While the runtimes are proprietary, closed-source software, source code is provided for most of the SDK samples. Starting with the release of Windows 8 Developer Preview,
DirectX SDK has been integrated into Windows SDK.
Direct3D 9Ex, Direct3D 10, and Direct3D 11 are only available for Windows Vista and newer, because each of these new versions was built to depend upon the new Windows Display Driver Model that was introduced for Windows Vista. The new Vista/WDDM graphics architecture includes a new video memory manager that supports virtualization of graphics hardware for various applications and services, such as the Desktop Window Manager.
OpenGL (Open Graphics Library) is a cross-language, multi-platform API for rendering 2D and 3D computer graphics. The API is typically used to interact with a GPU, to achieve hardware-accelerated rendering. OpenGL was developed by Silicon Graphics
Inc. (SGI) from 1991 and released in January 1992 and is widely used in CAD, virtual reality, scientific visualization, information visualization, flight simulation, and video games. OpenGL is managed by the non-profit technology consortium Khronos Group.
The OpenGL specification describes an abstract API for drawing 2D and 3D graphics. Although it's possible for the API to be implemented entirely in software, it's designed to be implemented mostly or entirely in hardware.
The API is defined as a number of functions which may be called by the client program, alongside a number of named integer constants (for example, the constant GL_TEXTURE_2D). Although the function definitions are superficially similar to those of the C programming language, they are language-independent, and OpenGL has many language bindings: among them the JavaScript binding WebGL (for 3D rendering from within a web browser), the C bindings WGL, GLX and CGL, the C binding provided by iOS, and the Java and C bindings provided by Android.
In addition to being language-independent, OpenGL is also platform-independent.
The specification says nothing on the subject of obtaining, and managing, an OpenGL context, leaving this as a detail of the underlying windowing system. For the same reason,
OpenGL is purely concerned with rendering, providing no APIs related to input, audio, or windowing.
3.4 Integrated Graphics
Many motherboards have integrated graphics capabilities and function without a separate graphics card. These motherboards handle 2-D images easily, so they are ideal for productivity and Internet applications. Plugging a separate graphics card into one of these motherboards overrides the onboard graphics functions.
3.5 Working of Graphic Cards
Figure 3.1 Parts of a Graphic Card
The images on a monitor are made of tiny dots called pixels. At the most common resolution settings, a screen displays over a million pixels, and the computer has to decide what to do with every one of them in order to create an image. To do this, it needs a translator -- something to take binary data from the CPU and turn it into a picture you can see. Unless a computer has graphics capability built into the motherboard, that translation takes place on the graphics card.
Figure 3.2 Graphic Card Internal Parts
A graphics card's job is complex, but its principles and components are easy to understand. In this section, we will look at the basic parts of a video card and what they do. We will also examine the factors that work together to make a fast, efficient graphics card.
The CPU, working in conjunction with software applications, sends information about the image to the graphics card. The graphics card decides how to use the pixels on the screen to create the image. It then sends that information to the monitor through a cable.
Creating an image out of binary data is a demanding process. To make a 3-D image, the graphics card first creates a wire frame out of straight lines. Then, it rasterizes the image (fills in the remaining pixels). It also adds lighting, texture and color. For fast-paced games, the computer has to go through this process about sixty times per second. Without a graphics card to perform the necessary calculations, the workload would be too much for the computer to handle.
The graphics card accomplishes this task using: a motherboard connection for data and power; a processor to decide what to do with each pixel on the screen; memory to hold information about each pixel and to temporarily store completed pictures; and a monitor connection so you can see the final result.
Figure 3.3 shows what a graphic card looks like from the outside. Figure 3.4 shows the heat sink of the card. The heat sink is widely distributed, covering all the memory modules of the graphic card. In the case of this card there are 12 memory chips of 256 MB each, making a total of 3 GB of graphics RAM.
Figure 3.3 Top View of a Graphic Card
Figure 3.4 Heat Sink of a Graphic Card
3.6 Choosing a Good Graphic Card
A top-of-the-line graphics card is easy to spot. It has lots of memory and a fast processor. Often, it's also more visually appealing than anything else that's intended to go
inside a computer's case. Lots of high-performance video cards are illustrated or have decorative fans or heat sinks.
But a high-end card provides more power than most people really need. People who use their computers primarily for e-mail, word processing or Web surfing can find all the necessary graphics support on a motherboard with integrated graphics. A mid-range card is sufficient for most casual gamers. People who need the power of a high-end card include gaming enthusiasts and people who do lots of 3-D graphic work.
A good overall measurement of a card's performance is its frame rate, measured in frames per second (FPS). The frame rate describes how many complete images the card can display per second. The human eye can process about 25 frames every second, but fast-action games require a frame rate of at least 60 FPS to provide smooth animation and scrolling. The components of the frame rate are described below.
Triangles or vertices per second describes how quickly the GPU can calculate a whole polygon, or the vertices that define it. 3-D images are made of triangles, or polygons, so in general this figure describes how quickly the card builds a wire-frame image.
Pixel fill rate is a measurement describing how many pixels the GPU can process in a second, which translates to how quickly it can rasterize the image.
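These rates can be tied back to the frame rate with simple arithmetic. The sketch below computes the fill rate implied by full-HD resolution at 60 FPS; overdraw (pixels shaded more than once per frame), ignored here, raises the real requirement.

```python
# Minimum fill rate = resolution x frame rate.

def required_fill_rate(width, height, fps):
    """Pixels per second the GPU must rasterize for smooth animation."""
    return width * height * fps

rate = required_fill_rate(1920, 1080, 60)
print(rate / 1e6)   # ~124.4 megapixels per second at full HD, 60 FPS
```

A card whose quoted fill rate falls below this figure cannot sustain 60 FPS at that resolution, no matter how fast its other components are.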
The graphics card's hardware directly affects its speed. The hardware specifications that most affect the card's speed, and the units in which they are measured, are: GPU clock speed (MHz), size of the memory bus (bits), amount of available memory (MB), memory clock rate (MHz), memory bandwidth (GB/s) and RAMDAC speed (MHz).
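The memory-bandwidth figure in this list follows directly from the bus width and memory clock. The sketch below computes it for assumed, illustrative numbers: a 256-bit bus and a 1 GHz memory clock with DDR-style double transfers per clock.

```python
# Peak bandwidth = (bus width in bytes) x clock x transfers per clock.

def memory_bandwidth_gbps(bus_width_bits, clock_hz, transfers_per_clock=2):
    """Peak memory bandwidth in gigabytes per second."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * clock_hz * transfers_per_clock / 1e9

print(memory_bandwidth_gbps(256, 1.0e9))   # 64.0 GB/s
```

This is why cards with a wider memory bus can outperform cards with more, but more narrowly connected, memory.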
The computer's CPU and motherboard also play a part, since a very fast graphics card cannot compensate for a motherboard's inability to deliver data quickly. Similarly, the card's connection to the motherboard and the speed at which it can get instructions from the CPU affect its performance. Finding a graphics card whose specifications offer the best of these numbers within a given price range will therefore make for the best buying decision.
CHAPTER – 4
4.1 AMD Eyefinity Technology
AMD Eyefinity technology is a solution developed by AMD that allows consumers to run up to six simultaneous displays off a single graphics card. This is a unique feature of AMD graphics products that cannot be found on any other consumer graphics solution at this time.
More importantly for consumers, AMD Eyefinity technology is not a feature we reserve for our most expensive products. Indeed, AMD Eyefinity technology is available on more than 45 consumer and professional-grade products. These products cover a very large spectrum of prices, giving you the flexibility to find the solution that you need.
4.2 Working of AMD Eyefinity Technology
On the hardware level, each graphics chip we manufacture is equipped with the ability to support a certain maximum number of displays. The graphics chip is then connected to display outputs (like DVI or Display Port), which allow you to physically connect displays. The number and type of display outputs will vary based upon the product and its display output configuration.
On the software side, the AMD Catalyst™ driver suite is the one-stop shop for configuring the way your connected displays actually behave. From configuring the orientation to combining their resolutions (more on that later), AMD Catalyst™ makes it easy to get multiple displays up and running.
4.3 Display Output ports supporting Eyefinity
Display outputs are the ports on the back of your graphics card, which can accept a connection with a monitor. The following pictures illustrate the outputs you might find on an AMD graphics product.
Digital Visual Interface (DVI)
Mini DisplayPort (mDP)
High-Definition Multimedia Interface (HDMI)
Video Graphics Array (VGA)
Figure 4.1 Display ports supported in Eyefinity
4.4 Supported Products
Eyefinity technology support was introduced in 2009 and has since expanded to many graphics products. Each product page details the available display connectors and the maximum number of displays supported on the reference design.
When a new graphics product is designed, a certain set of parts, materials and specifications adds up to a standard, or reference, design. This reference design makes it easy for the partner companies that actually sell Eyefinity products, such as Sapphire, XFX or ASUS, to manufacture them accordingly.
An AMD Radeon HD 6850 GPU using the official reference design from AMD.
Figure 4.2 Radeon HD 6850
However, design companies do not necessarily have to follow the reference design for a given product. They can often manufacture custom designs that offer different display output configurations, new cooling or tuned performance. As an example, the Sapphire HD 6870 FleX uses a unique chip and an included adapter to easily connect a third DVI display to the card's HDMI port. This pre-packaged solution simplifies Eyefinity technology by providing everything a user needs right out of the box.
Figure 4.3 FleX Edition of the Radeon HD 6870
The SAPPHIRE FleX HD 6870, a non-reference design, is built for Eyefinity technology. New cooling and an included adapter allow this product to support three DVI monitors out of the box.
Innovative designs like the Sapphire HD 6870 FleX demonstrate the flexibility built into Eyefinity technology. In other words, non-reference designs can make it even easier to find and configure an Eyefinity technology solution that meets the needs of the end user.
4.5 Bezel Compensation
In traditional multi-monitor setups, any piece of an object moving from one monitor to the next is simply chopped off and moved, regardless of how small that piece may be.
For example, a small piece of a character’s armor might reach the edge of one display, resulting in the armor appearing to “jump” crudely to the next display.
Figure 4.4 Without bezel compensation
In the picture above it can be seen how the edge of this character's shield in Dragon Age II does not transition to the other monitor as a player would expect.
This chopping may also cause objects to become misaligned as they pass between displays. That piece of armor on the next display may be positioned higher or lower than the player would expect it to be, and that effect can compromise the immersion of the game.
Bezel compensation corrects for this jarring visual anomaly. The image with Bezel compensation is as below.
Figure 4.5 With bezel compensation
Bezel compensation remedies these issues by treating the plastic frame of displays as an object that games and applications merely pass behind. The effect is subtle, but impressive. Objects are no longer interrupted by the bezel, and remain aligned when passing from one display to the next.
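Conceptually, bezel compensation enlarges the virtual desktop with hidden pixels behind each bezel, so objects keep their alignment while passing behind the frame. The sketch below assumes an illustrative bezel width of 40 pixels per edge; in practice the value is calibrated by the user in the driver:

```python
# Sketch of bezel compensation: the virtual desktop includes "hidden" pixels
# behind each bezel so objects stay aligned across monitors. The bezel width
# in pixels is an assumed value for illustration only.

def compensated_width(panels, panel_w, bezel_px):
    """Total virtual width: visible pixels plus hidden pixels per inner bezel pair."""
    inner_gaps = panels - 1  # gaps between adjacent monitors
    return panels * panel_w + inner_gaps * 2 * bezel_px  # two bezel edges per gap

# 3x1 group of 1920-wide panels, ~40 px of image hidden behind each bezel edge
print(compensated_width(3, 1920, 40))  # 5920
```

The game renders 5920 pixels across, but only 5760 of them are visible; the extra 160 pixels are the ones "behind" the plastic frames, which is what keeps objects aligned as they cross displays.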
Below are the possible configurations of Eyefinity with bezel compensation.
Figure 4.6 3x1 Landscape mode in Eyefinity
Figure 4.7 5x1 Landscape mode in Eyefinity
Figure 4.8 3x2 Landscape mode in Eyefinity
AMD CrossFireX (previously known as CrossFire) is a brand name for the multi-GPU solution by Advanced Micro Devices, originally developed by ATI Technologies. The technology allows up to four GPUs to be used in a single computer to improve graphics performance. CrossFire aims to deliver a gaming experience that works with all games, all the time, using the power of multiple GPUs within a single PC.
5.2.1 First Generation
CrossFire was first made available to the public on September 27, 2005. The system required a CrossFire-compliant motherboard with a pair of ATI Radeon PCI Express (PCIe) graphics cards. The Radeon X800, X850, X1800 and X1900 series came in a regular edition and a 'CrossFire Edition', which had 'master' capability built into the hardware. 'Master' capability refers to five extra image-compositing chips, which combine the output of both cards. One had to buy a Master card and pair it with a regular card from the same series. The Master card shipped with a proprietary DVI Y-dongle, which plugged into the primary DVI ports on both cards and into the monitor cable. This dongle served as the main link between both cards, sending incomplete images between them and complete images to the monitor. Low-end Radeon X1300 and X1600 cards had no 'CrossFire Edition' but were enabled via software, with communication forwarded over the standard PCI Express slots on the motherboard. ATI did not create the infrastructure to allow FireGL cards to be set up in a CrossFire configuration. The 'slave' graphics card needed to be from the same family as the 'master'.
An example of a limitation in regard to a Master-card configuration would be the first-generation CrossFire implementation in the Radeon X850 XT Master Card. Because it used a compositing chip from Silicon Image (SiI 163B TMDS), the maximum resolution on an X850 CrossFire setup was limited to 1600×1200 at 60 Hz, or 1920×1440 at 52 Hz.
This was considered a problem for CRT owners wishing to use CrossFire to play games at high resolutions, and for owners of widescreen LCD monitors. Since many people found a 60 Hz refresh rate on a CRT strained the eyes, the practical resolution limit became 1280×1024, which did not push CrossFire hard enough to justify its cost. The next generation of CrossFire, as employed by the X1800 Master cards, used two sets of compositing chips and a custom double-density dual-link DVI Y-dongle to double the bandwidth between cards, raising the maximum resolution and refresh rate to far higher levels.
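A rough back-of-the-envelope calculation shows why a single-link TMDS compositing chip hits these limits: a single DVI link tops out at a 165 MHz pixel clock, and each frame carries blanking overhead on top of the visible pixels. The overhead factor below is an assumed, approximate value (the exact figure depends on the timing standard):

```python
# Sketch: why a single-link TMDS channel caps resolution and refresh rate.
# A single-link DVI TMDS channel is limited to a 165 MHz pixel clock; the
# blanking overhead factor used here is an assumed approximation.

def required_pixel_clock_mhz(w, h, refresh_hz, blanking_overhead=0.22):
    """Approximate pixel clock needed for a mode, including blanking time."""
    return w * h * refresh_hz * (1 + blanking_overhead) / 1e6

print(round(required_pixel_clock_mhz(1600, 1200, 60)))  # 141 -> fits in 165 MHz
print(round(required_pixel_clock_mhz(1920, 1440, 60)))  # 202 -> exceeds one link
```

The second mode exceeds what one link can carry, which is consistent with the X850 setup having to drop 1920×1440 down to roughly 52 Hz.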
5.2.2 Second Generation
When used with ATI's "CrossFire Xpress 3200" motherboard chipset, a 'master' card is no longer required for every "CrossFire Ready" card (with the exception of the Radeon X1900 series). With the CrossFire Xpress 3200, two normal cards can be run in a CrossFire setup, using the PCI-E bus for communication. This is similar to X1300 CrossFire, which also uses PCI Express, except that the Xpress 3200 was built for low-latency, high-speed communication between graphics cards. Although performance was impacted, this move was viewed as an overall improvement in market strategy, because CrossFire Master cards were expensive, in very high demand, and largely unavailable at the retail level.
Figure 5.1 X850 XT CrossFire Edition master card
Figure 5.2 Extra chips visible with the cooler removed
Figure 5.3 The five chips that make up a Master card
Although the CrossFire Xpress 3200 chipset is indeed capable of CrossFire through the PCI-e bus for every Radeon series below the X1900s, the driver support for this CrossFire method has not yet materialized for the X1800 series. ATI has said that future revisions of the Catalyst driver suite will contain what is required for X1800 dongle-less CrossFire, but has not yet mentioned a specific date.
5.2.3 Current Generation (CrossFireX)
With the release of the Radeon X1950 Pro (RV570 GPU), ATI completely revised CrossFire's connection infrastructure, eliminating the need for the previous Y-dongle/Master card and slave card configurations. The CrossFire connector is now a ribbon-like connector attached to the top of each graphics adapter, similar to NVIDIA's SLI bridges but different in physical and logical nature. As such, Master cards no longer exist and are not required for maximum performance. Two connectors can be used per card; these were put to full use with the release of CrossFireX. Radeon HD 2900 and HD 3000 series cards use the same ribbon connectors, but the HD 3800 series cards require only one ribbon connector to facilitate CrossFireX. Unlike older series of Radeon cards, different HD 3800 series cards can be combined in CrossFire, each with separate clock control.
Since the release of AMD's desktop platform codenamed Spider on November 19, 2007, the CrossFire setup has supported a maximum of four video cards with the 790FX chipset, and the CrossFire branding was changed to "ATI CrossFireX". The setup, which according to AMD's internal testing brings at least a 3.2x performance increase in several games and applications that demand massive graphics capability, is targeted at the enthusiast market.
A later development to the CrossFire infrastructure was a dual-GPU solution with an on-board PCI-E bridge, released in early 2008 in the Radeon HD 3870 X2 and
later in the Radeon HD 4870 X2 graphics cards, featuring only one CrossFire connector for dual-card, four-GPU scalability.
5.3 Hybrid Crossfire (Dual Graphics)
There is also a "hybrid" mode of CrossFire which combines on-board graphics in the AMD northbridge architecture with selected graphics cards. This generation, called Hybrid CrossFireX, is available on motherboards with integrated AMD 7- and 8-series IGPs, and allows combining a discrete video card with the IGP for increased performance. The combination yields power savings when simple or 2D graphics are used, and performance increases of 25% to over 200% in 3D graphics over the non-CrossFire option. As of March 2012, this feature is called "AMD Radeon Dual Graphics" and means using A-series Fusion APUs together with video cards.
5.4 Advantages of Crossfire
One advantage is that CrossFire can be implemented with varying-GPU cards of the same generation (in contrast to NVIDIA's SLI, which generally only works if all cards have the same GPU). This allows buyers with varying budgets over time to purchase different cards and still get the benefit of increased performance. With the latest generation of cards, CrossFire only works between cards in the same sub-series: GPUs in the same hundred series can be CrossFired with each other, so a 5800-series GPU (e.g. a 5830) can run together with another 5800-series GPU (e.g. a 5870), but GPUs not in the same hundred series cannot be CrossFired successfully (e.g. a 5770 cannot run with a 5870). ATI CrossFire configurations can drive many monitors of varying size and resolution, while SLI only allows three monitors; the exception is NVIDIA Surround, which enables connection of up to four 2D displays and three 3D displays, although all displays must be the same resolution for this to work.
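The "same hundred series" rule described above can be sketched as a trivial check on the model number. This is an illustration of the rule only; actual compatibility is determined by the driver and is more nuanced than a numeric comparison:

```python
# Sketch of the "same hundred series" CrossFire pairing rule for Radeon HD
# 5000-era cards. Model numbers are compared naively; treat this as an
# illustration of the rule, not a real compatibility checker.

def same_sub_series(model_a, model_b):
    """Two cards pair only if they share the same hundred series (e.g. 58xx)."""
    return model_a // 100 == model_b // 100

print(same_sub_series(5830, 5870))  # True  (both 5800 series)
print(same_sub_series(5770, 5870))  # False (5700 vs 5800 series)
```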
5.5 Disadvantages of Crossfire
One disadvantage is that first-generation CrossFire implementations (the Radeon X800 to X1900 series) require an external Y-cable/dongle to operate in CrossFire mode, because the PCI-e bus cannot provide enough bandwidth to run CrossFire without losing a significant amount of performance. In some cases CrossFire does not improve 3D performance, and in extreme cases it can even lower the frame rate due to the particulars of an application's coding. This is also true for NVIDIA's SLI, as the problem is inherent in multi-GPU systems, and it is often witnessed when running an application at low resolutions. When using CrossFire with AFR (alternate frame rendering), the subjective frame rate can often be lower than the frame rate reported by benchmarking applications, and may even be poorer than the frame rate of the single-GPU equivalent. This phenomenon is known as micro-stuttering and also applies to SLI, since it is inherent to multi-GPU configurations.
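Micro-stuttering is easiest to see in frame times rather than average FPS: with AFR, frames can arrive in uneven short/long pairs, so the average looks fine while the long frames dominate what the player perceives. The frame times below are invented for illustration, not measured from real hardware:

```python
# Sketch: why AFR can feel worse than its average FPS suggests. The frame
# times are hypothetical values showing uneven AFR frame pacing.

frame_times_ms = [8, 25, 9, 24, 8, 26]  # alternating short/long frames

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
worst_fps = 1000 / max(frame_times_ms)  # closer to what the eye notices

print(round(avg_fps, 1))    # 60.0 FPS on paper
print(round(worst_fps, 1))  # 38.5 FPS during the long frames
```

A benchmark reporting the 60 FPS average hides the fact that every other frame takes three times as long as its neighbor, which is exactly the subjective stutter described above.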
A 32” monitor is affordable, and multiple 32” monitors are affordable, but a single 200” monitor is not practical. Eyefinity therefore helps extend the desktop across monitors, allowing displays as wide as 360”. For rendering such a wide screen, a single GPU is not sufficient to compute all 12 million pixels of every frame; with multiple GPUs this becomes possible. An enterprise graphics card is very expensive, so four midrange GPUs can instead be cascaded with CrossFire to get this resolution up and running. Likewise, a gamer who needs high resolution, or an animator who needs more graphics computing power, can turn to CrossFire technology. Though Eyefinity and CrossFire have many advantages, the key advantage is the affordability of the outcome.
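The 12-million-pixel figure mentioned above checks out for a six-panel group, assuming six 1920x1080 monitors:

```python
# Sketch: the pixel budget behind the "12 million pixels per frame" figure,
# assuming a six-panel Eyefinity group of 1920x1080 monitors.

panels = 6
pixels_per_frame = panels * 1920 * 1080
print(pixels_per_frame)       # 12441600 (~12.4 million)

# At 60 frames per second, the GPUs must shade roughly:
print(pixels_per_frame * 60)  # 746496000 pixels per second
```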