Real-time Rendering 5th Edition

Dhara Lyford
Aug 5, 2024, 5:13:32 AM
Thoroughly revised, this third edition focuses on modern techniques used to generate synthetic three-dimensional images in a fraction of a second. With the advent of programmable shaders, a wide variety of new algorithms have arisen and evolved over the past few years. This edition discusses current, practical rendering methods used in games and other applications. It also presents a solid theoretical framework and relevant mathematics for the field of interactive computer graphics, all in an approachable style. The authors have made the figures used in the book available for download for fair use: Download Figures.

Rendering ... has been completely revised and revamped for its updated third edition, which focuses on modern techniques used to generate three-dimensional images in a fraction of the time old processes took. From practical rendering for games to math and details for better interactive applications, it's not to be missed.

-- The Bookwatch, November 2008


Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.


Computers have been capable of generating simple 2D graphics such as lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics.


Different techniques for rendering now exist, such as ray tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience.


Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing has historically been far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle is positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) decomposes each triangle into fragments, atomic units corresponding to the pixels the triangle covers on the display. Each fragment is drawn to the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and shadow mapping can then alter that triangle's colors based on line-of-sight to light sources.
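To make the depth-buffer idea concrete, here is a minimal software sketch of the per-fragment depth test at the core of z-buffer rasterization. The Framebuffer type and writeFragment function are illustrative names, not any real graphics API; a rasterizer would invoke something like writeFragment for every fragment it generates inside a triangle.

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Illustrative z-buffer: one color and one depth value per pixel.
struct Framebuffer {
    int width, height;
    std::vector<std::uint32_t> color;  // packed RGBA, one value per pixel
    std::vector<float>         depth;  // z-buffer, one depth value per pixel

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // Keep the fragment only if it is closer than what is already stored.
    void writeFragment(int x, int y, float z, std::uint32_t rgba) {
        int i = y * width + x;
        if (z < depth[i]) {  // depth test: smaller z means closer to the camera
            depth[i] = z;
            color[i] = rgba;
        }
    }
};
```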


Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When graphics are used in films, by contrast, rendering need not happen in real time: the director has complete control over what is drawn on each frame, which can involve lengthy decision-making, and teams of people are typically involved in making those decisions.


Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed.[3] Some parameter adjustments in fractal generating software may be made while viewing changes to the image in real time.


The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics.[4] Its main function is to render a two-dimensional image given a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures and more.


The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. This stage may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input.
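As a rough illustration, a single application-stage frame might be organized as below. Every type and function here is a hypothetical placeholder rather than a real engine API; the point is only the ordering of CPU-side work before primitives are handed to the geometry stage.

```cpp
// Placeholder scene and input types; a real engine defines these.
struct Scene {};
struct Input {};

void pollInput(Input&) {}              // read controller/keyboard state
void stepAnimation(Scene&, double) {}  // advance model and texture animation
void resolveCollisions(Scene&) {}      // detect and respond to collisions
void cullInvisibleObjects(Scene&) {}   // speed-up technique: skip what the camera cannot see
void submitPrimitives(Scene&) {}       // emit points/lines/triangles to the pipeline

// One frame of application-stage work, dt = elapsed time in seconds.
void runFrame(Scene& scene, Input& input, double dt) {
    pollInput(input);
    stepAnimation(scene, dt);
    resolveCollisions(scene);
    cullInvisibleObjects(scene);
    submitPrimitives(scene);
}
```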


Collision detection is an example of an operation that would be performed in the application stage. Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller.
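For instance, a sphere-vs-sphere test is one of the simplest forms of collision detection. The sketch below detects the overlap and applies a deliberately simplified positional response (pushing the spheres apart along the line between their centers); none of these names come from a real physics library.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float length(Vec3 v)      { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

struct Sphere { Vec3 center; float radius; };

// Returns true (and separates the spheres) if they overlap.
bool resolveCollision(Sphere& a, Sphere& b) {
    Vec3  d       = sub(b.center, a.center);
    float dist    = length(d);
    float overlap = a.radius + b.radius - dist;
    if (dist == 0.0f || overlap <= 0.0f) return false;  // no contact (or degenerate)
    float half = 0.5f * overlap / dist;  // half the overlap, per unit length of d
    a.center = {a.center.x - d.x * half, a.center.y - d.y * half, a.center.z - d.z * half};
    b.center = {b.center.x + d.x * half, b.center.y + d.y * half, b.center.z + d.z * half};
    return true;  // the caller could also trigger force feedback here
}
```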


The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline.


The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs.[5] Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages.


Before the final model is shown on the output device, it is transformed through multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific operations that manipulate the shape or position of a point, line or shape: translation, rotation, scaling, and shearing.
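As a sketch, the four basic transformations can be written as 4x4 homogeneous matrices, stored row-major here and meant to multiply column vectors (x, y, z, 1). Mat4 is an illustrative helper type, not a library class.

```cpp
#include <cmath>

struct Mat4 { float m[4][4]; };

Mat4 translation(float tx, float ty, float tz) {
    return {{{1, 0, 0, tx}, {0, 1, 0, ty}, {0, 0, 1, tz}, {0, 0, 0, 1}}};
}
Mat4 scaling(float sx, float sy, float sz) {
    return {{{sx, 0, 0, 0}, {0, sy, 0, 0}, {0, 0, sz, 0}, {0, 0, 0, 1}}};
}
Mat4 rotationZ(float radians) {  // rotation about the z-axis
    float c = std::cos(radians), s = std::sin(radians);
    return {{{c, -s, 0, 0}, {s, c, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}};
}
Mat4 shearXbyY(float k) {        // shear: x gains k units per unit of y
    return {{{1, k, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}};
}
```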


In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis with the y-axis pointing upwards and the x-axis pointing to the right.
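A common way to build that view-space transform is a "look-at" matrix. The sketch below follows the standard right-handed construction (the same idea as the classic gluLookAt): after applying it, the camera sits at the origin looking down the negative z-axis, with +y up and +x to the right. Vec3 and Mat4 are small illustrative helpers.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

Mat4 lookAt(Vec3 eye, Vec3 target, Vec3 up) {
    Vec3 f = normalize(sub(target, eye));  // forward
    Vec3 r = normalize(cross(f, up));      // right
    Vec3 u = cross(r, f);                  // true up
    // Rotate the world into the camera's axes, then translate the eye to the origin.
    return {{{ r.x,  r.y,  r.z, -dot(r, eye)},
             { u.x,  u.y,  u.z, -dot(u, eye)},
             {-f.x, -f.y, -f.z,  dot(f, eye)},
             { 0,    0,    0,    1          }}};
}
```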


Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic (also called parallel) projection and perspective projection. The defining characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection exploits the fact that as the distance between the observer and a model increases, the model appears smaller. Essentially, perspective projection mimics human sight.
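For example, the standard OpenGL-style perspective projection matrix (the gluPerspective construction) can be sketched as follows; Mat4 is an illustrative row-major helper type for column vectors.

```cpp
#include <cmath>

struct Mat4 { float m[4][4]; };

// fovY in radians, aspect = width / height, 0 < zNear < zFar.
Mat4 perspective(float fovY, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(0.5f * fovY);  // cotangent of half the field of view
    Mat4 p = {};                             // all zeros
    p.m[0][0] = f / aspect;
    p.m[1][1] = f;
    p.m[2][2] = (zFar + zNear) / (zNear - zFar);
    p.m[2][3] = (2.0f * zFar * zNear) / (zNear - zFar);
    p.m[3][2] = -1.0f;  // sets w' = -z, so more distant points are divided
                        // by a larger w and appear smaller on screen
    return p;
}
```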


Clipping is the process of removing primitives that are outside of the view box in order to lighten the load on the rasterizer stage. Primitives entirely outside the box are discarded; primitives that straddle its boundary are cut at the boundary, and the surviving pieces are re-triangulated into new triangles that proceed to the next stage.
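A minimal sketch of the corresponding inside test, using the OpenGL clip-space convention: after projection, a vertex (x, y, z, w) lies inside the view volume when each of x, y, and z falls within [-w, w]. The full step that cuts straddling triangles and re-triangulates them is omitted here.

```cpp
struct Vec4 { float x, y, z, w; };

// True if the clip-space vertex lies inside the canonical view volume.
bool insideViewVolume(const Vec4& v) {
    return -v.w <= v.x && v.x <= v.w &&
           -v.w <= v.y && v.y <= v.w &&
           -v.w <= v.z && v.z <= v.w;
}
```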


It's all about me me me. This page is mostly about my professional interests, newest to oldest, with some hobby bits interspersed. I tend to put graphics-related links on Twitter and blog here. You can also check LinkedIn. Write me at er...@acm.org.

- Along with Elena Garces, I'm a Program (aka Papers) Chair for EGSR 2024: July 3-5, Imperial College, London, UK.
- I co-edited the book Ray Tracing Gems, released in March 2019. On that site we provide an unofficial version of the free PDF version of the book, one with the errata corrected. Ray Tracing Gems II is now also out, which I helped on in various minor ways (and wrote a short reference article for).
- I coauthored Real-Time Rendering, now in its fourth edition, released in 2018. The book's site has and points to all sorts of resources.
- The portal page sums up the real-time computer graphics resources I use the most. There are also pages on ray tracing and WebGL resources.
- We also maintain the 3D Object Intersection Page, a handy table of references to algorithms for object/object intersection, and the obscure and entertaining (IMO) Real Artifacts collection. Oh, and a free graphics books list, a recent graphics books list, and a recommended graphics books list.
- I help with the free & open-access Journal of Computer Graphics Techniques (JCGT), the successor to the Journal of Graphics Tools (JGT).
- Various repositories I maintain: the Graphics Gems, Journal of Graphics Tools, and Ray Tracing Gems code repos.
- I made a video tutorial for using NVIDIA's free instant-ngp NeRF software to make fly-throughs.
- If you're on an iPhone, download this Minecraft model and you'll see it in AR mode. This model is part of my two test scenes in the Universal Scene Description (USD) format. I made them to test viewers for features implemented, to help users understand the UsdPreviewSurface material, and to motivate clearer specification of some material elements.
- I co-chaired I3D 2020 and I3D 2021. Lots of the keynotes and talks are online.
- Here's a series of seven short talks I made back in 2019 on the basics of ray tracing, a total of about an hour of content.
- Sadly now in disrepair, back around 2013 I created and narrated Udacity's free course "Interactive 3D Graphics". Read more about it here. The site is starting to decay (the online exercise system has died), so this page can help you get going. If you don't like video instruction, you can instead download the 800+ pages of text making up the course - scroll down to the "Course Syllabus" section.
- I was a section editor for GPU Zen and ShaderX4.
- I like Minecraft, and wrote Mineways, a model exporter for the game. Twelve years on, it gets about 200 downloads a day.
- Andrew Glassner and I collaborated on T2Z, a little art project using Processing. I also like to keep track of basic resources for 3D printing.
- My various other public repositories are here, and include modifiable illusions (see one here) and a simple, flexible tool for checking LaTeX and text files for errors.
- I finally made a demo for the demoscene, which was fun to do. The music has serious sync problems, for some reason; toggling F12 on Windows sometimes helps. Fourth place, woo hoo, and about what it deserved. What's fun is that you can take control of the camera at any point, and change the music played.
- Other fun stuff: I now maintain a library box and micro-pantry map for the area where I live, north of Boston. More about this here, as well as a map of farmers markets northwest of Boston. I'm also making a Cottages (and more) of the Berkshires map, and an "interesting things in Somerville MA" map (which got a bit of news coverage here and here). I made some puzzles for Somerville Open Studios 2024.
- I have an ancient personal page with book and board game recommendations, plus wildflower, tree, and bird identification programs. These were kinda broken last I looked.
- Other me me me: SIGGRAPH and Wikipedia.
- People I'm not: the comedian/one-man-band/juggler/stilt-walker, the Nashville songwriter who penned "Moonshine Margaritas" and other tunes, the photographer, the rhythm guitarist with the pop punk band "Real Friends," and the karma-filled Unity developer eric5h5, to name a few.

Past Interests

- I helped start, and worked for many years on the editorial board of, the journal of graphics tools. This was a journal dedicated to presenting practical tools and techniques. The web site used to have useful code for some of the articles - sadly, CRC has let this repository founder. The code can still be found via the Way Back Machine.
- I created and rarely help maintain the ACM TOG Software Related Tools and Research Resources pages. Mentioned mostly so I can find the links.
- The now-slumbering Ray Tracing News contains articles about ray tracing. Nowadays I put my efforts into this page of ray tracing resources. I also once maintained The Realtime Raytracing Realm page of real-time ray tracing demos. Dated now, but some of the demos still astound me (256 byte ray tracer? Gotta love it).
- Decades ago I created the Standard Procedural Databases (SPD) software package, which occasionally still gets used for testing ray tracers. It was presented in an IEEE CG&A article in November 1987. Sphereflake, the most popular model of the set, now runs in real time with 48 million spheres.
- For I3D 2008 John Owens, Spike Hughes, and I came up with a pub quiz, meant to take about an hour for teams of about 6 people, 10 minutes per set of questions. Here is the answer key. Here's a photo of the scoreboard near the end.
- For some years I ran the Fantasy Graphics League - demented or silly, you decide...
- I edited the Ray Tracing News for many years, which grew out of coauthoring An Introduction to Ray Tracing from 1989 (now free to download).
- One last thing, from 1987: the Standard Procedural Databases, for testing ray tracers. Since you can scale up the number of primitives in a scene, they're still usable, e.g., here's a bit about Sphereflake.
- Kind of like me (I'm less blotchy - really - though am certainly noisier): Eric at SIGGRAPH 2014 by scanfab on Sketchfab
