I know you can’t take everything in a PR posting at face value, but the phrase “our invention of programmable shading” in NVIDIA’s announcement of their patent suits against Samsung and Qualcomm definitely rubbed me the wrong way. Maybe it’s something about personally having more of a claim to having invented programmable shading, at least on graphics hardware, than NVIDIA.

Since many of the accounts (I’m looking at you, Wikipedia) of the background of programmable shading seem to have been written by people who don’t even remember a time before GPUs, this seems like a good excuse for some historical recollections.

In the beginning…

The seeds of programmable shading were planted in the early 1980s by Turner Whitted and David Weimer.
They didn’t have a shading language, but did create the first deferred shading renderer by splitting a scan-line renderer into two parts. The first part rasterized all the parameters you’d need for shading (we’d call it a G-buffer now), and the second part could compute the shading from that G-buffer. The revolutionary idea was that you could make changes and re-shade the image without needing to redo the (then expensive) rasterization. Admittedly, there was no shading language, so you’d better be comfortable writing your shading code in C.

The real invention of the shading language

In 1984 (yes, 30 years ago), Rob Cook published a system called “Shade Trees” that let you write shading expressions that it parsed. I’ve seen some misinterpretation (maybe because of the name?) that this was a graphical node/network interface for creating shaders. It wasn’t. That was Abram and Whitted’s Building Block Shaders in 1990. Shade Trees was more like writing a single expression in a C-like language, without loops or branches.
It also introduced the shader types of surface, light, atmosphere, etc. that still exist in RenderMan today. In 1985, Ken Perlin’s Image Synthesizer expanded this into a full language, with functions, loops, and branching. This is the same paper that introduced his noise function (talk about packing a lot into one paper!). Over the next few years, Pixar built a shading language based on these ideas into RenderMan. This was published in The RenderMan Companion by Steve Upstill in 1989 and in Hanrahan and Lawson’s 1990 SIGGRAPH paper.

Shading comes to graphics hardware

In 1990, I started as a new grad student at the University of North Carolina. I was working on the Pixel-Planes 5 project, which, among other things, featured a massively parallel SIMD array of pixel processors. Each processor had only a bit-serial ALU, so you could make your data any size you wanted, not just multiples of bytes (odd-width floats? No problem!). This was, of course, important to give you any chance of having everything (data and computation) fit into just 208 bits of memory per pixel. I was writing shading code for it inside the driver/graphics library in something that basically looked like assembly language. By 1992, a group of others at UNC created an assembly-language interface that could be programmed directly without having to change the guts of the graphics library. This is really the first example of end-user programmable shading in graphics hardware. Unfortunately, the limitations of the underlying system made it really, really hard to do anything complex, so it didn’t end up being used outside the Pixel-Planes team.

Meanwhile, we were making plans for the next machine. This ended up being PixelFlow, largely an implementation of ideas Steve Molnar had just finished in his dissertation, with shading accommodations for what I was planning for my dissertation. I had this crazy idea that you ought to be able to compile high-level shading code for graphics hardware, and that if you abstracted away enough of the hardware details and relied on the compiler to manage the mapping between shading code (what I want to do) and implementation (how to make it happen), you’d get something an actual non-hardware person would be able to use. It took a while, and a bunch of people, to make it work, but the result, actual high-level programmable shading on actual graphics hardware, was published in 1998. I followed the RenderMan model of surface/light shaders, rather than the vertex/pixel division that became popular in later hardware. I still think the “what I want to do” choice is better than the “how I think you should do it” one, though the latter does have advantages when you are working near the limits of what the hardware can do. The SIGGRAPH paper described just the language and the surface and light shaders, along with a few of the implementation/translation problems I had to solve to make it work. There’s more in my dissertation itself, including shading stages for transformation and for primitives. The latter was a combination of what you can do in geometry shaders with a shading interface for rasterization (much like pixel shader/pixel discard approaches for billboards or some non-linear distortion correction rendering).

Note that both Pixel-Planes 5 and PixelFlow were SIMD engines, processing a bunch of pixels at once, so they actually had quite a bit in common with that aspect of current GPUs. Much more so than some of the intermediate steps that the actual GPUs went through before they got to the same place.

OK, but how about on commercial hardware?

PixelFlow was developed in partnership with HP, and they did demo it as a product at SIGGRAPH, but it was cancelled before any shipped. After I graduated in 1998, I went to SGI to work on adding shading to a commercial product. At first, we were working on a true RenderMan implementation on a new hardware product (that never shipped). That hardware would have done just one or two operations per pass, and relied on streaming and prefetching of data per pixel to hide the framebuffer accesses. After it was cancelled, we switched to something that would use a similar approach on existing hardware. The language ended up being very assembly-like, with one operation per statement, but we did actually ship that and had at least a few external customers who used it. Both shading systems were described in a 2000 SIGGRAPH paper.

In 2000, I co-taught a class at Stanford with Bill Mark. That helped spur their work in hardware shading compilers. Someone who was there would have to say whether they were doing anything before that class, though I would not be surprised if they were already looking at it, given Pat Hanrahan’s involvement in the original RenderMan. In any case, in 2001 they published the RTSL language and compiler, which could compile a high-level shading language to the assembly-language vertex shaders NVIDIA had introduced, to the NVIDIA register combiner insanity, and to multiple rendering passes in the way the SGI stuff worked, if necessary. RTSL was also the origin of the vertex/pixel division that still exists today.

And on GPUs

And now we leave the personal reminiscing part of this post. I did organize a series of SIGGRAPH courses on programmable shading in graphics hardware starting in 2000, but since I wasn’t actually at NVIDIA, ATI, 3DLabs, or Microsoft, I don’t know the details of when some of these efforts started or what else might have been going on behind the scenes. Around 1999, NVIDIA’s register combiners were probably the first step from fixed-function to true programmability.
Each of the two (later eight) combiner stages could do two vector multiplies and add or dot product the results. You just had to make separate API calls to set each of the four inputs, each of the three outputs, and the function. In the SGI stuff, we were using blend, color transform, and multi-texture operations as ALU ops for the compiler to target. The Stanford RTSL compiler could do the same type of compilation with the register combiners as well. Without something like RTSL, it was pretty ugly, and definitely wins the award for most lines of code per ALU operation.

Better were the assembly-level vertex programs in the GeForce 3, around 2001. They didn’t allow branching or looping, but executed in a deep instruction-per-stage pipeline. Among other things, that meant that as long as your program fit, doing one instruction was exactly the same cost as doing the maximum the hardware supported. Assembly-level fragment programs came in around 2002. Around 2002, there was an explosion of high-level shading languages targeting GPUs, including NVIDIA’s Cg, the closely related DirectX HLSL, and the OpenGL Shading Language.

This list of dates is pretty NVIDIA-centric, and they were definitely pushing the feature envelope on many of these elements. On the other hand, most of these were also connected to DirectX versions requiring similar features, so soon everyone had some kind of programmability. NVIDIA’s Cg tutorial puts the first generation of programmable GPUs as appearing around 2001. ATI and 3DLabs also started to introduce programmable shading in a similar time period (some of which was represented in one of my SIGGRAPH courses). As a particular example of multiple companies all working toward similar goals, NVIDIA’s work, especially on Cg, had a huge influence on DirectX. Meanwhile, 3DLabs was introducing their own programmable hardware that I believe was a bit more flexible, and they had a big influence on the OpenGL Shading Language.
As a result, though they were very similar in many ways, especially in the early versions, there was a significant difference in philosophy between exposing hardware limitations in Direct3D vs. generality (even when slow on a particular GPU) in OpenGL. In hindsight, though generality makes sense now, on that original generation of GPUs it led too often to unexpected performance cliffs, which certainly hurt OpenGL’s reputation among game developers.

References

Gregory D. Abram and Turner Whitted. 1990. Building block shaders. In Proceedings of the 17th annual conference on computer graphics and interactive techniques (SIGGRAPH ’90). ACM, New York, NY, USA.

Robert L. Cook. 1984. Shade trees. In Proceedings of the 11th annual conference on computer graphics and interactive techniques (SIGGRAPH ’84). ACM, New York, NY, USA.