« on: April 10, 2015, 04:06:15 PM »
According to Wardell, Sony’s current API is much lower-level than Mantle and even Vulkan, but he argues they should look into adding Vulkan support for the console, as it would significantly reduce developer overhead for cross-platform development.
“What I was referencing at the time was Vulkan. We’re part of the Khronos Group and now it depends who you talk to at Sony and this gets in to a debate. Sony has a very low-level API already for the PlayStation 4. The problem I have with it is that if you want to make use for it you’re writing some very specific code just for the PlayStation 4. And in the real world people don’t do that right. I write code generally to be as cross-platform as I can.”
The rasterization rendering technique is surely the most commonly used technique for rendering images of 3D scenes, and yet it is probably the least understood and least properly documented technique of all (especially compared to ray tracing).
Why this is so depends on several factors. First, it's a technique from the past. We don't mean to say the technique is obsolete, quite the contrary, but most of the methods used to produce an image with this algorithm were developed somewhere between the 1960s and the early 1980s. In the world of computer graphics, this is the Middle Ages, and the knowledge contained in the papers in which these techniques were developed tends to be lost. Rasterization is also the technique used by GPUs to produce 3D graphics. Hardware technology has changed a lot since GPUs were first invented, but the fundamental techniques they implement to produce images haven't changed much since the early 1980s (the hardware changed, but the underlying pipeline by which an image is formed hasn't). In fact, these techniques are so fundamental, and consequently so deeply integrated within the hardware architecture, that no one pays attention to them anymore (only people designing GPUs can tell what they really do, and that is far from a trivial task; but designing a GPU and understanding the principle of the rasterization algorithm are two different things; thus, explaining the latter should actually not be that hard!).
Regardless, we thought it was urgent and important to correct this situation. With this lesson, we believe we are the first resource to provide a clear and complete picture of the algorithm, as well as a full practical implementation of the technique. If you found in this lesson the answers you have been desperately looking for elsewhere, please consider making a donation! This work is provided to you for free and requires many hours of hard work.
CUDA 7 adds C++11 feature support to nvcc, the CUDA C++ compiler. This means that you can use C++11 features not only in your host code compiled with nvcc, but also in device code. In my post “The Power of C++11 in CUDA 7” I covered some of the major new features of C++11, such as lambda functions, range-based for loops, and automatic type deduction (auto). In this post, I’ll cover variadic templates.
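As a minimal sketch of the variadic-template pattern the post covers (plain C++11 here; the function names are our own for illustration), a recursive variadic `sum` peels off one argument per instantiation. Under nvcc, the same pattern can be used in device code by additionally marking the functions `__host__ __device__`:

```cpp
// Base case: a single argument sums to itself.
template <typename T>
T sum(T v) { return v; }

// Recursive case: peel off the first argument, recurse on the rest.
// With nvcc, adding __host__ __device__ here makes this callable
// from kernels as well as host code.
template <typename T, typename... Args>
T sum(T first, Args... rest) { return first + sum(rest...); }
```

Each call site expands at compile time into a chain of plain additions, so there is no runtime overhead for the variadic machinery.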
In parallel to Khronos defining OpenGL ES 3.0, there was an effort to develop an industry-leading compression format that provided developers with finer-grained control. This resulted in the mid-2012 launch of the ASTC texture compression format. The key to ASTC is that while it uses a fixed 128 bits per block, each texture can select a different block size to fit in those 128 bits, unlike the fixed 4x4 blocks of prior formats. Leveraging a large variety of square and non-square block sizes, ASTC delivers a wide range of compression ratios, scaling from 8bpp down to just under 1bpp.
Hardware supporting ASTC has achieved sufficient market share that developers should seriously consider how to leverage it in their titles: to improve quality, decrease storage size, or both. This is especially true for titles that already require a level of graphics hardware on which ASTC support is a given.
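The bitrate arithmetic behind those figures is straightforward: with every block fixed at 128 bits, the effective bits-per-pixel rate is just 128 divided by the block footprint. A quick sketch:

```cpp
// ASTC stores every block in exactly 128 bits; the effective
// bits-per-pixel rate follows directly from the block footprint.
double astc_bpp(int block_w, int block_h) {
    return 128.0 / (block_w * block_h);
}
// 4x4   -> 8.0 bpp  (same rate as the densest 4x4 prior formats)
// 8x8   -> 2.0 bpp
// 12x12 -> ~0.89 bpp (the "just under 1bpp" end of the range)
```

Picking a larger block trades texture quality for a lower bitrate, which is exactly the finer-grained control the format was designed to give developers.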
EMERYVILLE, CA – (March 23rd, 2015) Pixar Animation Studios today released its Academy Award®-winning RenderMan software for non-commercial use. Free Non-Commercial RenderMan can be used for research, education, evaluation, plug-in development, and any personal projects that do not generate commercial profits. Free Non-Commercial RenderMan is also fully featured, without watermark, time limits, or other user limitations.
Featuring Pixar’s new RIS technology, RenderMan delivers extremely fast global illumination and interactive shading and lighting for artists. Currently in use at many studios, RIS is fast, robust, and changing how movies are made. Today, exactly the same technology is available to all users of Non-Commercial RenderMan.
In conjunction with the release, Pixar has also launched a new RenderMan Community site where users can exchange knowledge and resources, showcase their own work, share assets such as shaders and scripts, and learn about RenderMan from tutorials created by the best in the community. The RenderMan Community site is an example of Pixar’s ongoing commitment to making the film industry’s finest rendering tools accessible to anyone working in visual effects, animation, and visualization.
“The latest release of RenderMan is a technological reinvention. It’s the result of focused research and development at both Pixar and Disney, and these advancements are now freely available to the visual effects and animation community through Non-Commercial RenderMan,” said Dr. Ed Catmull, President of Pixar and Walt Disney Animation Studios, and one of the original architects of RenderMan. “We look forward to seeing what our users create.”
"We’ve recently begun to see some final images for ‘Finding Dory.’ It’s our first feature using RenderMan’s RIS technology… and it’s just unbelievable,” said Andrew Stanton, Director, “Finding Dory.” “RIS has opened so many new creative possibilities for us; we’re creating images that were previously impossible for us to achieve. It really is looking spectacular.”
Those interested in exploring Free Non-Commercial RenderMan are invited to go to the RenderMan website and download a copy.
Availability & Compatibility
RenderMan is compatible with the following 64-bit operating systems: Mac OS 10.9, 10.8 and 10.7, Windows 8 and 7, and Linux glibc 2.12 or higher and gcc 4.4.5 and higher. RenderMan is compatible with versions 2013.5, 2014, and 2015 of Autodesk’s Maya, and with versions 1.5, 1.6, and 2.0 of The Foundry’s KATANA. RenderMan is available commercially as individual licenses with volume discounts or through custom site licensing packages tailored for each customer. In addition, Pixar’s annual maintenance program provides access to ongoing support and free upgrades. For more information please visit www.pixar.com or contact email@example.com.
Microsoft Corp is making its biggest push into the heavily pirated Chinese consumer computing market this summer by offering free upgrades to Windows 10 to all Windows users, regardless of whether they are running genuine copies of the software.
The move is an unprecedented attempt by Microsoft to get legitimate versions of its software onto machines of the hundreds of millions of Windows users in China. Recent studies show that three-quarters of all PC software is not properly licensed there.
Unlike the competition, Intel’s shader hardware has a full set of registers dedicated to each hardware thread. The red and green teams each lose thread occupancy if a shader has a lot of register pressure, but not the blue team: they just exploit their ridiculous process advantage, pack the little suckers in, and then stop worrying about it. Our shader has quite a bit of register pressure in it, but that doesn’t hurt Intel’s concurrency one bit. Their enormous register file functions as a big on-chip buffer.
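The occupancy effect described above can be sketched as arithmetic. The numbers below are purely hypothetical, for illustration only: on an architecture where all resident threads share one register file, the register demand of the shader caps how many threads can be resident at once.

```cpp
#include <algorithm>

// Hypothetical illustration: if a shared register file of
// `regfile_size` registers must be divided among resident threads,
// heavier per-thread register demand means fewer resident threads.
int shared_file_occupancy(int regfile_size, int regs_per_thread,
                          int hw_max_threads) {
    return std::min(hw_max_threads, regfile_size / regs_per_thread);
}
// In the dedicated-per-thread-register model described above,
// regs_per_thread never eats into the resident thread count.
```

Doubling the shader's register footprint halves the resident thread count in the shared model, which is exactly the occupancy loss the shared-file designs suffer and the dedicated-file design avoids.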
Even though it is possible to implement geometry shaders efficiently, the fact that two of the three vendors don’t do it that way means that the GS is not a practical choice for production use. It should be avoided wherever possible.
It is flawed, in that it injects a serialized, high bandwidth operation into an already serialized part of the pipeline. It requires a lot of per-thread storage. It is clearly a very unnatural fit for wide SIMD machines. However, this little exercise has made me wonder if it can’t be redeemed by spreading a single instance across multiple warps/wavefronts, squeezing ILP out of a DLP architecture. Perhaps I’ll try and write a compute shader that does this.
During today's Epic Games event at the Game Developers Conference 2015, NVIDIA co-founder Jen-Hsun Huang rushed the stage like a professional wrestling hero to announce the Titan X, NVIDIA's latest GPU. Huang claims it is the most powerful GPU on the planet. With a 12GB frame buffer and 8 billion transistors, it is, on paper, a significant step past NVIDIA's current hardware. The original Titan was previously NVIDIA's most powerful card.
As there is no compatibility between OpenGL and the Vulkan API, many large GL code bases will not switch immediately. But older applications will still want to access new hardware features, so I believe that OpenGL is not dead yet, but will in fact live on for some more versions. I’m confident that we will see new hardware features exposed in OpenGL 5.0 (maybe at Siggraph this year?). Just as NVIDIA, AMD and Intel didn’t drop the Compatibility profile even in the latest OpenGL versions, I don’t expect them to drop support for new OpenGL versions any time soon. I guess it’s actually more likely that we will see quite a few OpenGL extensions which expose more Vulkan-like features in OpenGL to help with the transition (e.g. the Vulkan shader “binaries”, SPIR-V). Limited compatibility in the other direction, at least for shaders, is planned via a GLSL-to-SPIR-V compiler.
Imagination is a promoting member of the Khronos Group and has been working on developing a proof-of-concept driver for Vulkan for our PowerVR Rogue GPUs. Our PowerVR demo team has also spent the last two months porting one of our new OpenGL ES 3.0 demos to the new API and today we are able to show you a snapshot of our work.
For example, there are no glUniform*() equivalent entry points in Vulkan; instead, writing to GPU memory is the only way to pass data to shaders.
Command buffers can be created on a different thread to the thread they are submitted on. This means rendering commands could be created on all cores of a CPU.
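As a concept sketch of that threading model (no real Vulkan entry points here, just standard C++ standing in for command-buffer recording): each worker thread records into its own buffer, and the main thread then submits them in order.

```cpp
#include <string>
#include <thread>
#include <vector>

// Concept sketch only: each worker thread records commands into its
// own "command buffer"; no cross-thread sharing is needed during
// recording, so all CPU cores can record in parallel.
std::vector<std::vector<std::string>> record_parallel(int thread_count) {
    std::vector<std::vector<std::string>> buffers(thread_count);
    std::vector<std::thread> workers;
    for (int t = 0; t < thread_count; ++t)
        workers.emplace_back([&buffers, t] {
            buffers[t].push_back("draw from thread " + std::to_string(t));
        });
    for (auto& w : workers) w.join();
    return buffers;  // the calling thread submits these in order
}
```

The key point is that recording and submission are decoupled: only the final submission needs to happen on a single thread.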
When you call glTexStorage2D() in OpenGL, the driver has to allocate memory for a two-dimensional or one-dimensional array texture. The function and the memory allocation process represent a black box.
In Vulkan however, the memory allocation is done by the application. This means that the application knows more about what type of memory it is using and more importantly how much memory it is using, which should be useful for applications that are memory-bound.
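Putting those two points together, a minimal sketch of what replaces glUniform*(): the application writes shader data directly into memory it allocated and mapped itself. The struct and function names below are our own, and the byte buffer merely stands in for a mapped GPU allocation (in Vulkan that pointer would come from the memory-mapping API rather than from the application heap):

```cpp
#include <cstring>

// Hypothetical uniform layout for illustration.
struct Uniforms {
    float mvp[16];  // model-view-projection matrix
    float tint[4];  // color tint
};

// Updating "uniforms" is just a memcpy into memory the application
// allocated and mapped itself -- there is no glUniform*() call.
void write_uniforms(void* mapped, const Uniforms& u) {
    std::memcpy(mapped, &u, sizeof(Uniforms));
}

// Helper for the example: write a tint value and read it back,
// as a shader reading the buffer would see it.
float roundtrip_tint0(float v) {
    Uniforms u{};
    u.tint[0] = v;
    unsigned char mapped[sizeof(Uniforms)];  // stand-in for mapped GPU memory
    write_uniforms(mapped, u);
    Uniforms out;
    std::memcpy(&out, mapped, sizeof out);
    return out.tint[0];
}
```

Because the application owns the allocation, it also knows exactly how much memory is in use, which is the accounting benefit described above.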
SPIR-V is a new platform-independent intermediate language. It is a self-contained, fully specified, binary format
for representing graphical-shader stages and compute kernels for multiple APIs. Physically, it is a stream of 32-bit words. Logically, it is a header and a linear stream of instructions. These encode, first, a set of annotations and decorations, and second a collection of functions. Each function encodes a control-flow graph (CFG) of basic blocks, with additional instructions to preserve source-code structured flow control. Load/store instructions are used to access declared variables, which includes all input/output (IO). Intermediate results bypassing load/store use single static-assignment (SSA) representation. Data objects are represented logically, with hierarchical type information: There is no flattening of aggregates or assignment to physical register banks, etc. Selectable addressing models establish whether general pointers may be used, or if memory access is purely logical.
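The physical layout described above is easy to check in code. A SPIR-V module opens with a five-word header (magic number, version, generator, id bound, schema), and the magic number 0x07230203 identifies the stream; the struct and function names below are our own:

```cpp
#include <cstddef>
#include <cstdint>

// The five 32-bit words that open every SPIR-V module.
struct SpirvHeader {
    uint32_t magic;      // 0x07230203 identifies a SPIR-V stream
    uint32_t version;
    uint32_t generator;  // tool that produced the module
    uint32_t bound;      // upper bound on result ids used
    uint32_t schema;
};

// Cheap sanity check before handing a word stream to a real parser.
bool looks_like_spirv(const uint32_t* words, size_t count) {
    const uint32_t kSpirvMagic = 0x07230203u;
    return count >= 5 && words[0] == kSpirvMagic;
}
```

Everything after the header is the linear instruction stream the text describes: annotations and decorations first, then the functions with their basic blocks.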
AMD is a company that fundamentally believes in technologies unfettered by restrictive contracts, licensing fees, vendor lock-ins or other arbitrary hurdles to solving the big challenges in graphics and computing. Mantle was destined to follow suit, and it does so today as we proudly announce that the 450-page programming guide and API reference for Mantle will be available this month (March, 2015) at www.amd.com/mantle.
Mantle’s definition of “open” must widen. It already has, in fact. This vital effort has replaced our intention to release a public Mantle SDK, and you will learn the facts on Thursday, March 5 at GDC 2015.
Unreal Engine 4 is now available to everyone for free, and all future updates will be free!
You can download the engine and use it for everything from game development, education, architecture, and visualization to VR, film and animation. When you ship a game or application, you pay a 5% royalty on gross revenue after the first $3,000 per product, per quarter. It’s a simple arrangement in which we succeed only when you succeed.
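The royalty terms quoted above work out to simple per-quarter arithmetic, sketched here:

```cpp
// 5% royalty on gross revenue beyond the first $3,000,
// assessed per product, per quarter (terms as quoted above).
double quarterly_royalty(double gross_revenue) {
    double royalty_base = gross_revenue - 3000.0;
    return royalty_base > 0.0 ? royalty_base * 0.05 : 0.0;
}
// $10,000 gross in a quarter -> 5% of $7,000 = $350
// $2,000 gross in a quarter  -> under the threshold, $0 owed
```

So a product earning nothing, or under $3,000 in a quarter, owes nothing, which is the "we succeed only when you succeed" arrangement.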
This is the complete technology we use at Epic when building our own games. It scales from indie projects to high-end blockbusters; it supports all the major platforms; and it includes 100% of the C++ source code. Our goal is to give you absolutely everything, so that you can do anything and be in control of your schedule and your destiny. Whatever you require to build and ship your game, you can find it in UE4, source it in the Marketplace, or build it yourself – and then share it with others.
Companies from Facebook Inc. to Sony Corp. and Google Inc. have spent billions of dollars investing in virtual-reality technology.
Now they have to turn it into a business. To get consumers to buy their devices, Sony, Facebook’s Oculus VR, Valve Corp. and Razer USA Ltd. need games to play and videos to watch. At this week’s Game Developers Conference in San Francisco, they’ll be trying to convince software makers there’s a market to be had.
“The market for virtual reality is very hot right now, and a lot of highly visible developers, designers and investors are placing bets,” van Dreunen said. “However, in the absence of solid delivery dates from the leading companies in the space, it is clear that virtual reality is not yet ready for prime time.”